arXiv:2409.03234v1 (http://arxiv.org/abs/2409.03234v1), published 2024-09-05
Title: The star formation histories, star formation efficiencies and ionizing sources of ATLASGAL clumps with HII regions
Authors: J. W. Zhou, Sami Dib, Pavel Kroupa
Primary category: astro-ph.GA; categories: astro-ph.GA, astro-ph.SR
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
[email protected]
Max Planck Institute für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
[email protected]
Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Universität Bonn, Nussallee 14–16, 53115 Bonn, Germany
[email protected]
Charles University in Prague, Faculty of Mathematics and Physics, Astronomical Institute, V Holešovičkách 2, CZ-180 00 Praha 8, Czech Republic
1226 ATLASGAL clumps with HII regions (HII-clumps) were matched with radio sources in the CORNISH-North/South surveys, and 392 of them have corresponding radio sources. We determined the stellar luminosity L_*, T84 according to the Lyman continuum flux N_Ly. When the bolometric luminosity of HII-clumps is less than log_10(L_ bol, obs/L_⊙) ≈ 3.7, corresponding to a clump mass log_10(M_ cl/M_⊙) ≈ 2.55, the values of L_*, T84 derived from N_Ly overestimate the actual stellar luminosities, because the accretion onto the protostars contributes significantly to the radio emission. After subtracting the accretion luminosity from L_*, T84, we obtained reasonable estimates of the stellar luminosity. Using the 0.5 Myr isochrone, we calculated the stellar masses according to the stellar luminosities, and found that they roughly follow the m_ max-M_ ecl relation of embedded clusters, consistent with the ionizing sources representing the most massive stars in the embedded clusters of HII-clumps. We also studied the contribution of possible flaring activity to the observed stellar luminosity and found that it can be neglected. We further studied the change of SFE with the clump mass. According to the derived mass of the most massive star in each HII-clump, using the theoretical m_ max-M_ ecl relation, we calculated the mass of the corresponding embedded cluster and then the SFE of the clump. The SFE decreases with increasing clump mass, with a median value of ≈0.3. We also independently derived the SFE for each HII-clump based on the model developed in our previous work. The SFEs of HII-clumps derived from the observation and the model are in good agreement.
Concerning the star formation histories of the ATLASGAL clumps, low-mass clumps may reach the peak of star formation earlier than high-mass clumps, consistent with the shorter free-fall time of low-mass clumps.
Star formation properties of ATLASGAL clumps
J. W. Zhou, Sami Dib, Pavel Kroupa
The star formation histories, star formation efficiencies and ionizing sources of ATLASGAL clumps with HII regions
J. W. Zhou<ref>
Sami Dib <ref>
Pavel Kroupa<ref>
<ref>
Accepted XXX. Received YYY; in original form ZZZ
§ INTRODUCTION
Understanding the contribution of stars in different mass ranges to the total energetics and dynamics of star clusters is crucial.
A majority, if not all, of the stars we observe were born within embedded clusters <cit.>, which adds significant intricacy to pinpointing and analyzing individual protostellar objects.
Unlike their lower-mass counterparts, massive stars evolve towards the main sequence while still deeply embedded within their parental clumps. This renders their initial stages invisible, even when observed using mid-infrared wavelengths. Nowadays,
our comprehension of massive star and cluster formation has significantly advanced, owing to the combination of Galactic plane surveys and high-angular resolution images obtained through submillimeter facilities <cit.>. Surveys such as HiGAL <cit.> and ATLASGAL <cit.> have played pivotal roles by providing unbiased compilations of dense clumps that track the early phases of massive star and cluster formation. Explorations employing representative clump samples can provide valuable insights into the efficiency of converting molecular gas into stellar clusters <cit.>.
In <cit.>, 5007 ATLASGAL clumps have been categorized into four distinct evolutionary stages, with the most advanced stage being referred to as HII-clumps, i.e. ATLASGAL clumps with HII regions.
In <cit.> (paper I), we synthesized the embedded clusters within HII-clumps under the assumption that the stellar initial mass function (IMF) follows a universal optimal distribution function rather than a probability density <cit.>.
We utilized the 0.1 Myr isochrone to estimate the bolometric luminosity of individual stars within an embedded cluster, augmenting this with the accretion luminosity of each star in the cluster. The cumulative bolometric luminosity of the synthetic embedded clusters aligns closely with the observed bolometric luminosity of HII-clumps, validating the effectiveness of the method. As a follow-up to paper I, in this work we focus on the ionizing sources in HII-clumps. They may represent the most massive stars in the embedded clusters. We explore their physical properties by comparing the observations with numerical simulations.
§ SAMPLE
The physical parameters of 1246 HII-clumps have been calculated and listed in <cit.>. 1226 HII-clumps with mass (M_ cl) and bolometric luminosity (L_ bol, obs) measurements were matched with radio sources in the catalogs of the
CORNISH-North <cit.>
and the CORNISH-South <cit.> surveys. The separation between the central coordinates of a radio source and an HII-clump was required to be smaller than the radius of the clump.
Generally, one HII-clump matches with one radio source. If one HII-clump includes more than one radio source, we only consider the radio source with the strongest radio emission, because we are only interested in the most massive star in a HII-clump. Finally, the numbers of radio sources matched with HII-clumps are 244 and 148 in the CORNISH-South and the CORNISH-North survey catalogs, respectively.
Fig. <ref>(a) displays the samples of HII-clumps in the L_ bol, obs-M_ cl diagram. The samples are mainly concentrated at the upper end, similar to Fig. 5 in <cit.>, where the HII-clumps were divided into two populations, i.e. radio-loud and radio-quiet. Our samples are radio-loud and more luminous than the radio-quiet HII-clumps, indicating that they host high-mass stars.
In Fig. <ref>(b), the radii of radio sources are systematically smaller than the radii of the HII-clumps. These radio sources should be excited by the most massive stars in the embedded clusters of HII-clumps. From the radio emission, we can infer some physical parameters of the massive stars.
§ RESULTS AND DISCUSSION
§.§ Lyman continuum flux
As discussed in <cit.>, assuming optically thin radio sources at 5 GHz (CORNISH-North) and 5.5 GHz (CORNISH-South) is justified for our samples. We calculated the physical parameters of the matched radio sources following the formalism of <cit.> and <cit.> using the equations collected in <cit.>.
The emission measure (EM) and the electron density (n_e) were calculated using:
[EM/cm^-6 pc] = 1.7×10^7 [S_ν/Jy] [ν/GHz]^0.1 [T_e/K]^0.35 [θ_s/arcsec]^-2,
and
[n_e/cm^-3] = 2.3×10^6 [S_ν/Jy]^0.5 [ν/GHz]^0.05 [T_e/K]^0.175 [d/pc]^-0.5 [θ_s/arcsec]^-1.5,
where S_ν is the integrated radio flux density, T_e is the electron temperature assumed to be 10^4 K, θ_s is the angular diameter of the source, d is the distance, and ν is the observing frequency.
S_ν and θ_s are assigned the values in the catalogs of <cit.> and <cit.>.
The values of d are the distances of the corresponding HII-clumps listed in <cit.>.
The number of Lyman-continuum photons per second (N_Ly; hereafter Lyman continuum flux) is calculated from the flux density and distance as:
[N_Ly/photons s^-1] = 8.9×10^40 [S_ν/Jy] [ν/GHz]^0.1 [T_e/10^4K]^-0.45 [d/pc]^2.
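Purely as an illustration, the following minimal Python sketch evaluates these three quantities for a single radio source; the input values are hypothetical, and the coefficients are those of the equations above.

def hii_radio_parameters(S_nu_Jy, nu_GHz, theta_s_arcsec, d_pc, T_e_K=1.0e4):
    """Optically thin free-free estimates of EM, n_e and N_Ly (equations above)."""
    EM = 1.7e7 * S_nu_Jy * nu_GHz**0.1 * T_e_K**0.35 * theta_s_arcsec**-2.0            # cm^-6 pc
    n_e = (2.3e6 * S_nu_Jy**0.5 * nu_GHz**0.05 * T_e_K**0.175
           * d_pc**-0.5 * theta_s_arcsec**-1.5)                                        # cm^-3
    N_Ly = 8.9e40 * S_nu_Jy * nu_GHz**0.1 * (T_e_K / 1.0e4)**-0.45 * d_pc**2.0         # photons s^-1
    return EM, n_e, N_Ly

# hypothetical source: 0.1 Jy at 5 GHz, 3 arcsec across, at 4 kpc
print(hii_radio_parameters(0.1, 5.0, 3.0, 4000.0))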
Table. 1 of <cit.> gives the relationship between the stellar parameters and Lyman continuum flux.
For stars with masses above 60 M_⊙, they made the assumption that Lyman continuum flux per unit area is constant, which is likely to slightly underestimate the Lyman flux of very massive stars.
In Fig. <ref>(a), we compared the data in Table. 1 of <cit.> with Table. 1 of <cit.>.
It seems that Table. 1 of <cit.> performs better at the high-N_Ly, high-stellar-luminosity (L_*) end. The relationship between
stellar mass and luminosity in <cit.> was taken from <cit.>, which is consistent with the isochrones of 0.5 Myr or 1 Myr from the MESA Isochrones and Stellar Tracks (MIST) project <cit.>, as shown in Fig. <ref>(b).
In this work, we determined the stellar luminosity L_*, T84 corresponding to the calculated N_Ly by interpolating the
stellar luminosity given as a function of the Lyman continuum flux in Table. 1 of <cit.>, shown in Fig. <ref>(a).
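In practice this step is a one-dimensional interpolation in log space. The short sketch below illustrates the procedure; the tabulated values are placeholders standing in for Table. 1 of <cit.>, not the published numbers.

import numpy as np

# placeholder grid (log N_Ly [s^-1], log L_* [L_sun]); illustrative values only
log_NLy_grid = np.array([43.0, 44.5, 46.0, 47.5, 48.5, 49.5])
log_Lstar_grid = np.array([2.5, 3.3, 4.1, 4.9, 5.5, 6.1])

def lstar_from_nly(N_Ly):
    """Interpolate log L_* linearly in log N_Ly, as done for L_*, T84 in the text."""
    return 10.0 ** np.interp(np.log10(N_Ly), log_NLy_grid, log_Lstar_grid)

print(lstar_from_nly(1.0e47))   # L_* in L_sun for a hypothetical N_Ly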
§.§ The turning points
The stellar luminosity L_*, T84 of the brightest star should be smaller than the bolometric luminosity L_ bol, obs of the corresponding HII-clump.
However, in Fig. <ref>(a),
for low-luminosity HII-clumps, L_*, T84 is significantly larger than L_ bol, obs, which is abnormal.
In Eq. <ref>, N_Ly∝ S_ν, and in Fig. <ref>, N_Ly∝ L_*. If the bolometric luminosity L_ bol, obs of an HII-clump is mainly from the stars, L_ bol, obs∝ L_*.
Finally, L_ bol, obs∝ S_ν. However, in Fig. <ref>(b), there is a turning point around log_10(L_ bol, obs/L_⊙) ≈ 3.7. For HII-clumps with luminosity lower than this value, there is no clear correlation between L_ bol, obs and radio flux density S_ν.
The turning point log_10(L_ bol, obs/L_⊙) ≈ 3.7 corresponds to the HII-clump mass log_10(M_ cl/M_⊙) ≈ 2.55 in Fig. <ref>(a).
Using the 0.5 Myr isochrone, we estimated the stellar mass m_*, iso according to the stellar luminosity L_*, T84.
The ionizing sources represent the most massive stars in HII-clumps, i.e. m_*, iso≈ m_ max.
Considering the m_ max-M_ ecl
relation <cit.>, a correlation between the embedded star cluster mass M_ ecl and the mass of the most massive star m_ max,
there should be a correlation between
m_*, iso and
the mass of the corresponding embedded cluster. The clump mass M_ cl and its embedded cluster mass M_ ecl satisfy M_ ecl = SFE × M_ cl, where SFE is the final star formation efficiency of the clump, which we assumed to be SFE=0.33 <cit.>; see Sec. <ref> for more discussion.
For 100 embedded clusters with age < 5 Myr, the table in <cit.> gives the values of their m_ max and M_ ecl. This sample fits the m_ max-M_ ecl relation well.
In Fig. <ref>(c), we compared our sample with the sample from <cit.>. When log_10(M_ ecl/M_⊙) > 2.1, the m_*, iso-M_ ecl relation follows the m_ max-M_ ecl relation.
Assuming SFE=0.33, from the turning point
log_10(M_ ecl/M_⊙) ≈ 2.1, we have
log_10(M_ cl/M_⊙) ≈ 2.55, consistent with the findings above.
§.§ Synthetic embedded clusters
We adopted the optimal sampling algorithm <cit.> to produce a population of stars for an embedded cluster of mass M_ ecl using the publicly available GalIMF code. Detailed descriptions of optimal sampling and of the GalIMF code, which optimally populates clusters and galaxies with stars, are available in <cit.> and <cit.>. The code has three input parameters:
the mass of the embedded cluster, M_ ecl, its age, and a metallicity associated with this adopted age. In this work, we assume that the metallicity of the clumps and their associated embedded clusters is the solar metallicity ([Fe/H] = 0). For the age, following <cit.>, it is taken to be 1 Myr.
In Fig. <ref>, we picked out the most massive star with mass m_ max in each synthetic embedded cluster and compared it with the corresponding m_*, iso. When
log_10(m_ max/M_⊙) > 1, m_ max and m_*, iso are roughly comparable. A most massive star of log_10(m_ max/M_⊙) ≈ 1 in a synthetic embedded cluster corresponds to log_10(M_ cl/M_⊙) ≈ 2.55 or log_10(M_ ecl/M_⊙) ≈ 2.1, consistent with the results in Sec. <ref>. Moreover,
using the fitted m_ max-M_ ecl relation, i.e. Eq. 1 in <cit.>, when log_10(M_ ecl/M_⊙) ≈ 2.1, we also obtain log_10(m_ max/M_⊙) ≈ 1.
§.§ Accretion
The turning point log_10(M_ cl/M_⊙) ≈ 2.55 is used to judge the relative importance of accretion luminosity in <cit.>, also presented in Fig. <ref>(a).
Therefore, the abnormal tail in Fig. <ref> and Fig. <ref> to the left of the vertical dashed lines means that if the accretion luminosity is dominant, the formalism in Sec. <ref> cannot be used, because the Lyman continuum flux or radio emission is not predominantly from the stars.
The high radio flux density of low-luminosity and low-mass HII-clumps in Fig. <ref>(b) indicates that accretion is an effective way to produce radio emission. Therefore, the Lyman continuum flux N_Ly derived from the radio flux density S_ν includes a significant contribution from the accretion process. Previous studies of HII regions <cit.> have noted an excess of Lyman photons over what would be expected from the ionizing stars, given the bolometric luminosities. The inclusion of the luminosity contribution from accretion may provide an explanation.
In Sec. <ref>, we obtained the stellar luminosity L_*, T84 from the Lyman continuum flux N_Ly. Actually, L_*, T84 is not purely the stellar luminosity: it is the sum of the real stellar luminosity L_* and the accretion luminosity L_acc, i.e. L_*, T84=L_*+L_acc.
Using the method described in <cit.>, we calculated the accretion luminosity L_acc of the most massive stars of the synthetic embedded clusters, and subtracted L_acc from the corresponding L_*, T84.
Then we repeated the analysis in Sec. <ref> using L_* rather than L_*, T84.
As can be seen in Fig. <ref>(a), the abnormal tail disappears.
§.§ Flares
The value of L_* in Fig. <ref>(a) still remains somewhat too high. The abnormal tail in Fig. <ref>(b) is not completely eliminated, although it shows a substantial improvement over the case without considering accretion, presented in Fig. <ref>.
Therefore,
the stellar luminosity L_* may still be overestimated. There are likely other physical processes, analogous to the accretion luminosity, that contribute to the luminosity and have not yet been considered; their contribution would also need to be subtracted.
Flaring represents a common occurrence of magnetic activity in low-mass stars, including our Sun. While flares, originating from both the Sun and other stars, are predominantly observed in the soft X-ray band, the majority of the emitted energy is released at optical/UV wavelengths <cit.>.
For a sample of pre-main-sequence (PMS) stars in the NGC 2264 star-forming region, <cit.> detected seventy-eight X-ray flares with optical and/or mid-infrared counterparts.
The optical emission of flares, encompassing both emitted energy and peak flux, demonstrates a strong correlation with, and notably surpasses, the X-ray emission.
The luminosities in X-ray (L_ x) and optical (L_ opt) bands satisfy L_ opt= a × L_ x^b with a ranging from 5.2 to 14.7 and b between 0.6 and 1.0. Here we take the average values, i.e. L_ opt≈ 10 × L_ x^0.8.
<cit.> identified a sample of 1086 X-ray superflares and megaflares.
These events are generated by young stars across all masses throughout evolutionary stages spanning from protostars to stars without disks.
Only the sample of <cit.> has mass estimates of the PMS stars. We converted the X-ray luminosity of the flare measured in <cit.> into optical luminosity using the relation fitted in <cit.>. In Fig. <ref>(a), the distribution of L_ opt is highly scattered, and it has almost no correlation with stellar mass. The ratio of the optical and bolometric luminosities (L_ opt/L_ bol) of the PMS stars decreases rapidly as mass increases. Even for low-mass stars, the ratio is always less than 1. Thus, the luminosity from the flares can be neglected, and does not make a significant contribution to the abnormal tail in Sec. <ref>.
§.§ Star formation efficiency
§.§.§ Observations
In Fig. <ref>(c), for low-mass clusters, there is a mismatch between our sample and the sample in <cit.>. As discussed in Sec. <ref>, we may be overestimating the stellar mass.
Another possibility is that the low-mass clumps have higher SFEs > 0.33. Assuming all measured maximum stellar masses follow the m_ max-M_ ecl relation, we can calculate the mass of the embedded cluster and then the star formation efficiency for each clump.
The third-order polynomial fit to the observed m_ max-M_ ecl relation presented in <cit.> has large errors.
As an alternative, we derived a theoretical m_ max-M_ ecl relation directly from the initial mass function (IMF). The details are described in Sec.<ref> and the embedded cluster mass, M_ ecl, is given by
M_ecl ≈ 5.37/(0.77/m_max^1.3 - 0.001) - 3.33/[m_max^0.3 (0.77/m_max^1.3 - 0.001)],
where we use the m_ max values displayed in Fig. <ref>(c) to derive the corresponding value of M_ ecl for each embedded cluster.
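A minimal sketch of this step is given below: the closed-form relation above is evaluated to convert each m_ max into M_ ecl, and the SFE then follows as M_ ecl/M_ cl. The example numbers are hypothetical.

def m_ecl_from_mmax(m_max):
    """Embedded-cluster mass (M_sun) from the theoretical m_max-M_ecl relation above."""
    norm = 0.77 / m_max**1.3 - 0.001     # = 1/k_star from the optimal-sampling normalisation
    return 5.37 / norm - 3.33 / (m_max**0.3 * norm)

def sfe_from_mmax(m_max, M_cl):
    """SFE = M_ecl / M_cl for a clump of mass M_cl (M_sun)."""
    return m_ecl_from_mmax(m_max) / M_cl

# hypothetical example: a 10 M_sun ionizing star in a 10^2.55 M_sun clump
print(m_ecl_from_mmax(10.0), sfe_from_mmax(10.0, 10**2.55))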
The abnormal tail in Fig. <ref>(b) is visible again in Fig. <ref>(a), appearing as SFEs close to 1. We excluded this tail from the subsequent analysis by requiring the clump mass to be larger than 10^1.5 M_⊙.
Fig. <ref>(b) displays a strong correlation between the SFE and the clump mass. The SFE decreases with increasing clump mass, with an upper limit of ≈0.65. This is also visible in Fig. 5 of <cit.>. We fit the relation between M_ cl and the SFE with a power law of the form SFE∝ M_ cl^k and find a value of k=-0.44±0.02.
This anti-correlation between the mass of the clumps and their SFE has been predicted theoretically by <cit.>. For the simple case of uniform density clumps, <cit.> predicted the clump mass and the SFE to follow SFE∝ M_ cl^-0.6 and to be the result of an increasingly stronger effect of stellar feedback in clumps with higher masses which leads to a faster gas expulsion and limits the value of the SFE. Interestingly, the median value of the SFEs is ≈0.3, consistent with our previous work <cit.>. An SFE≈ 0.33 is also consistent with the value obtained from hydrodynamic calculations including self-regulation <cit.> and also with observations of embedded systems in the solar neighborhood <cit.>.
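The fit itself is a straight-line regression in log-log space. The sketch below illustrates this step with synthetic data standing in for the observed clump sample; only the fitting procedure reflects what is done in the text.

import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for the observed sample: SFE scattered about a power law in M_cl
log_Mcl = rng.uniform(1.5, 4.5, 300)
log_SFE = -0.44 * log_Mcl + 0.6 + rng.normal(0.0, 0.1, 300)

# least-squares fit of log SFE = k * log M_cl + const, i.e. SFE ∝ M_cl^k
k, const = np.polyfit(log_Mcl, log_SFE, 1)
print(f"fitted slope k = {k:.2f}")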
§.§.§ Models
In order to assess the uncertainty in the observations, we now independently study the SFE of the clumps using theoretical models. In Fig. <ref>(b), the upper limit of the SFE is ≈0.65. In Fig. 4 of <cit.>, a SFE larger than 0.5 is necessary to fit the low-mass HII-clumps.
Therefore, we consider the SFEs that fall in the range [0.05, 0.65] with a step of 0.05. Then each HII-clump has 13 alternative SFEs.
We consider both the stellar luminosity and accretion luminosity of each star in the embedded cluster of a HII-clump, and also apply the pre-main sequence evolutionary track for each star.
There is certainly an age spread of the stellar populations in embedded clusters within HII-clumps. Therefore, selecting the appropriate isochrones according to the stars' ages is essential. Additionally, not all stars in HII-clumps are in the accretion phase, as this too is age-dependent. We should identify the fraction of stellar objects that remain protostars to properly add their accretion luminosities. In order to address these issues, we apply the method described in <cit.> to generate an age distribution for the stellar population in each synthetic cluster. The age distribution of stars within a HII-clump depends on the star formation history (SFH) of the clump. As discussed in <cit.>, compared to a constant
SFH, burst-like and time-dependent SFHs can better fit the observational data. Thus, we only consider the time-dependent SFHs in this work. As described in <cit.>, there are 12 different SFHs with different age peaks (t_ p) and age standard deviations (σ_ t), i.e. t_ p= 0.1, 0.25, 0.5, 1 Myr and σ_ t= 0.25, 0.5, 1 Myr. Finally, each clump has 156 different combinations of SFEs and SFHs, resulting in 156 distinct bolometric luminosities. The combination of a SFE and a SFH that produces the bolometric luminosity of the synthetic embedded cluster closest to the observed bolometric luminosity of the corresponding HII-clump is considered as the optimal one. The optimal SFE and SFH distributions of all HII-clumps are shown in Fig. <ref>. The optimal combination has the smallest difference from the observed bolometric luminosity.
To test the robustness of the model, we also selected the second optimal combination, which has the second smallest difference from the observed bolometric luminosity. As shown in Fig. <ref>, the optimal and the second optimal combinations give almost the same results.
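Schematically, the selection of the optimal combination is a grid search over the 13 SFEs and 12 SFHs. The sketch below shows the bookkeeping only; predicted_log_lbol is a placeholder for the full synthetic-cluster calculation of paper I (optimally sampled stars, stellar plus accretion luminosities, age-dependent isochrones), and the scaling inside it is purely illustrative.

import itertools
import numpy as np

sfes = np.arange(0.05, 0.651, 0.05)        # 13 SFE values
t_peaks = [0.1, 0.25, 0.5, 1.0]            # Myr
sigma_ts = [0.25, 0.5, 1.0]                # Myr -> 12 SFHs, 156 combinations in total

def predicted_log_lbol(M_cl, sfe, t_p, sigma_t):
    """Placeholder for the synthetic-cluster bolometric luminosity of paper I;
    the scaling used here is illustrative only."""
    return 1.3 * np.log10(sfe * M_cl) + 1.5 - 0.2 * np.log10(t_p / sigma_t + 1.0)

def optimal_combination(M_cl, log_lbol_obs):
    """Return the (SFE, t_p, sigma_t) whose predicted luminosity is closest to the observation."""
    combos = itertools.product(sfes, t_peaks, sigma_ts)
    return min(combos, key=lambda c: abs(predicted_log_lbol(M_cl, *c) - log_lbol_obs))

print(optimal_combination(10**2.55, 3.7))  # hypothetical clump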
Compared with Fig. <ref>, overall, the SFEs of HII-clumps derived from the observation and the model are comparable.
From Fig. <ref>(b) and (c), we have
log_10 (SFE) = (-0.37 ± 0.01) ×log_10 (M_cl) + (0.42 ± 0.04),
and
log_10 (M_cl) = (1.02 ± 0.02) ×log_10 (M_ecl) + (0.52 ± 0.05).
Then we use Eq. <ref> to calculate the SFE or the mass of the embedded cluster for each HII-clump. As shown in Fig. <ref>,
after the modification of the SFE, the mismatch in Fig. <ref>(c) indeed disappears.
§.§ Star formation history
Fig. <ref> displays the distributions of the optimal t_ p and σ_ t for HII-clumps with different masses. For each t_ p or σ_ t, we calculated the median and mean masses of the corresponding HII-clump groups. The median and mean clump masses are comparable for different σ_ t. However, there is a weak trend that clumps with larger masses have larger t_ p, indicating that low-mass clumps reach the peak of star formation earlier than high-mass clumps, possibly because star formation occurs more rapidly in low-mass clumps. The clump free-fall time (τ_ ff) is useful in providing a lower limit to the star formation time-scales. In Fig.<ref>, we compared the free-fall time of ATLASGAL clumps in the earliest ("quiescent") and the latest ("HII region") stages <cit.>.
Low-mass clumps indeed have a shorter free-fall time.
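For reference, τ_ ff = sqrt(3π/(32 G ρ)) with ρ the mean clump density. The minimal sketch below (with hypothetical clump parameters) shows that a compact low-mass clump can indeed have a shorter free-fall time than a more massive but more extended one.

import math

G = 6.674e-11                                    # m^3 kg^-1 s^-2
M_SUN, PC, MYR = 1.989e30, 3.086e16, 3.156e13    # kg, m, s

def free_fall_time(M_cl_msun, r_pc):
    """Clump free-fall time tau_ff = sqrt(3*pi / (32*G*rho)) in Myr."""
    rho = 3.0 * M_cl_msun * M_SUN / (4.0 * math.pi * (r_pc * PC) ** 3)
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / MYR

# hypothetical clumps: a compact low-mass clump vs. a more extended high-mass clump
print(free_fall_time(300.0, 0.3), free_fall_time(3000.0, 1.0))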
§ CONCLUSION
In this work, 1226 HII-clumps from the ATLASGAL survey with mass M_ cl and bolometric luminosity L_ bol, obs estimates were matched with radio sources in the catalogs of the CORNISH-North and the CORNISH-South
surveys. 392 HII-clumps have corresponding radio sources in the CORNISH-North/South surveys. These HII-clumps are mainly concentrated at the
upper end in the L_ bol, obs-M_ cl diagram, which are more luminous than the remaining HII-clumps, indicating they host high-mass stars. Radio sources are excited by the most massive stars in the embedded clusters of HII-clumps. We calculated the emission measure (EM), the electron density (n_e) and the Lyman continuum flux (N_Ly) for each radio source. Using the relationship between the stellar parameters and the Lyman continuum flux in Table. 1 of <cit.>,
we determined the stellar luminosity L_*, T84 according to the calculated N_Ly.
We found a turning point around log_10(L_ bol, obs/L_⊙) ≈ 3.7, which corresponds to a HII-clump mass log_10(M_ cl/M_⊙) ≈ 2.55. When the bolometric luminosity of HII-clumps is less than this value, L_*, T84 is significantly larger than L_ bol, obs. Thus, there is an abnormal tail in the L_ bol, obs-L_*, T84 diagram. Using the 0.5 Myr isochrone, we estimated the stellar mass m_*, iso according to the stellar luminosity L_*, T84.
To investigate the physical origin of this turning point, we adopted the optimal sampling algorithm to produce a population of stars for an embedded cluster using the publicly available GalIMF code.
Then, we picked out the most massive star with mass m_ max
in each synthetic embedded cluster and compared it with the corresponding m_*, iso. When log_10(m_ max/M_⊙) > 1, m_ max and m_*, iso are roughly comparable.
Actually, the turning point at log_10(M_ cl/M_⊙) ≈ 2.55 exactly corresponds to log_10(m_ max/M_⊙) ≈ 1, which is also used to judge the relative importance of accretion luminosity in <cit.>.
The abnormal tail means that if the accretion luminosity is dominant, we cannot directly estimate the stellar parameters from the radio emission, and have to subtract the contribution of accretion from the radio flux density. To address this issue, we calculated the accretion luminosity L_acc of the most massive star in each synthetic embedded cluster, and subtracted L_acc from the corresponding L_*, T84.
However, the cleaned stellar luminosity L_* (=L_*, T84-L_acc) may be still overestimated. There are likely other physical processes contributing to the luminosity apart from the accretion. For example, the flaring activity represents a common occurrence of magnetic activity in low-mass stars. We found that the ratio of the optical and bolometric luminosities (L_ opt/L_ bol) of pre-main sequence stars decreases rapidly as mass increases. Even for low-mass stars, the ratio is always less than
1. Thus, the luminosity from the flaring activity can be neglected. Other physical processes need to be further investigated.
In the above analysis, we assumed a constant SFE=0.33. We further studied the change of SFE with the clump mass. According to the derived mass of the most massive star in each HII-clump, using the theoretical m_ max-M_ ecl relation, we calculated the mass of the corresponding embedded cluster and then the SFE of the clump. We find a strong anti-correlation between the SFE and the clump mass. The SFE decreases with increasing clump mass in agreement with theoretical expectations <cit.>, with a median value of ≈0.3.
We also independently derived the SFE for each HII-clump based on the model developed in <cit.>. We consider the SFEs that fall in the range [0.05, 0.65] with a step of 0.05.
We employ 12 different star formation histories for each HII-clump to distribute the ages to the stars in the clump. According to the stars' age,
we consider both the stellar luminosity and accretion luminosity of each star in the embedded cluster of a HII-clump, and apply the pre-main sequence evolutionary track for each star. Finally,
each clump has 156 different combinations of SFEs and SFHs, resulting in 156 distinct bolometric luminosities. The optimal one has the smallest difference from the observed bolometric luminosity of the corresponding HII-clump. Overall, the SFEs of HII-clumps derived from the observations and the models are comparable.
Using the quantitative relation between the SFE and the clump mass, we calculated the SFE and the mass of the embedded cluster for each HII-clump. As shown in Fig. <ref>,
after the modification of the SFE, the mismatch in Fig. <ref>(c) disappears.
For the optimal t_ p or σ_ t, we estimated the median and mean masses of the corresponding HII-clump groups. There is a weak trend that clumps with larger masses have larger t_ p, indicating that low-mass clumps reach the peak of star formation earlier than high-mass clumps, possibly because star formation occurs more rapidly in low-mass clumps, consistent with the shorter free-fall time of low-mass clumps.
We would like to thank the referee for the comments and suggestions that helped improve and clarify this work.
§ THE THEORETICAL M_ MAX-M_ ECL RELATION
The stellar IMF, ξ_⋆(m), is canonical <cit.>:
ξ_⋆(m,M) =
0, m<0.08 M_⊙,
2k_⋆ m^-1.3, 0.08 M_⊙⩽ m<0.5 M_⊙,
k_⋆ m^-2.3, 0.5 M_⊙⩽ m<1M_⊙,
k_⋆ m^-α_3, 1 M_⊙⩽ m<m_max(M),
0, m_max(M) ⩽ m,
where α_3=2.3 is the constant Salpeter-Massey index for the invariant canonical IMF but will change for larger ρ_cl (the density of star-forming clump) to account for IMF variation under star-burst conditions <cit.>.
0.08 M_⊙ in Eq. <ref> is about the lower mass limit of stars <cit.>.
The mass conservation of the embedded cluster gives
M_ecl=∫_0.08 M_⊙^m_maxm ξ_⋆(m) dm.
The optimal sampling normalization condition is
1=∫_m_max^150 M_⊙ξ_⋆(m) dm,
where 150 M_⊙ is the adopted stellar upper mass limit <cit.>. When m_max > 1 M_⊙,
the optimal sampling normalization condition becomes
1=∫_m_max^150 M_⊙ k_⋆ m^-α_3 dm.
For larger ρ_cl, ξ_⋆(m) becomes top-heavy where a α_3(ρ_cl) relation is adopted from <cit.>:
α_3=
2.3, ρ_cl<9.5× 10^4,
1.86-0.43log_10(ρ_cl/10^6), ρ_cl≥ 9.5× 10^4.
Here
ρ_cl=3M_cl/(4π r_ h^3),
in units of M_⊙ pc^-3, is the clump density when the embedded cluster is forming.
For M_cl and r_ h in the equation, we take the clump's mass and radius calculated in <cit.>.
As shown in Fig.<ref>, for ATLASGAL clumps with HII regions (HII-clumps), generally, they have ρ_cl<9.5× 10^4. Thus, we take α_3=2.3 in this work.
Setting x ≡ m_max and solving the above equations, we obtain
M_ecl=∫_0.08 M_⊙^x m ξ_⋆(m) dm
=∫_0.08 M_⊙^0.5 M_⊙ m· 2k_⋆ m^-1.3 dm+∫_0.5 M_⊙^x m· k_⋆ m^-2.3 dm
≈ 5.37/(0.77/x^1.3 - 0.001) - 3.33/[x^0.3 (0.77/x^1.3 - 0.001)].
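The closed form can be cross-checked numerically under the same assumptions (α_3=2.3, stellar masses between 0.08 M_⊙ and 150 M_⊙): the sketch below determines k_⋆ from the optimal sampling normalization condition and integrates m ξ_⋆(m) directly.

from scipy.integrate import quad

M_UPPER = 150.0                                  # adopted stellar upper mass limit (M_sun)

def k_star(x):
    """k_star from 1 = int_x^150 k m^-2.3 dm (alpha_3 = 2.3)."""
    return 1.3 / (x**-1.3 - M_UPPER**-1.3)

def xi(m, x):
    """Canonical IMF above with m_max = x (zero outside [0.08, x))."""
    if m < 0.08 or m >= x:
        return 0.0
    k = k_star(x)
    return 2.0 * k * m**-1.3 if m < 0.5 else k * m**-2.3

def m_ecl_numeric(x):
    """Mass conservation M_ecl = int_0.08^x m xi(m) dm, evaluated numerically."""
    val, _ = quad(lambda m: m * xi(m, x), 0.08, x, points=[0.5])
    return val

def m_ecl_closed(x):
    norm = 0.77 / x**1.3 - 0.001
    return 5.37 / norm - 3.33 / (x**0.3 * norm)

for x in (2.0, 10.0, 50.0):
    print(x, m_ecl_numeric(x), m_ecl_closed(x))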
§ ROBUSTNESS OF THE MODEL
arXiv:2409.03331v1 (http://arxiv.org/abs/2409.03331v1), published 2024-09-05
Title: Quantitative Diophantine approximation and Fourier dimension of sets: Dirichlet non-improvable numbers versus well-approximable numbers
Authors: Bo Tan, Qing-Long Zhou
Primary category: math.NT; MSC: 28A80, 11K55, 11J83
^1 School of Mathematics and Statistics,
Huazhong University of Science and Technology, 430074 Wuhan, PR China
[email protected]
^2 Department of Mathematics, Wuhan University of Technology, 430070 Wuhan, PR China
[email protected]
^† Corresponding author.
§ ABSTRACT
Let E⊂ [0,1] be a set that supports a probability measure μ with the property that |μ(t)|≪ (log |t|)^-A for some constant A>2.
Let 𝒜=(q_n)_n∈ be a positive, real-valued, lacunary sequence.
We present a quantitative inhomogeneous Khintchine-type theorem in
which the points of interest are restricted to E and the denominators of the shifted fractions are restricted to 𝒜.
Our result improves and extends a previous result in this direction obtained by Pollington-Velani-Zafeiropoulos-Zorin (2022). We also show that the set-difference of the Dirichlet non-improvable set and the well-approximable set is of positive Fourier dimension.
[2010]Primary 28A80; Secondary 11K55, 11J83
Quantitative Diophantine approximation and Fourier dimension of sets
Dirichlet non-improvable numbers versus well-approximable numbers
Bo Tan Qing-Long Zhou^†
§ INTRODUCTION
§.§ Metric Diophantine approximation
The classical metric Diophantine approximation is concerned with the rational approximations to real numbers.
A qualitative answer is provided by the fact that the rationals are dense in the set of reals.
Dirichlet's theorem initiates the quantitative description of the rational approximation, which states that for any x∈ℝ and Q>1,
there exists (p,q)∈ℤ×ℕ such that
|qx-p|≤1/Q and q<Q.
A direct corollary of (<ref>) reads for any x∈ℝ,
there exists infinitely many (p,q)∈ℤ×ℕ such that
|qx-p|≤1/q.
The statements above provide two possible ways to pose Diophantine approximation problems,
often referred to as approximation problems of uniform vs. asymptotic type, namely studying the
solvability of inequalities for all large enough values of certain parameters vs. for infinitely
many parameters, which induce respectively liminf and limsup sets (see <cit.>). In this respect, it is natural to ask how much we can improve inequalities (<ref>) and (<ref>).
§.§.§ Improvability of asymptotic version
Hurwitz's Theorem <cit.> shows that the approximation function 1/q on the right side of (<ref>) can only be improved to 1/(√(5)q).
A major obstacle to further improvement is the existence of badly approximable numbers such as the Golden ratio (1+√(5))/2.
Instead of considering all points, we are mainly concerned with the Diophantine properties of generic points (in the sense of measure). Precisely, for a point γ∈ [0,1) and an approximation function ψ:ℕ→ℝ^+, we consider the set
W(γ,ψ):={x∈ [0,1): ||qx-γ||<ψ(q) for infinitely many q∈},
where ||α||:=min{|α-m| m∈ℤ} denotes the distance from α∈ to the nearest integer.
We call W(γ,ψ) a homogeneous or inhomogeneous ψ-well approximable set according as γ=0 or not.
Khintchine's Theorem describes the Lebesgue measure of W(0,ψ) for a monotone approximation function ψ.
Let ψ→ [0,1/2) be a monotonically decreasing function.
We have
ℒ(W(0,ψ))=
0 if ∑_q=1^∞ψ(q)<∞,
1 if ∑_q=1^∞ψ(q)=∞,
where ℒ is the Lebesgue measure.
We remark that, in the convergence part of Khintchine's theorem, the monotone condition on ψ can be removed, as the proof is an application of the first Borel-Cantelli Lemma. However, the monotonicity is an essential assumption in the divergence part.
Indeed, Duffin-Schaeffer <cit.> constructed a non-increasing function ψ such that ∑_qψ(q)=∞, but W(0,ψ) is of Lebesgue measure 0. Further, consider the set
W^∗(γ, ψ):={x∈ [0,1) ||qx-γ||^∗<ψ(q) for infinitely many q∈},
where
||qx-γ||^∗:=min_gcd(p,q)=1|qx-p-γ|.
Duffin-Schaeffer conjecture claimed that, for any ψ→ [0,1/2), the Lebesgue measure of the set W^∗(0, ψ)
is either 0 or 1 according as the series ∑_qϕ(q)/qψ(q) converges or diverges, where ϕ is the Euler's totient function.
This conjecture animated a great deal of research until it was finally proved in a breakthrough of Koukoulopoulos-Maynard <cit.>.
Assuming the monotonicity of ψ, Szüsz <cit.> proved the inhomogeneous variant of Khintchine's Theorem. To gain a deeper understanding of the results of Khintchine and Szüsz, Schmidt further considered the corresponding quantitative problem.
Given x∈[0,1), Q∈, and ψ→ [0,1/2) a monotonically decreasing function,
define
S(x,Q):=♯{1≤ q≤ Q ||qx-γ||<ψ(q)}.
Then for any ε>0, for Lebesgue almost all x∈[0,1),
S(x,Q)=Ψ(Q)+O(Ψ^1/2(Q)log^2+εΨ(Q)),
where Ψ(Q):=∑_q=1^Q2ψ(q).
Without the monotonicity assumption of ψ,
the inhomogeneous version of the Duffin-Schaeffer conjecture is still a widely open question; for recent progress see <cit.>. For the homogeneous case, Aistleitner-Borda-Hauke established a Schmidt-type result for Koukoulopoulos-Maynard's theorem, and raised an open problem: to what extent can the error term in Theorem <ref> be improved?
Let ψ→ [0,1/2) be a function. Write
S^∗(x,Q):=♯{1≤ q≤ Q ||qx||^∗<ψ(q)}.
Let C>0 be an arbitrary constant. Then for almost all x∈[0,1),
S^∗(x,Q)=Ψ^∗(Q)+O(Ψ^∗(Q)(logΨ^∗(Q))^-C),
where Ψ^∗(Q):=∑_q=1^Q2ϕ(q)ψ(q)/q.
Recently, Hauke-Vazquez Saez-Walker <cit.> improved the error-term (logΨ^∗(Q))^-C in Theorem <ref> to exp(-(logΨ^∗(Q))^1/2-ε); Koukoulopoulos-Maynard-Yang
obtained an almost sharp quantitative version.
Assume that Ψ^∗(Q)=∑_q=1^Q2ϕ(q)ψ(q)/q→∞ as Q→∞.
Then for almost all x∈[0,1) and ε>0,
S^∗(x,Q)=Ψ^∗(Q)+O(Ψ^∗(Q)^1/2+ε).
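To make the counting concrete, the following minimal sketch (pure Python, naive totient, a hypothetical choice of ψ and of the test point x) evaluates S^∗(x,Q) and Ψ^∗(Q) for a single x; it only illustrates the definitions and does not reproduce any of the theorems.

from math import gcd, floor, pi

def dist_star(x, q, gamma=0.0):
    """||qx - gamma||^*: distance to the nearest integer p with gcd(p, q) = 1."""
    centre = floor(q * x - gamma)
    cands = [p for p in range(centre - q, centre + q + 2) if gcd(abs(p), q) == 1]
    return min(abs(q * x - p - gamma) for p in cands)

def phi(q):
    """Euler's totient function (naive)."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def S_star(x, Q, psi):
    return sum(1 for q in range(1, Q + 1) if dist_star(x, q) < psi(q))

def Psi_star(Q, psi):
    return sum(2.0 * phi(q) * psi(q) / q for q in range(1, Q + 1))

psi = lambda q: 1.0 / (2.0 * q)        # a hypothetical approximation function
x = pi - 3.0
print(S_star(x, 2000, psi), Psi_star(2000, psi))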
We now turn to a brief account of Hausdorff dimension of the ψ-well approximable set
W(γ,ψ). Jarník Theorem <cit.> shows, under the monotonicity of ψ, that
_ HW(0,ψ)=2/(τ+1), where τ=lim inf_q→∞-logψ(q)/log q.
It is worth mentioning that Jarník Theorem can be deduced by combining Theorem <ref> and the mass transference principle of Beresnevich-Velani <cit.>. For a general function ψ, the Hausdorff dimension of W(0,ψ) was studied extensively by Hinokuma-Shiga <cit.>. Recently, Yu <cit.> completely determined the Hausdorff dimension of W^∗(γ,ψ).
§.§.§ Improvability of uniform version
In analogy with the asymptotic setting, the improvability of the uniform version leads to the study of
the ψ-Dirichlet improvable set
D(ψ):={x∈[0,1)min_1≤ q<Q||qx||≤ψ(Q) for all Q>1}.
As far as the metric theory of D(ψ) is concerned, the continued fraction expansion plays a significant role. With the help of the Gauss transformation T [0,1) → [0,1) defined by
T(0)=0, T(x)= 1/x-⌊ 1/x⌋ for x∈(0,1),
each irrational number x in [0,1) can be uniquely expanded into the following form
x=1/(a_1(x)+1/(a_2(x)+⋯+1/(a_n(x)+T^n(x))))
=1/(a_1(x)+1/(a_2(x)+1/(a_3(x)+⋱))),
with a_n(x)=⌊1/T^n-1(x)⌋ (here ⌊·⌋ denotes the greatest integer less than or equal to a real number and T^0 denotes the identity map), called the partial quotients of x.
For simplicity of notation, we write (<ref>) as
x=[a_1(x),a_2(x),…,a_n(x)+T^n(x)]=[a_1(x),a_2(x),a_3(x),…].
We also write the truncation
[a_1(x), a_2(x), …, a_n(x)]=:p_n(x)/q_n(x),
called the n-th convergent to x.
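For concreteness, the short sketch below computes the first few partial quotients via the Gauss map and the convergents p_n/q_n via the standard recursions recalled in the preliminaries; floating-point arithmetic limits it to moderately many terms.

import math

def continued_fraction(x, n):
    """First n partial quotients of x in (0,1) via the Gauss map, together with
    the convergents p_k/q_k from p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}."""
    a, p, q = [], [1, 0], [0, 1]           # seeds (p_-1, p_0) = (1, 0) and (q_-1, q_0) = (0, 1)
    for _ in range(n):
        a_k = int(1.0 / x)
        x = 1.0 / x - a_k                  # Gauss map T(x) = 1/x - floor(1/x)
        a.append(a_k)
        p.append(a_k * p[-1] + p[-2])
        q.append(a_k * q[-1] + q[-2])
    return a, list(zip(p[2:], q[2:]))

print(continued_fraction(math.sqrt(2) - 1, 6))   # expect partial quotients 2, 2, 2, ...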
Davenport-Schmidt <cit.> proved that a point x belongs to D(c/Q) for a certain c∈ (0, 1) if and only if
the partial quotients sequence {a_n(x)}_n=1^∞ of x is uniformly bounded. Thus the Lebesgue measure of D(c/Q) is zero. Write
Φ_1(q)=1/(qψ(q)) and Φ_2(q)=qψ(q)/(1-qψ(q)),
and set
𝒦(Φ_1):={x∈[0,1) a_n+1(x)≥Φ_1(q_n(x)) for infinitely many n∈},
𝒢(Φ_2):={x∈[0,1) a_n(x)a_n+1(x)≥Φ_2(q_n(x)) for infinitely many n∈}.
By the optimal approximation property of the convergents, namely
min_1≤ q<q_n+1(x)||qx||=||q_n(x)· x||,
and the monotonicity of ψ, we have the following inclusions about the sets W(0,ψ) and D(ψ)
𝒦(Φ_1)⊂ W(0,ψ)⊂𝒦(1/3Φ_1),
𝒢(Φ_2)⊂ D^c(ψ)⊂𝒢(1/4Φ_2),
where D^c(ψ), called the ψ-Dirichlet non-improvable set, denotes the complement set of D(ψ).
It follows that W(0,ψ) and 𝒦(Φ_1), D^c(ψ) and 𝒢(Φ_2)
share the same Khintchine-type `0-1' law and Hausdorff dimension respectively.
Based on this analysis, Kleinbock-Wadleigh <cit.> completely determined the Lebesgue measure of
D^c(ψ). Hussain et al. <cit.> considered the Hausdorff measure of D^c(ψ) and showed that
_ H𝒢(Φ)=_ H𝒦(Φ)
for a non-decreasing function Φ→ℝ. Noting that 𝒦(Φ)⊂𝒢(Φ), it is desirable to know how large the difference between 𝒦(Φ)
and 𝒢(Φ) is (for more related results, one can refer to <cit.>).
Let Φ→ℝ^+ be a non-decreasing function. Then
_ H𝒦(Φ)
=_ H(𝒢(Φ) \𝒦(Φ))
=2/(τ'+2), where τ'=lim inf_q→∞logΦ(q)/log q.
To sum up, if Φ(q)=1/qψ(q) is non-decreasing, we have
_ HW(0,ψ)=_ H𝒦(Φ)
=_ H𝒢(Φ)
=_ H(𝒢(Φ) \𝒦(Φ)).
§.§ Normal numbers, Equidistribution and Diophantine approximation on fractals
Normal numbers were introduced by Borel in the seminal paper <cit.> published in 1909.
From the very beginning the concept of normality was closely related to the concept of “randomness”.
The normality of real numbers was originally defined in terms of counting the number
of blocks of digits. Let b≥ 2 be an integer. For any x∈[0,1), it admits a b-adic expansion
x=∑_n=1^∞ε_n(x)/b^n with ε_n(x)∈{0,1,…,b-1}.
For k∈ and x^(k):=(x_1,…,x_k)∈{0,1,…,b-1}^k, write
f_N^(k)(x,x^(k))=♯{1≤ j≤ N (ε_j(x),…,ε_j+k-1(x))= (x_1,…,x_k) }/N.
The number x is said to be
normal to base b if lim_N→∞f_N^(k)(x,x^(k))=1/b^k for any k≥1
and x^(k)∈{0,1,…,b-1}^k, and
(absolutely) normal if it is normal to every base b≥ 2.
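As an elementary illustration of the definition, the sketch below counts length-k blocks in a digit string and compares the empirical frequencies with the expected value b^-k; the base-10 Champernowne number 0.123456789101112..., which is known to be normal to base 10, is used as a test case.

def block_frequencies(digits, k):
    """Empirical frequency f_N^(k) of each length-k block in a digit string."""
    N = len(digits) - k + 1
    counts = {}
    for j in range(N):
        block = digits[j:j + k]
        counts[block] = counts.get(block, 0) + 1
    return {blk: c / N for blk, c in counts.items()}

# concatenate 1, 2, 3, ...; single-digit frequencies converge (slowly) towards 1/10
champernowne = "".join(str(n) for n in range(1, 2000))
print(sorted(block_frequencies(champernowne, 1).items()))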
Borel <cit.> first showed that Lebesgue almost every x∈[0,1) is normal. And thus any subset of [0,1) of positive Lebesgue measure must contain normal numbers; it is natural to ask what conditions can guarantee that a fractal set E (usually of Lebesgue measure zero) contains normal numbers.
We note that the condition _ HE>0 is not sufficient by considering the Cantor middle-thirds set.
A real-valued sequence (x_n)_n∈ is said to be equidistributed or uniformly distributed modulo one if for each sub-interval [a,b]⊆[0,1] we have
lim_N→∞♯{1≤ n≤ N x_n mod 1∈ [a,b]}/N=b-a.
The notation of equidistribution has been studied intensively since the beginning of the 20th century, originating in Weyl's seminal paper <cit.>. This topic has developed into an important area of mathematics, with many deep connections to fractal geometry, number theory, and probability theory.
Generally, it is a challenging problem to determine whether a given sequence is equidistributed. For example, we do not know whether the sequence {(3/2)^n} is equidistributed or not. This problem seems to be completely out of reach for current methods <cit.>,
even though Weyl (<cit.>) established his famous criterion, which reduces the equidistribution problem to bounds of some exponential sums.
The following theorem of Davenport et al. <cit.> established the generic distribution properties of a sequence (q_nx)_n∈ with x restricted to a subset E of [0,1]. As is customary, we write e(x)=exp(2π ix).
Let μ be a Borel probability measure supported on a subset E⊆[0,1]. Let 𝒜=(q_n)_n∈
be a sequence of reals satisfying
∑_N=1^∞1/N∫|1/N∑_n=1^Ne(kq_nx)|^2μ̣(x)<∞
for any k∈ℤ∖{0}, then for μ almost all x∈ E the sequence (q_nx)_n∈ is equidistributed.
Recalling that the Fourier transform of a non-atomic measure μ is defined by
μ(t):=∫ e(-tx)μ̣(x) (t∈ℝ),
we may rewrite the series in Theorem <ref> as
∑_N=1^∞1/N∫|1/N∑_n=1^Ne(kq_nx)|^2 μ̣(x)
=∑_N=1^∞1/N^3∑_m,n=1^Nμ(k(q_n-q_m)).
Wall <cit.> proved that a real number x is normal to integer base b≥2 if
and only if the sequence (b^nx)_n∈ is equidistributed. Along this line, Theorem <ref> and (<ref>) show that Lebesgue almost all reals are normal to every integer base, as already proved by Borel.
Also, we may transfer the existence of normal numbers in a fractal set to an equidistribution problem of lacunary sequences.
Recall that a sequence 𝒜=(q_n)_n∈ is said to be lacunary if there exists C>1 such that 𝒜 satisfies the classical Hadamard gap condition
q_n+1/q_n≥ C (n∈).
Pollington et al. <cit.> showed that if
μ(t)=O((loglog|t|)^-(1+ε))
with ε>0, then for μ almost every x∈ E, (<ref>) holds with x_n=q_nx for a lacunary sequence 𝒜={q_n} of natural numbers. It follows that μ almost every x∈ E is a normal number.
Hence the existence of normal numbers in a fractal set can be deduced from the Fourier decay of a measure supported on the set.
We refer to <cit.> for some recent works on the logarithmic and polynomial Fourier decay for fractal measures.
We now turn to another aspect of the equidistribution theory which is relevant to the hitting problems.
Let 𝔹:=B(γ,r) denote the ball with centre γ∈ [0,1] and radius r≤ 1/2.
Theorem <ref> implies that, for μ-almost all x∈ E, the sequence (q_nx)_n∈ hits the ball 𝔹 for the expected number of times, namely:
lim_N→∞♯{1≤ n≤ N ||q_nx-γ|| ≤ r}/N=2r.
Instead of a fixed r, we consider the situation in which r is allowed to shrink with time. In view of this, let ψ→ (0,1)
be a real function, and set the counting function
R(x,N):=R(x,N; γ, ψ, 𝒜):=♯{1≤ n≤ N ||q_nx-γ||≤ψ(q_n)}.
As alluded to in the definition, we will often simply write R(x,N) for R(x,N; γ, ψ, 𝒜) since the other dependencies are usually fixed and will be clear from
the context.
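For orientation, the sketch below evaluates R(x,N) for one hypothetical choice of x, γ, ψ and the lacunary real sequence q_n=(3/2)^n, using exact rational arithmetic to avoid the loss of precision in q_nx for large n, and compares the count with 2Ψ(N).

from fractions import Fraction
import math

def dist_to_Z(t):
    """Distance from t to the nearest integer (exact for rational t)."""
    frac = t - math.floor(t)
    return min(frac, 1 - frac)

def R(x, N, gamma, psi, q_seq):
    """R(x, N) = #{ 1 <= n <= N : ||q_n x - gamma|| <= psi(q_n) }."""
    return sum(1 for q_n in q_seq[:N] if dist_to_Z(q_n * x - gamma) <= psi(q_n))

N = 500
q_seq = [Fraction(3, 2) ** n for n in range(1, N + 1)]   # a lacunary real-valued sequence
psi = lambda t: Fraction(1, 20)                          # a constant admissible psi
x, gamma = Fraction(416611, 1000000), Fraction(3, 10)    # hypothetical test point and shift
print(R(x, N, gamma, psi, q_seq), float(2 * sum(psi(q_n) for q_n in q_seq[:N])))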
Pollington et al. <cit.> studied the quantitative property of the counting function R(x,N).
Let 𝒜=(q_n)_n∈ be a lacunary sequence of natural numbers.
Let γ∈ [0,1] and ψ→ (0,1).
Let μ be a probability measure supported on a subset E of [0,1]. Suppose that
there exists A>2 such that
μ(t)=O((log |t|)^-A) as |t|→∞.
Then for any ε>0, we have that
R(x,N)=2Ψ(N)+O(Ψ(N)^2/3(log(Ψ(N)+2))^2+ε)
for μ-almost all x∈ E, where Ψ(N)=∑_n=1^Nψ(q_n).
In Theorem <ref>, the authors mentioned that the associated error terms could be improved. We continue the study for a sequence 𝒜 of real numbers (instead of natural numbers).
Note that the factorization and coprimeness properties of the natural numbers played a significant role in the proof of Theorem <ref>.
Let 𝒜=(q_n)_n∈ be a real-valued lacunary sequence. Let γ∈ [0,1]
and ψ→ (0,1).
Let μ be a probability measure supported on a subset E of [0,1]. Suppose that there exists A>2 such that
μ(t)=O((log |t|)^-A) as |t|→∞.
Then for any ε>0, we have that
R(x,N)=2Ψ(N)+O(Ψ(N)^1/2(log(Ψ(N)+2))^3/2+ε)
for μ-almost all x∈ E, where Ψ(N)=∑_n=1^Nψ(q_n).
The exponent 1/2 in the error term is optimal in Theorem <ref>.
Motivated by the metric theory of Diophantine approximation on manifolds, we consider the Diophantine properties of points which are restricted to a sub-manifold of [0,1]. Given a real number γ∈[0,1],
a real function ψℝ→ (0,1) and a sequence 𝒜=(q_n)_n∈, set
W_𝒜(γ, ψ):={x∈ [0,1] ||q_nx-γ|| ≤ψ(q_n) for infinitely many n ∈}.
Our goal is to obtain an analogue of the Khintchine-type theorem for the size of W_𝒜(γ, ψ)∩ E, where E is a subset of [0,1]. A direct corollary of Theorem <ref> implies that for μ-almost all x∈ E, the quantity R(x,N) is bounded if Ψ(N) is bounded, and tends to infinity if Ψ(N) tends to infinity.
Let 𝒜=(q_n)_n∈ be a real-valued lacunary sequence. Let γ∈ [0,1]
and ψ→ (0,1). Let μ be a probability measure supported on a subset E of [0,1]. Suppose there exists A>2 such that
μ(t)=O((log |t|)^-A) as |t|→∞.
Then
μ(W_𝒜(γ, ψ)∩ E)=
0 if ∑_n=1^∞ψ(q_n)<∞,
1 if ∑_n=1^∞ψ(q_n)=∞.
No monotonicity assumption on ψ is necessary in Corollary <ref>.
§.§ Fourier dimension and Salem set
The Fourier dimension of a Borel set E⊆ℝ is defined by
_ FE=sup{s∈[0,1]∃μ∈𝒫(E) such that |μ(Θ)|≤ C_s(1+|Θ|)^-s/2}.
Here and elsewhere 𝒫(E) denotes the set of Borel probability measure on ℝ whose support is contained in E. Hence, by the definition of Fourier dimension and condition (<ref>), we obtain that:
If _ FE>0, then E must contain normal numbers.
A classical construction of a measure with polynomial Fourier decay by Kaufman <cit.>, later updated by Queffélec-Ramaré <cit.>, shows that the set of badly approximable numbers has positive Fourier dimension. Chow et al. <cit.> further developed the ideas from <cit.> and established an inhomogeneous variant of Kaufman's measure. Recently,
Fraser-Wheeler <cit.> proved that the set of exact approximation order has positive Fourier dimension. Some Fourier dimension estimates of sets of well-approximable matrices are given in <cit.>.
Fourier dimension is closely related to Hausdorff dimension. Indeed, Frostman's lemma <cit.> states that the Hausdorff dimension of a Borel set E⊆ℝ is
_ HE=sup{s∈[0,1]∃μ∈𝒫(E) such that ∫ |μ(Θ)|^2|Θ|^s-1 dΘ<∞}.
It follows that
_ FE≤_ HE
for every Borel set E⊆ℝ. See <cit.> for more information.
Generally the Hausdorff and Fourier dimensions of a set are distinct. For example, every (n-1)-dimensional hyperplane in ℝ^n has Hausdorff dimension n-1 and Fourier dimension 0. The Cantor middle-thirds set has Hausdorff dimension log2/log3 and Fourier dimension 0.
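For the Cantor set, a standard computation for its natural self-similar measure gives |μ(t)|=∏_k≥1|cos(2π t/3^k)|, and this product does not decay along t=3^n; the short sketch below evaluates the (truncated) product and makes the absence of decay visible.

import math

def cantor_fourier_modulus(t, terms=60):
    """|mu_hat(t)| for the natural measure on the middle-thirds Cantor set, via the
    (truncated) product prod_{k>=1} |cos(2*pi*t / 3**k)|."""
    out = 1.0
    for k in range(1, terms + 1):
        out *= abs(math.cos(2.0 * math.pi * t / 3.0 ** k))
    return out

for n in range(6):
    print(3 ** n, cantor_fourier_modulus(3 ** n))   # the modulus stays near the same value: no decay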
A set E⊆ℝ is called a Salem set if _ FE=_ HE. Salem <cit.> proved that for every s∈[0,1] there exists a Salem set with dimension s by constructing random Cantor-type sets. Kahane <cit.> showed that for every s∈[0,n] there exists a Salem set in ℝ^n with dimension s by considering images of Brownian motion. For other random Salem sets the readers are referred to <cit.> and references therein.
We now focus on finding explicit Salem set. To the best of our knowledge, explicit Salem sets are much more rare. We list some Salem sets of explicit version:
* Kaufman <cit.> (1981), Bluhm <cit.> (1996):
If τ≥1, the well-approximable set W(0, q↦ q^-τ) is a Salem set of dimension 2/(τ+1);
* Hambrook <cit.> (2017):
Identify ℝ^2 with ℂ, and the lattice ℤ^2 as the ring of the Gaussian integers ℤ+iℤ. And thus, for q∈ℤ^2 and
x∈ℝ^2, qx can be regarded as a product of complex numbers.
If τ≥1, the set
{x∈ℝ^2 |qx-p|_∞≤ (|q|_∞)^-τ for i.m. (q, p)∈ℤ^2×ℤ^2}
is a Salem set of dimension min{4/(τ+1), 2}, where |x|_∞ denotes the max-norm of x∈ℝ^2 and `i.m.' means `infinitely many';
* Fraser-Hambrook <cit.> (2023):
Let K be a number field of degree n with ℤ_K its ring of integers. Fix an integral basis {ω_1,…,ω_n} for ℤ_K.
The mapping (q_1,…,q_n)↦∑_i=1^nq_iω_i establishes an identification between ℚ^n and K, as well as
ℤ^n and ℤ_K. When τ≥1, the set
{x∈ℝ^n|x-p/q|_2≤ (|q|_2)^-τ-1 for i.m. (p, q)∈ℤ^n×ℤ^n}
is a Salem set of dimension 2n/(τ+1), where |x|_2 denotes the 2-norm of
x∈ℝ^n for n∈.
For τ≥1, let Φ(q)=q^τ-1. Then 𝒦(3Φ) and 𝒢(3Φ)
are Salem sets of dimension 2/(τ+1), where 𝒦(3Φ) and 𝒢(3Φ) are defined as (<ref>) and (<ref>) respectively.
By (<ref>), (<ref>) and the fact 𝒦(3Φ)⊂𝒢(3Φ), we have
_ FW(0, q↦ q^-τ)≤_ F𝒦(3Φ)≤_ F𝒢(3Φ)≤_ HW(0, q↦ q^-τ).
We complete the proof by recalling W(0, q↦ q^-τ) is a Salem set.
By Theorem 1.2 in <cit.>, we remark that if τ is sufficiently close to 1, then both 𝒦(3Φ) and 𝒢(3Φ) contain non-trivial 3-term arithmetic progressions.
Corollary <ref> motivates the following questions:
* The sets 𝒦(3Φ) and 𝒢(3Φ) must contain normal numbers. Does the difference 𝒢(3Φ) \𝒦(3Φ) contain normal numbers?
* When Φ(q)=q^τ-1, is 𝒢(3Φ) \𝒦(3Φ) a Salem set?
* For a general function Φ, are 𝒦(3Φ), 𝒢(3Φ) or 𝒢(3Φ) \𝒦(3Φ) Salem?
We obtain a partial result.
Let Φ [1,∞) →ℝ^+ be a non-decreasing function satisfying
lim inf_q→∞logΦ(q)/log q=:τ <(√(73)-3)/8.
Then 𝒢(3Φ) \𝒦(3Φ) has positive Fourier dimension.
In the proof of Theorem <ref>,
a similar argument to that in subsection <ref> can be applied to extend the main result, Theorem 1.2 in <cit.>, by relaxing the constraints on the approximation function Φ. To be accurate, assuming that the function Φ satisfies the conditions in Theorem <ref>, we have that the
set of exact approximation order
Exact(Φ)={x∈[0,1)lim sup_n→∞log a_n+1(x)/logΦ(q_n(x))=1}
is of positive Fourier dimension.
§ PRELIMINARIES
§.§ Continued fractions
This subsection is devoted to recalling some elementary properties of continued fractions. For more information, the readers are referred to <cit.>.
For an irrational number x∈[0,1) with continued fraction expansion (<ref>), the sequences
{p_n(x)}_n≥ 0, {q_n(x)}_n≥0 satisfy the following recursive relations <cit.>:
p_n+1(x)=a_n+1(x)p_n(x)+p_n-1(x), q_n+1(x)=a_n+1(x)q_n(x)+q_n-1(x),
with the conventions that (p_-1(x),q_-1(x))=(1,0), (p_0(x),q_0(x))=(0,1).
Clearly, q_n(x) is determined by a_1(x),…,a_n(x), so we also write
q_n(a_1(x),…,a_n(x))
instead of q_n(x). We write a_n and q_n in place of a_n(x) and q_n(x) when no confusion can arise.
For (a_1,…,a_n)∈ℕ^n, we have
(1) q_n≥ 2^n-1/2, and ∏_k=1^na_k≤ q_n≤∏_k=1^n(a_k+1).
(2) For k≥ 1,
1≤q_n+k(a_1,…,a_n,a_n+1,…,a_n+k)/q_n(a_1,…,a_n)q_k(a_n+1,…,a_n+k)≤ 2.
A basic cylinder of order n is a set of the form
I_n(a_1,…,a_n):={x∈[0,1) a_k(x)=a_k, 1≤ k≤ n};
the basic cylinder of order n containing x will be denoted by I_n(x), i.e.,
I_n(x)=I_n(a_1(x),…,a_n(x)).
For (a_1,…,a_n)∈ℕ^n, we have
1/2q_n^2≤|I_n(a_1,…,a_n)|=1/q_n(q_n+q_n+1)≤1/q_n^2.
The next lemma describes the distribution of basic cylinder I_n+1 of order n+1 inside an n-th basic interval I_n.
Let I_n(a_1,…,a_n) be a basic cylinder of order n, which is partitioned into sub-intervals I_n+1(a_1,…,a_n,a_n+1) with a_n+1∈ℕ.
When n is odd (resp. even), these sub-intervals are positioned from left to right (resp. from right to left), as a_n+1 increases.
We conclude this subsection by citing a dimensional result on continued fractions.
Let Bad(N) be the set consisting of all points in [0,1) whose partial quotients are not greater than N, i.e.,
Bad(N)={x∈[0,1) 1≤ a_n(x)≤ N for n≥ 1}.
For N≥ 8,
1-1/(Nlog 2)≤_ HBad(N)≤ 1-1/(8Nlog N).
§.§ Oscillatory integrals
In this subsection, we cite three lemmas on the oscillatory integrals for later use.
The first van der Corput-type inequality is useful for nonstationary phases.
If f is a C^2 function on [0,1], and satisfies that |f'(x)|≥ A and |f”(x)|≤ B, then
|∫_0^1e(f(x))x̣|≤1/A+B/A^2.
The second van der Corput-type inequality applies when f'(x) vanishes at some points in [0,1] in question.
If f is C^2 on [0,1], and f'(x)=(C_1x+C_2)g(x), where g satisfies |g(x)|≥ A
and |g'(x)|≤ B with B≥ A, then we have
|∫_0^1e(f(x))x̣|≤6B/A^3/2C_1^1/2.
The last is a comparison lemma which enables us to compare an integral with respect to a general measure to an integral with respect to Lebesgue measure. The original ideas for the proofs date back to Kaufman <cit.> (see also <cit.>).
Let F be a C^2 function on [0,1] satisfying |F(x)|≤ 1 and |F'(x)|≤ M, and write m_2=∫_0^1|F(x)|^2x̣. Let μ be a Borel probability measure on [0,1], and let Λ(h)
be the maximum μ-measure of all intervals [t,t+h] of length h. Then we have for all r>0,
∫_0^1|F(x)|μ̣(x)≤ 2r+Λ(r/M)·(1+m_2M/r^3).
§ ESTABLISHING THEOREM <REF>
Before presenting the proof of Theorem <ref>, we cite a quantitative version of the Borel-Cantelli lemma.
Let (X,ℬ,μ) be a probability space, let (f_n(x))_n∈ be a sequence of non-negative μ-measurable functions defined
on X, and let (f_n)_n∈, (ϕ_n)_n∈ be sequences of reals such that
0≤ f_n≤ϕ_n (n=1,2,…).
Suppose that for arbitrary a, b∈ with a<b, we have
∫_X(∑_n=a^b(f_n(x)-f_n))^2μ̣(x)≤ C∑_n=a^bϕ_n
for an absolute constant C>0. Then, for any given ε>0,
∑_n=1^Nf_n(x)=∑_n=1^Nf_n+O(Ψ(N)^1/2(logΨ(N))^3/2+ε+max_1≤ n≤ Nf_n)
for μ-almost all x∈ X, where Ψ(N)=∑_n=1^Nϕ_n.
The rest of this section is devoted to proving Theorem <ref>. We first fix some notation.
Let X=[0,1], f_n(x)=𝕀_E_q_n^γ(x) and f_n=2ψ(q_n), where 𝕀_E_q_n^γ is the indicator function of the set
E_q_n^γ:={x∈[0,1] ||q_nx-γ||≤ψ(q_n)}.
Hence
R(x,N)=∑_n=1^Nf_n(x).
Furthermore, it is readily verified for a, b∈ with a<b that
(∑_n=a^b(f_n(x)-f_n))^2=∑_n=a^bf_n(x)+2∑_a≤ m<n≤ bf_m(x)f_n(x)+
(∑_n=a^bf_n)^2-2∑_n=a^bf_n·∑_n=a^bf_n(x),
and thus
∫_0^1(∑_n=a^b(f_n(x)-f_n))^2μ̣(x)= ∑_n=a^bμ(E_q_n^γ)
+2∑_a≤ m<n≤ bμ(E_q_n^γ∩ E_q_m^γ)
-4∑_n=a^bψ(q_n)(∑_n=a^bμ(E_q_n^γ)-∑_n=a^bψ(q_n)).
§.§ Estimation of ∑_n=a^bμ(E_q_n^γ)
We set
𝕀_ψ(q_n), γ(x)=
1 if |x-γ|≤ψ(q_n),
0 otherwise,
i.e., 𝕀_ψ(q_n), γ(x) is the indicator function of the interval
[l_n, r_n] with l_n=γ-ψ(q_n) and r_n=γ+ψ(q_n). Hence f_n(x)=∑_k∈ℤ𝕀_ψ(q_n), γ(q_nx+k).
We require an approximation function g(x) to 𝕀_ψ(q_n), γ(x) such that g(x) vanishes outside some interval,
say [-n^3,n^3], and thus we also have
∑_k=-n^3^n^3g(k)e(kx)
as an approximation to 𝕀_ψ(q_n), γ(x).
We begin the construction of the approximation by writing
𝕀^∗(x)= 1 if x≥0,
-1 if x<0.
Hence 𝕀_ψ(q_n), γ(x)=1/2[𝕀^∗(H(x-l_n))+
𝕀^∗(H(r_n-x))]
for any H>0. Beurling and Selberg <cit.> showed that the function
F(z)=[sinπ z/π]^2[∑_n=0^∞(z-n)^-2-∑_n=1^∞(z+n)^-2+2/z]
satisfies that
F(x)≥𝕀^∗(x), ∫_-∞^+∞[F(x)-𝕀^∗(x)]x̣=1,
and for any α, β∈ℝ,
∫_-∞^+∞[F(x-α)+F(x-β)]e(-tx)x̣=0 when |t|≥ 1.
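For a numerical impression of this majorant, the sketch below evaluates F by truncating the two series in its definition (which is adequate away from the integers) and checks F(x)≥𝕀^∗(x) at a few sample points.

import math

def selberg_majorant(x, terms=20000):
    """Beurling-Selberg majorant F(x) of the sign function, with both series truncated."""
    s = (sum((x - n) ** -2 for n in range(0, terms))
         - sum((x + n) ** -2 for n in range(1, terms))
         + 2.0 / x)
    return (math.sin(math.pi * x) / math.pi) ** 2 * s

for x in (-2.3, -0.4, 0.3, 1.7):
    print(x, selberg_majorant(x), math.copysign(1.0, x))   # F(x) should exceed sgn(x)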
We define the desired functions as
g_1(x) = 1/2[F(n^3(x-l_n))+ F(n^3(r_n-x))],
g_2(x) =-1/2[F(n^3(l_n-x))+ F(n^3(x-r_n))].
We list some properties of g_i(x) (i=1, 2); the reader may find more details in Lemmas 2.3-2.5 of the book <cit.>.
* g_i(x)∈ L^1(ℝ);
* g_2(x)≤𝕀_ψ(q_n), γ(x)≤ g_1(x);
* g_i(m)=0 for |m|> n^3;
* ∫_-∞^+∞ |g_i(x)-𝕀_ψ(q_n), γ(x)|x̣=1/n^3.
These properties together with Poisson summation Formula yield that
∑_k=-n^3^n^3g_2(k)e(kq_nx)≤ f_n(x)=∑_k∈ℤ𝕀_ψ(q_n), γ(q_nx+k)≤∑_k=-n^3^n^3g_1(k)e(kq_nx),
g_1(0)=2ψ(q_n)+1/n^3,
g_2(0)=2ψ(q_n)-1/n^3.
Moreover, for k∈ℤ∖{0}, we estimate
|g_i(k)|
=|∫_-∞^+∞𝕀_ψ(q_n), γ(x)e(-kx)x̣+
∫_-∞^+∞[g_i(x)-𝕀_ψ(q_n), γ(x)]e(-kx)x̣|
≤min{1/|k|, 2ψ(q_n)}+1/n^3,
and thus
∑_n=a^bμ(E_q_n^γ)
=∑_n=a^b∫_0^1f_n(x)μ̣(x)
≤∑_n=a^b∑_k=-n^3^n^3∫_0^1g_1(k)e(kq_nx)μ̣(x)
=∑_n=a^b[2ψ(q_n)+1/n^3]+ ∑_n=a^b∑_-n^3≤ k≤ n^3, k≠ 0g_1(k)μ(-kq_n)
=2∑_n=a^bψ(q_n)+O(∑_n=a^b(log q_n)^-A∑_1≤ k≤ n^3(1/k+1/n^3))
(<ref>)=2∑_n=a^bψ(q_n)+O(1).
Arguing in a similar way (with g_2 instead of g_1), we obtain the reversed inequality, and thus we reach that
∑_n=a^bμ(E_q_n^γ)=2∑_n=a^bψ(q_n)+O(1).
§.§ Estimation of ∑_n=a^bμ(E_q_n^γ∩ E_q_m^γ)
For m, n∈ with m<n, we have
μ(E_q_n^γ∩ E_q_m^γ)
= ∫_0^1f_n(x)f_m(x)μ̣(x)
≤∑_-n^3≤ s,t≤ n^3g_1(s)g_1(t)μ(-(sq_n+tq_m)).
We will divide the above summation into four parts.
Case 1 s=t=0. We have
g_1(0)g_1(0)μ(0)
=4ψ(q_n)ψ(q_m)+2ψ(q_n)/n^3+2ψ(q_m)/n^3+(1/n^3)^2.
Case 2 s≠ 0, t=0. We deduce
∑_-n^3≤ s≤ n^3
s≠ 0g_1(s)g_1(0)μ(-sq_n) ≤∑_-n^3≤ s≤ n^3
s≠ 0[min(1/|s|, 2ψ(q_n))+1/n^3](2ψ(q_m)+1/n^3)μ(-sq_n)
≪(2ψ(q_m)+1/n^3)log n/n^A.
Case 3 s= 0, t ≠ 0.
Similar argument to Case 2 shows that
∑_-n^3≤ t≤ n^3
t≠ 0g_1(0)g_1(t)μ(-tq_m)
≪(2ψ(q_n)+1/n^3)log n/n^A.
Case 4 s≠ 0, t ≠ 0. We calculate
∑_-n^3≤ s≤ n^3
s≠ 0∑_-n^3≤ t≤ n^3
t≠ 0g_1(s)g_1(t)μ(-(sq_n+tq_m))
≤∑_-n^3≤ s,t≤ n^3
st≠ 0[min(1/|s|, 2ψ(q_n))+1/n^3]
[min(1/|t|, 2ψ(q_n))+1/n^3]μ(-(sq_n+tq_m)).
The latter summation is divided into two parts as follows.
(1) When s and t have the same sign, the summation over such pairs (s,t) is bounded by
∑_-n^3≤ s, t≤ n^3
st> 0(1/|s|+1/n^3)(1/|t|+1/n^3)n^-A≪ n^-A(log n)^2.
(2) When s and t have the opposite signs, we estimate the summation
by considering two subcases.
(2.1) |sq_n-tq_m|≥q_n/3n^6. The summation over such (s,t)
≪(log(q_n/3n^6))^-A∑_1≤ s,t ≤ n^3(1/s+1/n^3)(1/t+1/n^3)≪(log n)^2/(n-18log n)^A.
(2.2) |sq_n-tq_m|<q_n/3n^6.
In this case, corresponding to (m,n), there is at most one pair (s,t)=(s_0,t_0) with (s_0,t_0)=1 satisfying the condition;
any other solution (s,t) will take the form (ks_0, kt_0) with 1≤ k≤ n^3.
Since s_0≠ 0, 1/t_0≤1/s_0(q_m/q_n+1/3n^6).
Hence using the trivial bound |μ(t)|≤1, we deduce that the corresponding summation
≪∑_k=1^n^3[min(1/k, 2ψ(q_n))min(1/k(q_m/q_n+1/3n^6),2ψ(q_m))+2/kn^3+(1/n^3)^2]
≤∑_k=1^n^3min(1/k, 2ψ(q_n))min(q_m/kq_n,2ψ(q_m))+∑_k=1^n^31/3k^2n^6+6log n/n^3+1/n^3
≤∑_k≤q_m/ψ(q_m)q_n4ψ(q_n)ψ(q_m)+∑_q_m/ψ(q_m)q_n<k<1/ψ(q_m)2ψ(q_n)·q_m/kq_n+∑_k≥1/ψ(q_m)q_m/k^2q_n+12log n/n^3
≪ψ(q_n)·q_m/q_n+ψ(q_n)logq_n/q_m·q_m/q_n+ψ(q_m)·q_m/q_n+12log n/n^3
≤ 2ψ(q_n)·(q_m/q_n)^1/2+ψ(q_m)·q_m/q_n+12log n/n^3,
where the second inequality is due to the fact min{a+b, c}≤min{a, c}+min{b,c} for a, b, c>0; the last one follows from log x≤ x^1/2 for x≥ 1.
Since (q_n)_n∈ is lacunary,
∑_1≤ m<n(q_m/q_n)^1/2≪ 1. Taking this into account, we combine Cases 1-4 to obtain
∑_a≤ m<n≤ bμ(E_q_n^γ∩ E_q_m^γ)≤ 4∑_a≤ m<n≤ bψ(q_m)ψ(q_n)+O(∑_n=a^bψ(q_n)).
§.§ Conclusion
Therefore we have that
r.h.s of (<ref>) ≤ ∑_n=a^bμ(E_q_n^γ)+(∑_n=a^bμ(E_q_n^γ))^2
-4∑_n=a^bψ(q_n)(∑_n=a^bμ(E_q_n^γ)-∑_n=a^bψ(q_n))+O(∑_n=a^bψ(q_n))
= O(∑_n=a^bψ(q_n))=O(∑_n=a^bf_n).
Applying Lemma <ref> with ϕ_n=f_n, we complete the proof.
§ PROOF OF THEOREM <REF>
In this section, we will demonstrate Theorem <ref>. Note that Fraser-Wheeler <cit.> presented a detailed Fourier dimension analysis of the set of real numbers with the partial quotient in their continued fraction expansion growing at a certain rate. In our case, we study the behavior of the product of consecutive partial quotients.
We will first prove the theorem for a specific approximating function Φ(q)=1/3q^τ with τ>0, and then extend the result to a general approximating function.
§.§ An analogue of Kaufman's measure
Take Φ(q)=1/3q^τ. We are now in position to construct the probability measure μ supported on the objective set
𝒢(3Φ) \𝒦(3Φ) in Theorem <ref>.
Let 0<ε<min{1/100τ,1/8} be fixed. By Lemma <ref>, we choose N so large that dim_H Bad(N)>1-ε.
Writing
Σ_m:=∑_a_1=1^N⋯∑_a_m=1^Nq_m(a_1,…,a_m)^-2(1-ε),
we have that
Σ_m→∞ as m→∞; we may take m sufficiently large so that m≥log N/ε+1 and Σ_m≥ 2^10.
A coding space Θ^ℕ with Θ={1,…,N} underlies the Cantor set Bad(N), and thus an irrational number x in Bad(N) is associated with its continued fraction expansion [a_1(x), a_2(x),…] as an element in the symbolic space Θ^ℕ.
By regrouping the digits we may regard Θ^ℕ, or equivalently Bad(N), as (Θ^m)^ℕ constructed by m-blocks in Θ^m :={1,…,N}^m.
We define a probability measure λ_m on Θ^m
by setting
λ_m((a_1,…,a_m))=q_m(a_1,…,a_m)^-2(1-ε)/Σ_m ((a_1,…,a_m)∈Θ^m ).
We take σ_m so that mσ_m is the mean of log q_m(a_1,…,a_m)
with respect to this measure, that is,
mσ_m=∑_(a_1,…,a_m)∈Θ^m log q_m(a_1,…,a_m) λ_m((a_1,…,a_m)).
By Lemma <ref>(1), we have
log q_m(a_1,…,a_m)≥log q_m(1,…,1)≥ (m-1)log√(2), and
mσ_m≥ (m-1)log√(2).
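As a purely illustrative aside (not part of the proof), the continuants q_m(a_1,…,a_m) can be evaluated with the standard recurrence q_j=a_j q_{j-1}+q_{j-2} (with q_{-1}=0, q_0=1), which makes the lower bound above and the normalizing sum Σ_m easy to check numerically; N, m and ε below are arbitrary small illustrative values.

# Illustration only: continuants via q_j = a_j q_{j-1} + q_{j-2}, the bound
# log q_m(1,...,1) >= (m-1) log sqrt(2), and the normalizing sum Sigma_m.
from itertools import product
from math import log, sqrt

def continuant(a):
    # denominator q_m(a_1, ..., a_m) of the continued fraction [a_1, ..., a_m]
    q_prev, q_cur = 0, 1
    for partial_quotient in a:
        q_prev, q_cur = q_cur, partial_quotient * q_cur + q_prev
    return q_cur

N, m, eps = 4, 6, 0.1                       # illustrative choices, not the proof's values
assert log(continuant([1] * m)) >= (m - 1) * log(sqrt(2))

sigma_m = sum(continuant(a) ** (-2 * (1 - eps))
              for a in product(range(1, N + 1), repeat=m))
weights = {a: continuant(a) ** (-2 * (1 - eps)) / sigma_m    # the measure lambda_m
           for a in product(range(1, N + 1), repeat=m)}
print(sigma_m, sum(weights.values()))       # the weights sum to 1 by construction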
Denote
Y_1=q_m(a_1,…,a_m), Y_2=q_m(a_m+1,…,a_2m), ….
Hence (Y_j)_j=1^∞ forms a sequence of independent and identically distributed random variables
over the space
(Θ^m)^ℕ
endowed with the probability measure
λ_m×λ_m×⋯.
By the weak law of large numbers, there exists j_0≥1 such that the set ℰ=ℰ(j_0)⊆ (Θ^m)^j_0
on which
|log Y_1+⋯+log Y_j_0/j_0-𝔼(log Y_1)| ≤ε𝔼(log Y_1)
is of measure
λ_m×⋯×λ_m_j_0(ℰ)>1/2.
Fix such a j_0 and set p=j_0m,
ν_p=
λ_m×⋯×λ_m (j_0 times).
We deduce from (<ref>) that, if l≥ 1 and (a_1,…,a_lp)∈ℰ^l, then
|log q_lp(a_1,…,a_lp)-lpσ_m|
≤ |log q_lp-∑_i=0^l-1log q_p(a_ip+1,…,a_(i+1)p )|+|∑_i=0^l-1log q_p(a_ip+1,…,a_(i+1)p)-lpσ_m|
≤ llog 2+∑_i=0^l-1|log q_p(a_ip+1,…,a_(i+1)p)-∑_j=1^j_0-1log q_m(a_ip+jm+1,…,a_ip+(j+1)m)|
+∑_i=0^l-1|∑_j=0^j_0-1log q_m(a_ip+jm+1,…,a_ip+(j+1)m)-pσ_m|
≤ 2lj_0log 2+ε lpσ_m=(2log 2/mσ_m+ε)lpσ_m<2ε lpσ_m
and consequently
exp(lpσ_m(1-2ε)) ≤ q_lp(a_1,…,a_lp)≤exp(lpσ_m(1+2ε)) .
We write ν̅_p to be the measure induced by ν_p on ℰ, i.e.,
ν̅_p(A)=ν_p(A∩ℰ)/ν_p(ℰ), and (ν̅_p)^n denotes the product measure ν̅_p×⋯×ν̅_p (n times).
Let us fix some notation:
we always use boldface letter (e.g. a) to denote a p-tuple in ℰ. And thus, G=(a,c,4, b), for example, denotes
a finite word consisting of 2p+2 digits which is the concatenation of the p-tuple a, the digits c, 4 and the tuple b. Moreover, if there is no risk of confusion, we will drop the length-indicating subscript 2p+2 in p_2p+2( G), q_2p+2( G) and I_2p+2( G).
For finite words G, H, we write ( G,H) or G· H
for the concatenation of G and H.
We have the following estimates on the measure (ν̅_p)^n; a detailed proof can be found in <cit.> (Lemmas 11.3 & 11.7).
For G=(a_1,…,a_n)∈ supp(ν̅_p)^n, we have
|I(G)|^1+2ε≤ (ν̅_p)^n(I(G))≤ |I(G)|^1-ε.
§.§ Construction of Cantor-like subset
Choose a rapidly increasing sequence of integers {n_k}_k=1^∞ such that for all k,
n_k+1ε≥12klog 2+(1+τ)[k(k+1)+k(n_1+⋯+n_k)].
Denote by E the set of
x=[a_1,a_2,…,a_n_1,4,c_1,a_n_1+1,a_n_1+2,
…,a_n_2,4,c_2,…]
such that each a_j:=(a_jp+1,…,a_(j+1)p) belongs to ℰ, and each integer c_k satisfies
1/4q(a_1,…,a_n_k,4)^τ≤ c_k≤1/2q(a_1,…,a_n_k,4)^τ.
By the construction of E, we directly show that
E⊆𝒢(3Φ) \𝒦(3Φ).
To study the Cantor structure of the set E, we define
Ω_n={(a_1,…,a_n)∈ℕ^n : E∩ I_n(a_1,…, a_n)≠∅}.
Then
E=⋂_n=1^∞⋃_(a_1,…,a_n)∈Ω_nI_n(a_1,…,a_n).
An element (a_1,…,a_n) of Ω_n is called an admissible word,
and the corresponding cylinder I_n(a_1,…, a_n) is called an admissible cylinder.
Let G=(a_1,…,a_n_1, 4, c_1, …, a_n_k,4, c_k,…,a_n) be an admissible word
of length np+2k, where
n_k< n≤ n_k+1. Then we have
|log q(G)-(n+τ n_k)pσ_m|≤ 4ε npσ_m.
Write G_1=(a_1,…,a_n_1, 4, c_1, …, a_n_k-1,4, c_k-1,…,a_n_k).
We eliminate all the terms (4,c_i) with i∈{1,…,k} in G and denote the resulting word by G'; we eliminate all (4,c_i) with i∈{1,…,k-1} in
G_1 to obtain G'_1.
By Lemma <ref>(2), we have that
|log q(G)-(n+τ n_k)pσ_m|
≤ |log q(G')-npσ_m|+∑_j=1^k-1log c_j+|log c_k-τ n_kpσ_m|+6klog 2
≤ 2ε npσ_m+(1+τ)∑_j=1^k-1log c_j+τ|log q(G'_1)-τ n_kpσ_m|+12klog 2
≤ (τ+2)ε npσ_m+(1+τ)∑_j=1^k-1log c_j+12klog 2≤ 4ε npσ_m,
where the last inequality follows by the definitions of {n_k} and c_j.
§.§ Mass distribution on E
We define a measure μ on E via assigning mass among the admissible cylinders:
for 0< n≤ n_1,
μ(I(a_1,…,a_n))
=ν̅_p(I(a_1))×⋯×ν̅_p(I(a_n)),
μ(I(a_1,…,a_n_1,4))
=μ(I(a_1,…,a_n_1)),
μ(I(a_1,…,a_n_1,4,c_1))
=μ(I(a_1,…,a_n_1,4))×4/q(a_1,…,a_n_1,4)^τ.
And inductively, for n_k< n≤ n_k+1 with k≥ 1, we define
μ (I(a_1,…,a_n))
=μ(I(a_1,…,a_n_k,4,c_k))×ν̅_p(I(a_n_k+1))×⋯×ν̅_p(I(a_n)),
μ (I(a_1,…,a_n_k+1,4))
=μ(I(a_1,…,a_n_k+1)),
μ (I(a_1,…,a_n_k+1,4,c_k+1))
=μ(I(a_1,…,a_n_k+1,4))×4/q(a_1,…,a_n_k+1,4)^τ.
Due to the consistency property, we extend μ to a measure, still denoted by μ, on all Borel sets.
§.§ Hölder exponent of μ
In this subsection, we derive some key geometric properties of μ.
We first estimate the μ-measure of admissible cylinders.
For k≥ 1, we have the following estimates
(1) If G=(a_1,…,a_n_k) or
G=(a_1,…,a_n_k,4),
then
|I(G)|^1+O(ε)≤μ(I(G))≤ |I(G)|^1-O(ε).
(2) If G=(a_1,…,a_n_k,4,c_k), then
|I(G)|^τ+2/2τ+2+O(ε)≤μ(I(G))≤ |I(G)|^τ+2/2τ+2-O(ε).
(3) If G=(a_1,…,a_n) with n_k<n< n_k+1,
then
|I(G)|^1+O(ε)≤μ(I(G))≤ |I(G)|^τ+2/2τ+2-O(ε).
We proceed by induction.
Case 1 G=(a_1,…,a_n_1). By Lemma <ref>, we have
|I(G)|^1+2ε≤μ(I(G))≤ |I(G)|^1-ε,
which yields that
|I(G,4)|^1+O(ε)≤μ(I(G,4))≤ |I(G,4)|^1-O(ε).
Case 2 G=(a_1,…,a_n_k,4,c_k) with k≥ 1. The induction hypothesis gives
|I(a_1,…,a_n_k,4)|^1+O(ε)≤μ(I(a_1,…,a_n_k,4))≤ |I(a_1,…,a_n_k,4)|^1-O(ε).
Then by the definition of measure μ, we have
μ(I(G)) =μ(I(a_1,…,a_n_k,4))·4/q(a_1,…,a_n_k,4)^τ
≤ |I(a_1,…,a_n_k,4)|^1-O(ε)·4/q(a_1,…,a_n_k,4)^τ
≤1/q(a_1,…,a_n_k,4)^2(1-O(ε))·4/q(a_1,…,a_n_k,4)^τ
≤(1/q(a_1,…,a_n_k,4)^2τ+2)^τ+2/2τ+2-O(ε)
≤(1/2q(a_1,…,a_n_k,4,c_k)^2)^τ+2/2τ+2-O(ε)≤ |I(G)|^τ+2/2τ+2-O(ε).
Similar calculations show that
μ(I(G)) ≥ |I(a_1,…,a_n_k,4)|^1+O(ε)·1/q(a_1,a_2,…,a_n_k,4)^τ
≥1/q(a_1,a_2,…,a_n_k,4)^2(1+O(ε))·1/q(a_1,a_2,…,a_n_k,4)^τ
≥(1/q(a_1,a_2,…,a_n_k,4)^2τ+2)^τ+2/2τ+2+O(ε)
≥(1/q(a_1,a_2,…,a_n_k,4,c_k)^2)^τ+2/2τ+2+O(ε)≥ |I(G)|^τ+2/2τ+2+O(ε).
Case 3 G=(a_1,…,a_n_k,4,c_k, a_n_k+1,…,a_n) with n_k<n<n_k+1. We deduce
μ(I(G)) =μ(I(a_1,…,a_n_k,4,c_k))ν̅_p(a_n_k+1)×⋯×ν̅_p(a_n)
≤|I(a_1,…,a_n_k,4,c_k)|^τ+2/2τ+2-O(ε)|I(a_n_k+1,…,a_n)|^1-ε
≤(|I(a_1,…,a_n_k,4,c_k)|·|I(a_n_k+1,…,a_n)|)^τ+2/2τ+2-O(ε)
≤ |I(G)|^τ+2/2τ+2-O(ε).
On the other hand, we have
μ(I(G)) ≥|I(a_1,…,a_n_k,4,c_k)|^τ+2/2τ+2+O(ε)|I(a_n_k+1,…,a_n)|^1+2ε
≥(|I(a_1,…,a_n_k,4,c_k)|·|I(a_n_k+1,…,a_n)|)^1+O(ε)
≥ |I(G)|^1+O(ε).
Case 4 G=(a_1,…,a_n_k,4,c_k, a_n_k+1,…,a_n_k+1).
We deduce
μ(I(G)) =μ(I(a_1,…,a_n_k,4,c_k))ν̅_p(a_n_k+1)×⋯×ν̅_p(a_n_k+1)
≤|I(a_1,…,a_n_k,4,c_k)|^τ+2/2τ+2-O(ε)|I(a_n_k+1,…,a_n_k+1)|^1-ε
=(|I(a_1,…,a_n_k,4,c_k)| |I(a_n_k+1,…,a_n_k+1)|)^1-O(ε)|I(a_1,…,a_n_k,4,c_k)|^-τ/2τ+2
≤ |I(G)|^1-O(ε),
where the last inequality holds by |I(G)|/|I(a_1,…,a_n_k,4,c_k)| → 0. On the other hand,
μ(I(G)) ≥|I(a_1,…,a_n_k,4,c_k)|^τ+2/2τ+2+O(ε)|I(a_n_k+1,…,a_n_k+1)|^1+ε
≥(|I(a_1,…,a_n_k,4,c_k)|· |I(a_n_k+1,…,a_n_k+1)|)^1+O(ε)
≥ |I(G)|^1+O(ε).
We are now in a position to establish the Hölder exponent of the measure μ on balls.
Let I be an interval of length h. For sufficiently small h, we have
μ(I)≤ h^2/τ+2-O(ε).
If the interval I intersects the support of μ, there is a longest admissible word
G such that I∩supp(μ)⊂ I(G). The
proof falls naturally into three parts according to the form of G.
Case 1 G=(a_1,…,a_n) for some n≠ n_k.
Since G is the longest admissible word such that I∩supp(μ)⊂ I(G), it follows in combination with Lemma <ref> that h must be at least the minimum of the distances between I(G, a) and I(G, b) as a≠b run through the set supp (ν̅_p). Further, the gap between I(G, a) and I(G, b) is at least an (N, p)-dependent constant times |I(G)|. Thus, by Lemma <ref>, we have
μ(I)≤μ(I(G))≤ |I(G)|^τ+2/2τ+2-O(ε)≤ h^2/τ+2-O(ε).
Case 2 G=(a_1,…,a_n_k) or
G=(a_1,…,a_n_k,4).
Since μ(I(a_1,…,a_n_k))=μ(I(a_1,…,a_n_k,4))
and the lengths of two cylinders I(a_1,…,a_n_k) and I(a_1,…,a_n_k,4) are comparable,
we need only deal with the case G=(a_1,…,a_n_k).
We consider two subcases according to the size of h.
Case 2-1 h>|I(G)|^τ+2/2. By Lemma
<ref>, we obtain
μ(I)≤μ(I(G))≤ |I(G)|^1-O(ε)≤ h^2/τ+2-O(ε).
Case 2-2
h≤ |I(G)|^τ+2/2.
In this case, since
|I(a_1,…,a_n_k,4,c_k)|≥1/2q(a_1,…,a_n_k,4,c_k)^2≥1/32q(G)^2τ+2,
there are at most
32hq(G)^2τ+2+2≤64hq(G)^2τ+2
number of cylinders of the form (G,4,c_k) intersecting I, and thus
μ(I) ≤ 64hq(G)^2τ+2μ(I(G,4,c_k))
≤ 64hq(G)^2τ+2μ(I(G,4))·1/q(G,4)^τ
≤ hq(G)^2τ+2(1/q(G))^2-O(ε)1/q(G)^τ
=hq(G)^τ-O(ε)≤ h^2/τ+2-O(ε),
where the penultimate inequality holds by Lemma <ref>; the last one follows by
h≤ |I(G)|^τ+2/2≤ q(G)^-(τ+2).
Case 3 G=(a_1,…,a_n_k,4,c_k).
Analysis similar to that in Case 1 shows that
μ(I)≤ h^2/τ+2-O(ε).
§.§ Geometry of relative measures
In this subsection, we define the relative measure μ_G, and derive some key geometric properties of μ_G.
Let G=(a_1,…,a_n) be an admissible word.
We define the relative measure μ_G as
μ_G(I(H))=μ(I(G·H))/μ(I(G)),
where H is a finite word such that the concatenation G·H is admissible.
Let ζ>1. If there exists k∈ such that
(1-4ε)n_kpσ_m<logζ<(τ+1+4ε)n_kpσ_m,
we call
ζ an exceptional scale.
If
(τ+1+4ε)n_kpσ_m≤logζ≤ (1-4ε)n_k+1pσ_m
for some k∈, we call ζ a typical scale.
For a sufficiently large ξ and some α∈(0,1/3), we write ζ=|ξ|^α.
When ζ is a typical scale, we define
n(ζ):=⌊logζ-τ n_kpσ_m/pσ_m⌋.
For an admissible word G=(a_1,…,a_n_k,4,c_k,a_n_k+1,…,a_n(ζ)),
we have
(1) q(G)∈ [|ξ|^α-4ε,|ξ|^α+4ε] and |I(G)|∈[1/2|ξ|^-2α-8ε,|ξ|^-2α+8ε].
(2) Let I be an interval of length
|I|=|ξ|^-1+2α-O(ε).
Then
μ_G(I)≪ |I|^2/τ+2-2α/1-2α-O(ε).
(1) It follows by Lemmas <ref>, <ref> and the definition of n(ζ).
(2) Let π be the projection sending a finite or infinite sequence (a_1,a_2,…) to the corresponding continued fraction [a_1,a_2,…]. We have
μ_G(I) =μ(π(G·π^-1(I)))/μ(I(G))≤ |I(G)|^-1-O(ε)μ(π(G·π^-1(I)))
≤ |ξ|^2α+O(ε)μ(π(G·π^-1(I))),
where the first inequality holds by Lemma <ref>, the last one follows by Lemma <ref>(1).
By Lemma <ref>, we deduce that
μ(π(G·π^-1(I))) ≤ |π(G·π^-1(I))|^2/τ+2-O(ε)
≤sup_x_1,x_2∈ I|p(G)x_1+p'(G)/q(G)x_1+q'(G)-p(G)x_2+p'(G)/q(G)x_2+q'(G)|^2/τ+2-O(ε)
≤|N^2q'(G)^-2|I||^2/τ+2-O(ε)
≤(|ξ|^-2α+O(ε)|ξ|^-1+2α-O(ε))^2/τ+2-O(ε)≤ |ξ|^-2/τ+2+O(ε),
here and hereafter, q'(G) denotes the penultimate denominator, that is, q'(G)=q_n-1(a_1,…,a_n-1) if
G=(a_1,…,a_n); and similarly for p'.
Hence, we have
μ_G(I)≤ |ξ|^2α+O(ε)· |ξ|^-2/τ+2+O(ε)≤ |ξ|^2α-2/τ+2+O(ε)≤ |I|^2/τ+2-2α/1-2α-O(ε).
Fix a sufficiently large ξ. Suppose that ξ^α is an exceptional scale satisfying (1-4ε)n_kpσ_m<log |ξ|^α<(τ+1+4ε)n_kpσ_m. Write
α'=(τ+1+10ε)α and ζ=|ξ|^α'. Then ζ is a typical scale. Furthermore, let I be an interval of length
|I|=|ξ|^-1+2α'-O(ε).
Then for any admissible word G=(a_1,…,a_n_k,c_k,a_n_k+1,…,a_n(ζ)),
we have
μ_G(I)≤ |I|^1-O(ε).
We readily check that
|ξ|^α' is a typical scale.
Write ζ_1=ξ^1-2α'/2 and ñ(ζ_1)=⌊logζ_1/pσ_m⌋. Set
Ω:={H=(b_1,…,b_ñ(ζ_1))∈ (supp(ν̅_p))^ñ(ζ_1) : I(G·H)∩π(G·π^-1(I))≠∅}.
Since n_k< ε n_k+1, we have n(ζ)+ñ(ζ_1)<logζ+logζ_1/pσ_m<n_k+1. Writing ♯Ω for the cardinality of Ω, we have
μ_G(I)≤♯Ω·μ (I(G·H))/μ(I(G))=♯Ω· (ν̅_p)^ñ(ζ_1)(I(H)).
By (<ref>), for any H∈Ω, we have
|I(H)|≤ζ_1^-2+O(ε)=|ξ|^-1+2α'+O(ε)≤ |I|^1-O(ε),
|I(H)|≥ζ_1^-2-O(ε)=|ξ|^-1+2α'-O(ε)≥ |I|^1+O(ε).
Thus
♯Ω≤|I|/ |I|^1+O(ε)≤|I|^-O(ε).
Combining with Lemma <ref>, we have
μ_G(I)≤ |I|^-O(ε)· |I(H)|^1-ε≤ |I|^1-O(ε).
§.§ Establishing Theorem <ref> for Φ(q)=1/3q^τ.
In this subsection, we deal with the asymptotic behavior of μ(ξ)
for large |ξ|. Without loss of generality, we may assume that ξ is positive.
Suppose that ζ=ξ^α is a typical scale.
Set
S(ζ):={G=(a_1,…,a_n_k,4,c_k,a_n_k+1,…,a_n(ζ)) : G is an admissible word}.
Given G∈ S(ζ), we remark that
t=[G,x]=p(G)x+p'(G)/q(G)x+q'(G).
And thus
μ(ξ)=∑_G∈ S(ζ)μ(I(G))∫ e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_G(x).
For G∈ S(ζ), by Lemma <ref>(1), we know
q'(G)∈ [ξ^α-5ε,ξ^α+5ε],
q(G)∈ [ξ^α-5ε,ξ^α+5ε].
Now we cover the square [ξ^α-5ε,ξ^α+5ε]^2 by a mesh of width ξ^α-200ε; there are at most ξ^410ε small squares which together cover it. We collect all lower-left endpoints of those squares to form a set P. For each a∈ P, we set
Θ_a={G∈ S(ζ) : q(G)∈ [a, a+ξ^α-200ε]}.
If Θ_a is nonempty, we pick a representative element G_a∈Θ_a and write μ_a for μ_G_a. We will approximate dμ_G by dμ_a.
The sum in (<ref>) can be split
into sums over the classes Θ_a, that is,
μ(ξ)
= ∑_a∈ P∑_G∈Θ_aμ(I(G))∫ e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_G(x)
= ∑_a∈ P∫∑_G∈Θ_aμ(I(G))e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_a(x)+
∑_a∈ P∑_G∈Θ_aμ(I(G))(∫ e(-ξp(G)x+p'(G)/q(G)x+q'(G)) (dμ_G(x)-dμ_a(x)))
:= S_1+S_2.
§.§.§ Estimation of S_1
We use the comparison Lemma <ref> to deduce an upper bound for
S_1=∑_a∈ P∫∑_G∈Θ_aμ(I(G))e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_a(x).
Write
F(x)=∑_G∈Θ_aμ(I(G))e(-ξp(G)x+p'(G)/q(G)x+q'(G)) .
We have
(1) max_x∈[0,1)|F'(x)|≤ξ^1-2α+11ε:=M.
(2) m_2:=∫_0^1|F(x)|^2 dx≤ξ^3α-1/2+O(ε)+ξ^-α(τ+2)/τ+1+O(ε).
(1) We directly calculate
max_x∈[0,1)|F'(x)|≤|2πξ∑_G∈Θ_aμ(I(G))1/(q(G)x+q'(G))^2|≤ξ^1-2α+11ε.
(2) We have
m_2=∑_G∈Θ_a∑_G_1∈Θ_aμ(I(G))μ(I(G_1))
∫_0^1e(f(x)) dx,
where
f(x)=-ξ(p(G)x+p'(G)/q(G)x+q'(G)-p(G_1)x+p'(G_1)/q(G_1)x+q'(G_1)).
Up to a multiplicative factor of ± 1, the phase f(x) has the derivative
f'(x)= ξ(q(G)+q(G_1))x+q'(G)+q'(G_1)/(q(G)x+q'(G))^2(q(G_1)x+q'(G_1))^2((q(G)-q(G_1))x+q'(G)-q'(G_1)),
which may be written as g(x)(C_1 x+C_2) with
g(x)=ξ(q(G)+q(G_1))x+q'(G)+q'(G_1)/(q(G)x+q'(G))^2(q(G_1)x+q'(G_1))^2,
C_1=q(G)-q(G_1), C_2=q'(G)-q'(G_1).
We continue to estimate
∫_0^1e(f(x)) dx
by discussing whether or not there exists a stationary point of f.
* q(G)=q(G_1) and q'(G)=q'(G_1).
It means that G=G_1.
Furthermore, we have
m_2≤∑_G∈Θ_a∑_G_1=Gμ(I(G))μ(I(G_1))
≤ |I(G)|^τ+2/2τ+2-O(ε)≤ξ^-α(τ+2)/τ+1+O(ε),
where the second inequality holds by Lemma <ref>.
* q(G)=q(G_1) but q'(G) ≠ q'(G_1).
The phase is non-stationary, and we have
|f'(x)|≥ξ^1-3α-O(ε)|q'(G)-q'(G_1)|,
|f”(x)|≤ξ^1-3α+O(ε)|q'(G)-q'(G_1)|.
Applying Lemma <ref>, we obtain
m_2≤∑_G∈Θ_a∑_G_1∈Θ_aμ(I(G))μ(I(G_1))ξ^-1+3α+O(ε)/|q'(G)-q'(G_1)|≤ξ^-1+3α+O(ε).
* q(G)≠ q(G_1) but q'(G) ≠ q'(G_1).
In this case, it is easy to verify that
|g(x)|≥ξ^1-3α-O(ε) and |g'(x)|≤ξ^1-3α+O(ε). By Lemma <ref>, we have
m_2≤∑_G∈Θ_a∑_G_1∈Θ_aμ(I(G))μ(I(G_1)) ξ^-1+3α+O(ε)/2 |q(G)-q(G_1)|^-1/2≤ξ^-1+3α+O(ε)/2.
Combining these estimates, we complete the proof.
Let α_0=116-13√(73)/144∈(0,1/3).
If ξ^α_0 is a typical scale, put α=α_0; otherwise, put α=α_0(τ+1+10ε). We have
S_1= O(ξ^-ε).
Choose r=ξ^-411ε. By Lemma <ref>(1),
r/M≤ξ^-1+2α-O(ε). We consider two cases:
Case 1 ξ^α_0 is a typical scale.
By Lemmas <ref>, <ref> and <ref>, we obtain
S_1 ≤∑_a∈ P[2ξ^-411ε+ξ^-(2/τ+2-2α-O(ε))(1+(ξ^3α-1/2+O(ε)+ξ^-α(τ+2)/τ+1+O(ε))ξ^1-2α+O(ε))]
≤ 2ξ^-ε+ξ^-(2/τ+2-2α-O(ε))+ξ^3α-1/2+τ/τ+2+O(ε)+ξ^τ/τ+2-α(τ+2)/τ+1+O(ε).
Therefore we will establish the desired result if the following inequalities hold:
2/τ+2-2α>0; 3α-1/2+τ/τ+2<0; τ/τ+2-α(τ+2)/τ+1<0.
Solving those inequalities for α yields
τ^2+τ/(τ+2)^2<α<2-τ/3(τ+2).
Observe that the left side of the inequality (<ref>) is an increasing function of τ,
and the right side is decreasing. Thus, to verify that
(<ref>) holds when τ< √(73)-3/8:=τ_1 and α=α_0,
it suffices to check that
α_0=τ^2_1+τ_1/(τ_1+2)^2=2-τ_1/3(τ_1+2).
Case 2 ξ^α_0 is an exceptional scale.
Let α=α_0(τ+1+10ε). It is easy to check that α∈(0,1/3), and
ξ^α is a typical scale. We use the same analysis as Case 1. By Lemmas <ref>, <ref> and <ref>, we have
S_1 ≤∑_a∈ P[2ξ^-411ε+ξ^-(1-2α-O(ε))(1+(ξ^3α-1/2+O(ε)+ξ^-α(τ+2)/τ+1+O(ε))ξ^1-2α+O(ε))]
≤ 2ξ^-ε+ξ^-(1-2α-O(ε))+ξ^(1-2α)(3α-1)/2+O(ε)+ξ^2α-1-α(τ+2)/τ+1+O(ε).
It remains to prove that
1-2α>0; (1-2α)(3α-1)/2<0; 2α-1-α(τ+2)/τ+1<0.
§.§.§ Estimation of S_2
Let α_0=116-13√(73)/144∈(0,1/3). If ξ^α_0 is a typical scale, put α=α_0; otherwise, put α=α_0(τ+1+10ε). We have
S_2= O(ξ^-ε).
For G∈Θ_a, the elements of supp(μ_G) are of the form
(a_n(ζ)+1,…,4,c_k,a_n_k+1,…,a_n_k+1,4,c_k+1,…),
the elements of supp(μ_a) take the form
(a_n(ζ)+1,…,4,c'_k,a_n_k+1,…,a_n_k+1,4,c'_k+1,…).
Set
Ω_1^∗ ={H:=(a_n(ζ)+1,…,4,c_k,a_n_k+1,…,a_n_k+1) : G·H is an admissible word},
Ω_2^∗ ={H':=(a_n(ζ)+1,…,4,c'_k,a_n_k+1,…,a_n_k+1) : G_a·H' is an admissible word}.
Then
∫ e(-ξp(G)x+p'(G)/q(G)x+q'(G)) (dμ_G(x)-dμ_a(x))
= ∑_H∈Ω_1^∗∫_I(H)e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_G(x)
-∑_H'∈Ω_2^∗∫_I(H') e(-ξp(G)x+p'(G)/q(G)x+q'(G)) dμ_a(x)
= S_21+S_22+S_23,
where
S_21:= ∑_H∈Ω_1^∗∫_I(H)e(-ξp(G)x+p'(G)/q(G)x+q'(G)) -e(-ξp(G)p(H)/q(H)+p'(G)/q(G)p(H)/q(H)+q'(G)) dμ_G(x);
S_22:= ∑_H'∈Ω_2^∗∫_I(H')e(-ξp(G)x+p'(G)/q(G)x+q'(G)) -e(-ξp(G)p(H')/q(H')+p'(G)/q(G)p(H')/q(H')+q'(G)) dμ_a(x);
S_23:= ∑_H∈Ω_1^∗μ_G(I(H))e(-ξp(G)p(H)/q(H)+p'(G)/q(G)p(H)/q(H)+q'(G))-∑_H'∈Ω_2^∗μ_a(I(H))e(-ξp(G)p(H')/q(H')+p'(G)/q(G)p(H')/q(H')+q'(G)).
(1) Estimate of S_21. We deduce
|S_21| ≤∑_H∈Ω_1^∗μ_G (I(H))sup_x,y∈ I(H)|e(-ξp(G)x+p'(G)/q(G)x+q'(G)) -e(-ξp(G)y+p'(G)/q(G)y+q'(G))|
≤∑_H∈Ω_1^∗μ_G (I(H))ξsup_x,y∈ I(H)|x-y/(q(G)x+q'(G))(q(G)y+q'(G))|
≤∑_H∈Ω_1^∗μ_G (I(H))ξ|I(H)|.
Since n_k+1 may be chosen to be sufficiently large compared to n_k such that
|I(H)|<ξ^-100, we have
|S_21|≤ξ^-99.
(2) Estimate of S_22. Similar arguments to those above show that
|S_22|≤ξ^-99.
(3) Estimate of S_23. We deduce
|S_23|= |∑_H\H'∈Ω_1^∗μ_G(I(H))e(ξp(G)p(H)/q(H)+p'(G)/q(G)p(H)/q(H)+q'(G))-∑_H'\H∈Ω_2^∗μ_a(H)e(ξp(G)p(H')/q(H')+p'(G)/q(G)p(H')/q(H')+q'(G))|
≤ ∑_H\H'∈Ω_1^∗μ_G(I(H))+∑_H'\H∈Ω_2^∗μ_a(I(H'))
≤ ∑_a_n(ζ)+1,…,a_n_k∑_1/2q(G_a,a_n(ζ)+1,…,a_n_k,4)^τ≤ c_k<1/2q(G,a_n(ζ)+1,…,a_n_k,4)^τμ_G(I(G,a_n(ζ)+1,…,a_n_k,4,c_k))
+∑_a_n(ζ)+1,…,a_n_k∑_1/4q(G_a,a_n(ζ)+1,…,a_n_k,4)^τ≤ c_k'<1/4q(G,a_n(ζ)+1,…,a_n_k,4)^τμ_a(I(G_a,a_n(ζ)+1,…,a_n_k,4,c_k'))
:= T_1+T_2.
We estimate T_1
≤ 2∑_a_n(ζ)+1,…,a_n_kμ(I(G, a_n(ζ)+1,…,a_n_k))/μ(I(G))×q(G, a_n(ζ)+1,…,a_n_k,4)^τ-q(G_a, a_n(ζ)+1,…,a_n_k,4)^τ/q(G, a_n(ζ)+1,…,a_n_k,4)^τ
≤ 2∑_a_n(ζ)+1,…,a_n_kμ(I(G, a_n(ζ)+1,…,a_n_k))/μ(I(G))×q(G, a_n(ζ)+1,…,a_n_k)-q(G_a, a_n(ζ)+1,…,a_n_k)/q(G, a_n(ζ)+1,…,a_n_k)
≤ 2∑_a_n(ζ)+1,…,a_n_kμ(I(G, a_n(ζ)+1,…,a_n_k))/μ(I(G))×(q(G)-q(G_a)/q(G)+|q'(G)-q'(G_a)|/q'(G))
≤ O(ξ^-ε),
where the penultimate inequality follows by
q(a_1,…,a_n,b_1,…,b_m)=q(b_1,…,b_m)q(a_1,…,a_n-1)+p(b_1,…,b_m)q(a_1,…,a_n).
In a similar way, we show that T_2≤ O(ξ^-ε).
Estimates (1), (2) and (3) together yields that
S_2≤∑_a∈ P∑_G∈Θ_aμ(I(G)) O(ξ^-ε)= O(ξ^-ε).
To sum up, we complete the proof of Theorem <ref> for Φ(q)=1/3q^τ.
§.§ Establishing Theorem <ref> for general function Φ
For 0<ε<min{1/1000τ,1/8}, we choose a rapidly increasing sequence {Q_k}_k=1^∞ of positive integers satisfying
log Q_k/log Q_k+1<ε/1000, Q_k^τ-ε≤ 3Φ(Q_k)≤ Q_k^τ+ε for all k≥ 1,
where
τ=lim_k→∞logΦ(Q_k)/log Q_k.
Let n_0=0. We define {n_k}_k=1^∞ as follows
n_k=max{N>n_k-1exp(Npσ_m(1+3ε))≤ Q_k}.
It follows for all k≥ 1 that,
exp(n_kpσ_m(1+3ε))≤ Q_k<exp((n_k+1)pσ_m(1+3ε)).
We use the same machinery as in subsection <ref>. Denote by E(Φ) the set of
x=[a_1,a_2,…,a_n_1,4,c_1,a_n_1+1,a_n_1+2,
…,a_n_2,4,c_2,…]
such that each a_j:=(a_jp,…,a_(j+1)p-1) belongs to ℰ (as defined in subsection <ref>), and each integer c_k satisfies
1/4q(a_1,…,a_n_k,4)^τ+20ε≤ c_k≤1/2q(a_1,…,a_n_k,4)^τ+20ε.
Now we check that
E(Φ)⊂𝒢(3Φ) \𝒦(3Φ).
For x∈ E(Φ), using (<ref>) and the monotonicity of Φ, we deduce that
4c_k ≥ q(a_1,…,a_n_k,4)^τ+20ε≥ q(a_1,…,a_n_k)^τ+20ε
≥exp((τ+20ε)n_kpσ_m(1-2ε))
≥exp((τ+ε)(n_k+1)pσ_m(1+3ε))
≥ Q_k^τ+ε≥ 3Φ(Q_k)≥ 3Φ(exp(n_kpσ_m(1+3ε)))
≥ 3Φ(q(a_1,…,a_n_k,4)).
Hence x∈𝒢(3Φ) \𝒦(3Φ).
The Fourier dimension of the Cantor set E(Φ) may be obtained by following similar steps and calculations to those for finding the Fourier dimension of the Cantor set E; we omit the details.
Acknowledgements. This work was supported by NSFC Nos. 12171172, 12201476.
100
ABS22 A. Algom, S. Baker and P. Shmerkin.
On normal numbers and self-similar measures.
Adv. Math. 399 (2022), Paper No. 108276, 17 pp.
ARS22A. Algom, F.R. Hertz and Z.R. Wang.
Pointwise normality and Fourier decay for self-conformal measures.
Adv. Math. 393 (2021), Paper No. 108096, 72 pp.
AR23D. Allen and F. Ramírez.
Independence inheritance and Diophantine approximation for systems of linear forms.
Int. Math. Res. Not. IMRN (2023), no. 2, 1760–1794.
ABH23 C. Aistleitner, B. Borda and M. Hauke.
On the metric theory of approximations by reduced fractions: a quantitative Koukoulopoulos-Maynard theorem.
Compos. Math. 159 (2023), no. 2, 207–231.
B86R.C. Baker.
Diophantine inequalities.
London Mathematical Society Monographs. New Series, 1. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1986.
BBH20 A. Bakhtawar, P. Bos and M. Hussain.
The sets of Dirichlet non-improvable numbers versus well-approximable numbers.
Ergodic Theory Dynam. Systems 40 (2020), no. 12, 3217–3235.
BV06V. Beresnevich and S. Velani.
A mass transference principle and the Duffin-Schaeffer conjecture for Hausdorff measures.
Ann. of Math. (2) 164 (2006), no. 3, 971–992.
B98C. Bluhm.
On a theorem of Kaufman: Cantor-type construction of linear fractal Salem sets.
Ark. Mat. 36 (1998), no. 2, 307–316.
B09 E. Borel.
Les probabilités denombrables et leurs applications arithmétiques.
Rend. Circ. Mat. Palermo 27 (1909), 247–271.
B12 Y. Bugeaud.
Distribution Modulo One and Diophantine Approximation.
Cambridge Tracts in Mathematics 193. Cambridge: Cambridge University Press, 2012.
CH24 T. Cai and K. Hambrook.
On the Exact Fourier Dimension of Sets of Well-Approximable Matrices.
(2024), arXiv: 2403.19410
C59 J.W.S. Cassels.
On a problem of Steinhaus about normal numbers.
Colloq. Math. 7 (1959), 95–101.
CS17 X.H. Chen and A. Seeger.
Convolution powers of Salem measures with applications.
Canad. J. Math. 69 (2017), no. 2, 284–320.
CZZ23 S. Chow, A. Zafeiropoulos and E. Zorin.
Inhomogeneous Kaufman Measures and Diophantine Approximation.
(2023), arXiv: 2312.15455
DGW02 Y. Dayan, G. Arijit and B. Weiss.
Random walks on tori and normal numbers in self similar sets.
(2020), arXiv: 2002.00455
DEL63 H. Davenport, P. Erdös and W.J. LeVeque.
On Weyl's criterion for uniform distribution.
Michigan Math. J. 10 (1963), 311–314.
DS70H. Davenport and W.M. Schmidt.
Dirichlet's theorem on diophantine approximation. Symposia Mathematica, Vol. IV (INDAM, Rome, 1968/69), pp. 113–132, Academic Press, London-New York, 1970.
D09A. Dubickas.
Powers of a rational number modulo 1 cannot lie in a small interval.
Acta Arith. 137 (2009), no. 3, 233–239.
DS41R.J. Duffin and A.C. Schaeffer.
Khintchine's problem in metric Diophantine approximation.
Duke Math. J. 8 (1941), 243–255.
E16 F. Ekström.
Fourier dimension of random images.
Ark. Mat. 54 (2016), no. 2, 455–471.
FLP95L. Flatto, J.C. Lagarias and A.D. Pollington.
On the range of fractional parts {ξ(p/q)^n}.
Acta Arith. 70 (1995), no. 2, 125–147.
FH23R. Fraser and K. Hambrook.
Explicit Salem sets in ℝ^n.
Adv. Math. 416 (2023), Paper No. 108901, 23 pp.
FWR. Fraser and R. Wheeler.
Fourier dimension estimates for sets of exact approximation order the well-approximable case.
Int. Math. Res. Not. IMRN (2023), no. 24, 20943–20969.
FW23 R. Fraser and R. Wheeler.
Fourier Dimension Estimates for Sets of Exact Approximation Order The Badly-Approximable Case.
(2023), arXiv: 2309.05851
H17K. Hambrook.
Explicit Salem sets in ℝ^2.
Adv. Math. 311 (2017), 634–648.
H19 K. Hambrook.
Explicit Salem sets and applications to metrical Diophantine approximation.
Trans. Amer. Math. Soc. 371 (2019), no. 6, 4353–4376.
HY23K. Hambrook and H. Yu.
Non-Salem sets in metric diophantine approximation.
Int. Math. Res. Not. IMRN (2023), no. 15, 13136–13152.
HW79 G.H. Hardy and E.M. Wright.
An introduction to the theory of numbers. Fifth edition.
The Clarendon Press, Oxford University Press, New York, 1979.
HVW24 M. Hauke, S. Vazquez Saez and A. Walker.
Proving the Duffin-Schaeffer conjecture without GCD graphs.
(2024), arXiv: 2404.15123
HS96T. Hinokuma and H. Shiga.
Hausdorff dimension of sets arising in Diophantine approximation.
Kodai Math. J. 19 (1996), no. 3, 365–377.
HS15 M. Hochman and P. Shmerkin.
Equidistribution from fractal measures.
Invent. Math. 202 (2015), no. 1, 427–479.
HW19L.L. Huang and J. Wu.
Uniformly non-improvable Dirichlet set via continued fractions.
Proc. Amer. Math. Soc. 147 (2019), no. 11, 4617–4624.
HKWW18M. Hussain, D. Kleinbock, N. Wadleigh and B.W. Wang.
Hausdorff measure of sets of Dirichlet non-improvable numbers.
Mathematika 64 (2018), no. 2, 502–518.
J28I. Jarník.
Zur metrischen Theorie der diopahantischen Approximationen.
Proc. Mat. Fyz. 36 (1928), 91–106.
J31 V. Jarník.
Über die simultanen diophantischen Approximationen.
Math. Z. 33 (1931), no. 1, 505–543.
K66 J.P. Kahane.
Images browniennes des ensembles parfaits.
C. R. Acad. Sci. Paris Sér. A-B 263 (1966), A613–A615.
K80R. Kaufman.
Continued fractions and Fourier transforms.
Mathematika 27 (1980), no. 2, 262–267.
K81R. Kaufman.
On the theorem of Jarník and Besicovitch.
Acta Arith. 39 (1981), no. 3, 265–267.
K24A. Khintchine.
Einige Sätze über Kettenbrüche, mit Anwendungen auf die Theorie der Diophantischen Approximationen.
Math. Ann. 92 (1924), no. 1-2, 115–125.
K63 A. Khintchine.
Continued fractions. Translated by Peter Wynn.
P. Noordhoff, Ltd., Groningen 1963.
KL19D.H. Kim and L.M. Liao.
Dirichlet uniformly well-approximated numbers.
Int. Math. Res. Not. IMRN 2019, no. 24, 7691–7732.
KK22T. Kim and W. Kim.
Hausdorff measure of sets of Dirichlet non-improvable affine forms.
Adv. Math. 403 (2022), Paper No. 108353, 39 pp.
KW18D. Kleinbock and N. Wadleigh.
A zero-one law for improvements to Dirichlet's Theorem.
Proc. Amer. Math. Soc. 146 (2018), no. 5, 1833–1844.
KW19D. Kleinbock and N. Wadleigh.
An inhomogeneous Dirichlet theorem via shrinking targets.
Compos. Math. 155 (2019), no. 7, 1402–1423.
KM20D. Koukoulopoulos and J. Maynard.
On the Duffin-Schaeffer conjecture.
Ann. of Math. (2) 192 (2020), no. 1, 251–307.
KMY24D. Koukoulopoulos, J. Maynard and D.D. Yang.
An almost sharp quantitative version of the Duffin-Schaeffer conjecture.
(2024), arXiv: 2404.14628
KN74 L. Kuipers and H. Niederreiter.
Uniform Distribution of Sequences.
New York-London-Sydney: Wiley-Interscience, John Wiley & Sons, 1974.
LP09 L. Łaba and M. Pramanik.
Arithmetic progressions in sets of fractional dimension.
Geom. Funct. Anal. 19 (2009), no. 2, 429–456.
LWX23 B.X. Li, B.W. Wang and J. Xu.
Hausdorff dimension of Dirichlet non-improvable set versus well-approximable set.
Ergodic Theory Dynam. Systems 43 (2023), no. 8, 2707–2731.
M95 P. Mattila.
Geometry of sets and measures in Euclidean spaces.
Fractals and rectifiability (Cambridge Studies in Advanced Mathematics, 44). Cambridge University Press, Cambridge, 1995.
M15P. Mattila.
Fourier analysis and Hausdorff dimension.
Cambridge Studies in Advanced Mathematics, 150. Cambridge University Press, Cambridge, 2015.
P67 W. Philipp.
Some metrical theorems in number theory.
Pacific J. Math. 20 (1967), 109–127.
PVZZ22 A.D. Pollington, S. Velani, A. Zafeiropoulos and E. Zorin.
Inhomogeneous Diophantine approximation on M_0-sets with restricted denominators.
Int. Math. Res. Not. IMRN (2022), no. 11, 8571–8643.
P22A. Pyörälä.
The scenery flow of self-similar measures with weak separation condition.
Ergodic Theory Dynam. Systems 42 (2022), no. 10, 3167–3190.
QR03M. Queffélec and O. Ramaré.
Fourier analysis of continued fractions with bounded special quotients.
Enseign. Math. (2) 49 (2003), no. 3-4, 335–356.
S51R. Salem.
On singular monotonic functions whose spectrum has a given Hausdorff dimension.
Ark. Mat. 1 (1951), 353–365.
S60 W.M. Schmidt.
On normal numbers.
Pacific J. Math. 10 (1960), 661–672.
S80 W.M. Schmidt,
Diophantine approximation.
Lecture Notes in Mathematics, 785. Springer, Berlin, 1980.
SS18 P. Shmerkin and V. Suomala.
Spatially independent martingales, intersections, and applications.
Mem. Amer. Math. Soc. 251 (2018), no. 1195, v+102 pp.
S58 P. Szüsz.
Über die metrische Theorie der Diophantischen Approximation.
Acta Math. Acad. Sci. Hungar. 9 (1958), 177–193.
V85 J.D. Vaaler.
Some extremal functions in Fourier analysis.
Bull. Amer. Math. Soc. (N.S.) 12 (1985), no. 2, 183–216.
W12 M. Waldschmidt.
Recent advances in Diophantine approximation. Number theory, analysis and
geometry, 659–704, Springer, New York, 2012.
Wall50 D.D. Wall.
Normal numbers.
Thesis (Ph.D.)-University of California, Berkeley. 1950.
Weyl16 H. Weyl.
Über die Gleichverteilung von Zahlen mod. Eins.
Math. Ann. 77 (1916), no. 3, 313–352.
Y19H. Yu.
A Fourier-analytic approach to inhomogeneous Diophantine approximation.
Acta Arith. 190 (2019), no. 3, 263–292.
Y21 H. Yu.
On the metric theory of inhomogeneous Diophantine approximation: an Erdös-Vaaler type result.
J. Number Theory 224 (2021), 243–273.
|
http://arxiv.org/abs/2409.02760v1 | 20240904143620 | An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting | [
"Zhuolin Li",
"Zhen Zhang",
"Witold Pedrycz"
] | cs.AI | [
"cs.AI"
] |
|
http://arxiv.org/abs/2409.03724v1 | 20240905172923 | Probing the chirality of a single microsphere trapped by a focused vortex beam through their orbital period | [
"Kainã Diniz",
"Tanja Schoger",
"Arthur L. Fonseca",
"Rafael S. Dutra",
"Diney S. Ether Jr",
"Gert-Ludwig Ingold",
"Felipe A. Pinheiro",
"Nathan B. Viana",
"Paulo A. Maia Neto"
] | physics.optics | [
"physics.optics"
] |
Probing the chirality of a single microsphere trapped by a focused vortex beam through their orbital period
Kainã Diniz,^a,b^† Tanja Schoger,^c † Arthur L. Fonseca,^a,b,
Rafael S. Dutra,^d
Diney S. Ether Jr,^a,b Gert-Ludwig Ingold,^c
Felipe A. Pinheiro,^a
Nathan B. Viana,^a,b
and Paulo A. Maia Neto^∗ a,b
When microspheres are illuminated by tightly focused vortex beams, they can be trapped in a non-equilibrium steady state where they orbit around the optical axis.
By using the Mie-Debye theory for optical tweezers, we demonstrate that the orbital period strongly depends on the particle's chirality index.
Taking advantage of such sensitivity, we put forth a method to experimentally characterize with high precision the chiroptical response of individual optically trapped particles. The
method allows for an enhanced precision at least one order of magnitude larger than that of similar existing enantioselective approaches. It is particularly suited to probe the chiroptical response of individual particles, for which light-chiral matter
interactions are typically weak.
\end@twocolumnfalse
]
§
^† These authors contributed equally to this work.
^a Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, Rio de Janeiro, 21941-972, Brazil; Email: [email protected]
^b CENABIO - Centro Nacional de Biologia Estrutural e Bioimagem, Universidade Federal do Rio de Janeiro,
Rio de Janeiro, Rio de Janeiro, 21941-902, Brazil
^c Institut für Physik, Universität Augsburg, 86135 Augsburg, Germany
^d LISComp-IFRJ, Instituto Federal de Educação, Ciência e Tecnologia, Rua Sebastão de Lacerda, Paracambi, Rio de Janeiro, 26600-000, Brasil
§ INTRODUCTION
Chiral discrimination plays a crucial role in many areas of science such as Chemistry, Molecular Biology and Pharmaceutics (see e.g. Ref. for review).
Over the years, various methods to separate molecules and particles based on their chiral properties were developed.
There exist chemical processes to separate enantiomers from each other (see e.g. Refs. for reviews), which, however, have the disadvantage that they are usually developed for specific chiral particles and tend to be invasive. In addition, they usually probe only the average chiral response of an ensemble of chiral particles or molecules, rather than that of individual particles, for which such response is typically small. <cit.> To circumvent this limitation, plasmonic nanostructures have been used in enantioselective schemes due to their ability to enhance chiroptical properties based on localized surface plasmon resonance. <cit.>
Recently, all-optical chiral discrimination methods have received significant attention due to their potential as noninvasive alternatives<cit.>, and because they are particularly suited to characterize the chiral response of single, isolated chiral nanoparticles <cit.>.
These methods are possible because chiral particles respond differently to left- and right-circularly polarized light.<cit.> This has been exploited, for instance, in the context of optical tweezers, <cit.> with several methods being introduced in recent years to trap and characterize single chiral particles using tightly focused beams. <cit.>
The proposal in Refs. is based on an optical torque which the particle experiences when displaced from its equilibrium position on the optical axis by an external force.
Due to focusing, the spin angular momentum associated with polarization can be exchanged with the trapped particle as orbital angular momentum<cit.>, generating a torque that is sensitive to the chirality of the particle.
In addition to spin, light can also carry intrinsic orbital angular momentum, which is associated with the field's phase distribution in space.<cit.> Paraxial beams that carry this type of angular momentum are called vortex beams.
An important class of vortex beams are the Laguerre-Gaussian modes, usually denoted by LG_pℓ, where p is a positive integer which determines the number of radial nodes, and ℓ is an integer called topological charge.
In addition to spin angular momentum associated with polarization, such modes carry an orbital angular momentum of ℓħ per photon related to their helix-shaped wavefront, with the sign of ℓ determining the direction of the twist of the helices.
Upon interaction with such paraxial fields, a chiral dipole cannot discriminate between different topological charges. <cit.>
An experiment with tightly focused vortex beams also showed no response of chiral molecules to beams with different topological charges. <cit.>
However, more recent studies revealed that chiral materials indeed respond in a discriminatory way to the handedness and magnitude of light's orbital angular momentum because of quadrupole contributions.<cit.>
If the field becomes strongly focused, the spin and orbital degrees of freedom become coupled <cit.>, and a chiral particle’s response will be different for different topological charges.<cit.>
In the context of optical trapping, focusing of vortex beams with ℓ≠ 0 leads to a ring-shaped focal spot.
If a particle is small compared to the diameter of the ring, it can be trapped in a non-equilibrium steady state where it orbits around the optical axis.<cit.>
For brevity, we refer to this type of state as the ring-trapping regime in the following.
Li et al. <cit.> found that, for a particle confined to the focal plane, both the radius of the orbit and the optical torque that drives the particle depend on its chirality. Also, it has recently been shown that optical tweezers with vortex beams with ℓ≠ 0 exert an enhanced torque upon trapped objects, and that this effect can be used to characterize material properties of microspheres. <cit.>
Here we propose to use the period of particles in the ring regime as a probe for their chirality. Beyond the usual discussion about enantioselectivity, we present a proposal to quantify a microsphere's chirality while estimating the resolution that could be achieved.
Additionally, by calculating the radius of the orbit and its location along the axis from the conditions of
vanishing axial and radial force components, we provide a more realistic model when compared to the ones which consider the azimuthal force only in the focal plane.
We also demonstrate that, in our scenario, analyzing the period yields a higher chiral resolution than doing so with just the orbital radius. This result is particularly suited for enantioselection of individual particles, where chiroptical response is typically small, and for this reason our proposal singles out with respect to other existing enantioselective methods for single chiral particles.
§ MIE-DEBYE THEORY FOR CHIRAL NANOSPHERES TRAPPED BY A VORTEX BEAM
To describe the response of a chiral particle to an electromagnetic field, we use the following set of constitutive equations<cit.>
[ 𝐃; 𝐁 ] =
[ ϵ_0 ϵ, i κ/c; - iκ/c, μ_0 μ ][ 𝐄; 𝐇 ] ,
where ϵ and μ are the relative permittivity and permeability,
c = 1/√(ϵ_0 μ_0) is the vacuum speed of light, and κ is a pseudo-scalar known as the chirality parameter.
Although these equations assume a homogeneous and isotropic response, particles whose chirality arises from their geometry can also be considered in terms of an effective chirality parameter.<cit.>
Notice that κ accounts for an electro-to-magnetic and magneto-to-electric coupling.
To describe the trapping of a chiral spherical particle of radius R by a tightly focused vortex beam, we have developed a version of the Mie-Debye theory for optical tweezers with Laguerre-Gaussian modes <cit.> that includes chiral scatterers <cit.>.
The field before focusing is assumed to be a circularly polarized (σ = ± 1) Laguerre-Gaussian beam LG_0ℓ with one intensity node and topological charge ℓ.
The angular spectrum representation of the electric field resulting from the focusing of such a beam by an objective is given by <cit.>
𝐄^(σ, ℓ)(𝐫) = -ikf E_0 e^-ikf/2π(√(2)f/w_0)^|ℓ|∫_0^2π dφ e^iℓφ
×∫_0^θ_0 dθ sinθ√(cosθ)sin^|ℓ|(θ)
e^- (fsin(θ)/w_0)^2
×
e^i𝐤·𝐫ϵ̂_σ (θ, φ) .
The integral covers the direction of all wave vectors 𝐤 = 𝐤(k, θ, φ) within the medium of refractive index n_w surrounding the sphere, up to a maximal angle defined by sin(θ_0) = NA/n_w, where NA is the numerical aperture of the objective that performs the focusing.
The wave number k= 2π n_w/λ_0 is defined in terms of the vacuum wavelength λ_0 of the beam. E_0 denotes the field amplitude, while f defines the focal length and w_0 the beam waist at the entrance of the objective.
The polarization unit vector is given by ϵ̂_σ(θ, φ) = e^iσφ(θ̂
+ iσφ̂)/√(2) where θ̂ and φ̂ refer to the unit vectors in spherical coordinates.
We obtain the scattered field by applying Mie theory.
The optical force 𝐅 exerted by the total field can be calculated by integrating the time-averaged Maxwell's stress tensor over a closed surface around the spherical scatterer.
In the context of the Mie-Debye theory, rather than working directly with the force, it is convenient to define the dimensionless quantity 𝐐 called efficiency factor<cit.>
𝐐 = 𝐅/(n_w/c)P ,
where P is the power on the sample.
The efficiency factor quantifies the force exerted by the field upon the particle per unit power.
Due to the axial symmetry of the vortex beam, it is convenient to express the optical force in cylindrical coordinates 𝐐 = Q_ρρ̂ + Q_ϕϕ̂ + Q_z ẑ.
The component Q_z defines the axial force along the propagation direction of the beam, while Q_ρ and Q_ϕ are the transverse force components in the radial and azimuthal direction, respectively.
Furthermore, we also define the position (ρ, z, ϕ) of the sphere with respect to the focus in cylindrical coordinates.
The explicit force expressions for a trapped dielectric sphere can be found in Refs. .
For a chiral sphere, the electric and magnetic Mie scattering coefficients a_j and b_j of multipole order j have to be replaced by
a_j → a_j + iσ d_j ,
b_j → b_j - iσ c_j .
The scattering coefficients for a size parameter x=kR are given by
a_j(x) = Δ^-1_j(x) [V_j^(x) A_j^(x) + V_j^(x) A_j^(x)] ,
b_j(x) = Δ^-1_j(x) [W_j^(x) B_j^(x) + W_j^(x) B_j^(x) ] ,
c_j(x) = - d_j(x) = iΔ^-1_j(x) [W_j^(x) A_j^(x) - W_j^(x) A_j^(x)] ,
where we used the following auxiliary functions
Δ_j(x) = W_j^ (x) V_j^ (x) + W_j^ (x) V_j^ (x) ,
W_j^L/R(x) = Mψ_j(N_L/R x) ξ'_j(x) - ξ_j(x)ψ'_j(N_L/R x) ,
V_j^L/R(x) = ψ_j(N_L/R x) ξ'_j(x) - M ξ_j(x)ψ'_j(N_L/R x) ,
A_j^L/R(x) = Mψ_j(N_L/R x) ψ'_j(x) - ψ_j(x)ψ'_j(N_L/R x) ,
B_j^L/R(x) = ψ_j(N_L/R x) ψ'_j(x) - Mψ_j(x)ψ'_j(N_L/R x)
with the relative impedance M= n_w√(μ/ϵ) and the relative refractive indices N_L/R = (n ±κ)/n_w, n = √(ϵμ), for left (L) and right (R) polarized waves.
Note that we adapted the notation from Ref. , where similar Mie coefficients were obtained, but for different constitutive equations than the ones presented in Eq. (<ref>).
The Mie coefficients for the scattered field are expressed in terms of the Riccati-Bessel functions ψ_j(z) = z j_j(z) and ξ_j(z) = z h_j^(1)(z), where j_j(z) and h_j^(1)(z) are the spherical Bessel and Hankel functions of the first kind, respectively.
Due to the reciprocity of chiral materials, the polarization-mixing coefficients fulfill c_j = -d_j.
If the chirality parameter vanishes, the coefficients reduce to the usual Mie coefficients a_j = A_j/W_j, b_j = B_j/V_j and c_j =0 = d_j.
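Numerically, the Riccati-Bessel functions and the auxiliary quantities above can be evaluated with standard spherical Bessel routines. The sketch below is our own illustration (not the authors' code): it assumes real, non-absorbing indices, uses the material parameters quoted later in the text, and only assembles the achiral-limit coefficients a_j = A_j/W_j and b_j = B_j/V_j; for κ ≠ 0 the same building blocks are evaluated at N_L and N_R and combined as in the equations above.

# Sketch (ours, illustrative): Riccati-Bessel functions and the auxiliary
# functions W_j, V_j, A_j, B_j for a real relative index N and impedance M,
# plus the achiral-limit Mie coefficients a_j = A_j/W_j, b_j = B_j/V_j.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def psi(j, z):                 # psi_j(z) = z j_j(z)
    return z * spherical_jn(j, z)

def dpsi(j, z):                # psi_j'(z) = j_j(z) + z j_j'(z)
    return spherical_jn(j, z) + z * spherical_jn(j, z, derivative=True)

def xi(j, z):                  # xi_j(z) = z h_j^(1)(z), with h^(1) = j + i y
    return z * (spherical_jn(j, z) + 1j * spherical_yn(j, z))

def dxi(j, z):
    return (spherical_jn(j, z) + 1j * spherical_yn(j, z)
            + z * (spherical_jn(j, z, derivative=True)
                   + 1j * spherical_yn(j, z, derivative=True)))

def auxiliary(j, x, N, M):
    # W_j, V_j, A_j, B_j as defined above, evaluated at relative index N
    W = M * psi(j, N * x) * dxi(j, x) - xi(j, x) * dpsi(j, N * x)
    V = psi(j, N * x) * dxi(j, x) - M * xi(j, x) * dpsi(j, N * x)
    A = M * psi(j, N * x) * dpsi(j, x) - psi(j, x) * dpsi(j, N * x)
    B = psi(j, N * x) * dpsi(j, x) - M * psi(j, x) * dpsi(j, N * x)
    return W, V, A, B

# Achiral limit (kappa = 0), parameter values as quoted in the text
n_w, n, R, lambda0 = 1.32, 1.57, 0.3e-6, 1064e-9    # polystyrene bead in water
x = 2 * np.pi * n_w / lambda0 * R                   # size parameter kR
N, M = n / n_w, n_w / n                             # non-magnetic sphere: M = n_w*sqrt(mu/eps) = n_w/n
for j in range(1, 4):
    W, V, A, B = auxiliary(j, x, N, M)
    print(j, A / W, B / V)                          # usual Mie coefficients a_j, b_j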
§ RESULTS AND DISCUSSION
Using the Mie-Debye theory for chiral particles outlined above, we examine the period of stably trapped objects.
A particle in a steady-state orbit around the optical axis is in equilibrium in the axial and radial directions, which means that the optical force components in those two directions must vanish, as illustrated in Fig. <ref>.
To find the coordinates of the circular orbit ρ_ and z_, we use the Mie-Debye theory to simultaneously solve the equations
Q_z(ρ_, z_) = 0 ,
Q_ρ(ρ_, z_) = 0 .
We also require that the derivatives ∂_ρ Q_ρ and ∂_z Q_z at
(ρ_,z_) are negative to ensure that the orbit is stable.
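Schematically (our sketch, not the authors' code), the orbit coordinates can be found with a standard root finder once numerical routines for the efficiency-factor components are available; q_rho and q_z below are placeholders for such routines, e.g. from a Mie-Debye implementation.

# Generic sketch: solve Q_rho = Q_z = 0 for the orbit and check its stability.
# q_rho(rho, z) and q_z(rho, z) are placeholder callables supplied by the user.
from scipy.optimize import fsolve

def orbit_coordinates(q_rho, q_z, rho_guess, z_guess, step=1e-9):
    rho_orb, z_orb = fsolve(lambda u: [q_rho(u[0], u[1]), q_z(u[0], u[1])],
                            [rho_guess, z_guess])
    # stability: negative diagonal derivatives (restoring response in rho and z)
    d_rho = (q_rho(rho_orb + step, z_orb) - q_rho(rho_orb - step, z_orb)) / (2 * step)
    d_z = (q_z(rho_orb, z_orb + step) - q_z(rho_orb, z_orb - step)) / (2 * step)
    return rho_orb, z_orb, (d_rho < 0) and (d_z < 0)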
As the particle is typically immersed in some fluid, it experiences a drag force proportional to its speed. <cit.>
The particle will perform a uniform circular motion whose speed v_ϕ is such that the drag force and the azimuthal component of the optical force cancel each other. Thus, using the definition (<ref>), we find the following relation for the orbiting speed
v_ϕ = n_wP/c γ Q_ϕ(ρ_, z_) ,
where γ is the Stokes drag coefficient, i.e., the proportionality constant between the particle speed and the drag force.
Together with the relation v_ϕ = ρ_ω between the velocity and angular velocity ω, the period T=2π/ω can be expressed as
T = 2πρ_γ/(n_w P/c) Q_ϕ(ρ_, z_) .
We characterize the liquid by a viscosity η and account for the influence of the walls of the sample by applying the Faxén correction to the drag coefficient of a spherical particle
<cit.>
γ = 6πη R/1 - 9/16R/h
+ 1/8(R/h)^3
- 45/256(R/h)^4
- 1/16(R/h)^5 ,
where h is the distance of the sphere's center from the interface.
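A minimal numerical sketch of the two expressions above (ours, purely illustrative): the orbital radius and azimuthal efficiency factor are placeholders to be supplied by the force calculation, while the default parameter values follow those quoted later in the text.

# Sketch: Faxen-corrected drag and orbital period T = 2 pi rho gamma / ((n_w P / c) Q_phi).
# rho_orb and Q_phi are placeholders; defaults follow the values quoted in the text.
import numpy as np

def faxen_drag(R, h, eta):
    s = R / h
    correction = 1 - 9/16*s + 1/8*s**3 - 45/256*s**4 - 1/16*s**5
    return 6 * np.pi * eta * R / correction

def orbital_period(rho_orb, Q_phi, R, h=2e-6, eta=1.0016e-3,
                   n_w=1.32, P=10e-3, c=299792458.0):
    gamma = faxen_drag(R, h, eta)
    return 2 * np.pi * rho_orb * gamma / ((n_w * P / c) * Q_phi)

# hypothetical example values of rho_orb and Q_phi
print(orbital_period(rho_orb=0.5e-6, Q_phi=0.01, R=0.3e-6))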
We analyze the period, as given by Eq. (<ref>), and its dependence on the chirality index of a sphere for vortex beams with various topological charges.
The beam is assumed to be left-circularly (σ =1) polarized.
For all numerical results discussed below, we assume an objective with numerical aperture NA = 1.2 and back aperture radius R_ obj = 2.8 mm, values that are typical for commercially available objectives.
To make fair comparisons between different modes, one must ensure that they have similar filling conditions at the objective entrance port.
Thus, for each Laguerre-Gaussian mode, except when ℓ=0, we compute the beam waist such that the ratio between the radius of the ring of maximal intensity and the objective equals 0.8, as it is described in detail in Ref. .
This implies that the beam waist is given by w_0(ℓ) = 0.8 R_ obj√(2/|ℓ|) for ℓ≠ 0.
We note that this type of dynamic waist control can be performed with light modulation devices<cit.>, the same that can generate vortex beams.
For the Gaussian mode, we set w_0(ℓ=0)=2.2 mm.
The microsphere center of mass is always taken to be h = 2 μm above the coverslip.
We consider a non-magnetic scatterer with a refractive index n = √(ϵ) = 1.57 for a vacuum wavelength λ_0 = 1064 nm, so as to emulate a polystyrene microsphere. <cit.>
The refractive index of water is n_w=1.32. <cit.>
For the calculation of the Stokes drag force, we use the viscosity of water at 20 ^∘C, which is η = 1.0016 mPa·s.
The power at the sample is set to P=10 mW.
As we will see later, the choice of P in our method is a matter of experimental convenience, and the theoretical resolution for κ measurements is not directly affected by it.
Fig. <ref>(a) depicts the period of chiral and non-chiral spheres as a function of their radius for Laguerre-Gaussian beams of topological charges ℓ = ±4 and ℓ = ±5. These values were chosen so that the period could be shown for a variety of sphere radii. If the topological charge is too small, larger particles will be trapped on the beam axis. <cit.>
For all cases, the period for spheres with chirality index κ = 0.01 (dashed lines) and κ = -0.01 (dotted lines) is shown, as well as the period for a non-chiral sphere (solid lines).
The shaded area between the dotted and dashed curves accounts for the period of spheres with a chirality index in the interval -0.01< κ < 0.01.
Notice that the curves for topological charges with the same absolute value but different signs are not the same.
This happens because we are considering a left-circularly polarized beam before focusing (σ = 1), thus breaking the symmetry between the ±ℓ cases even for achiral spheres.
Indeed, when the topological charge is positive, the orbital angular momentum has the same sign as the spin angular momentum, while for negative topological charges, the sign is opposite.
Independently of the topological charge, all the curves exhibit the same general behavior.
For radii R smaller than about 0.35 μm, the period monotonically increases as the radius decreases, while for larger radii, it exhibits oscillations.
This can be understood in terms of a decomposition of the optical force into a conservative and a non-conservative component.
The conservative component, usually called the gradient force, pulls the particle towards the region of maximum intensity.
On the other hand, the non-conservative component, usually called the scattering force, arises from radiation pressure and from the field's non-uniform helicity <cit.>.
Since the beam before focusing is circularly polarized, the azimuthal component Q_ϕ
does not depend on ϕ, by azimuthal symmetry.
Thus, the line integral of the optical force along a closed circle around the optical axis is proportional to Q_ϕ,
showing that this component is non-conservative.
When the particle radius is small compared to the wavelength of the light (R≪λ),
its scattering can be well described in the Rayleigh limit. In this limit, the conservative component is proportional to the gradient of the electric energy density and dominates the non-conservative one, which explains the strong suppression of Q_ϕ and the resulting increase in the period as the radius decreases.
On the other hand, the non-conservative contribution builds up as the radius increases and becomes comparable to the wavelength (R≈λ_0/n_ w) in the Mie scattering regime, giving rise to an azimuthal force component that drives the particle on its orbit.
The oscillations shown in Fig. <ref>(a) are a consequence of interference effects inside the sphere.<cit.>
In spite of the overall similar behavior discussed above, Fig. <ref>(a) shows a clear split between the curves corresponding to κ=-0.01 (dotted) and κ=0.01 (dashed).
The rotation period decreases monotonically with the chirality index as is exemplified in Figure <ref>(b) for a sphere of radius R = 0.3.
An approximately linear relationship exists between the chirality parameter and the period for a fixed topological charge.
Variations in the chirality index δκ_ℓ are thus directly proportional to variations of the period δ T_ℓ, i.e.
δκ_ℓ = |b_ℓ| δ T_ℓ ,
where b_ℓ is the slope of the linear fit of the κ(T)-curves as illustrated in Fig. <ref>(b).
Notice that we have added an index ℓ to the error in period measurements δ T_ℓ. Since the period monotonically increases with |ℓ| <cit.>, a fixed uncertainty would mean that the precision at higher topological charges is greater than that at smaller ones. Then, any gain in the resolution δκ_ℓ could be considered as an artifact of assuming a progressively smaller relative uncertainty. In order to allow for a fair comparison between different modes, we assume that the period is measured with the same relative uncertainty ξ for all modes and define
δ T_ℓ = ξT_ℓ ,
where T_ℓ is the average period for the mode ℓ in the considered κ interval from -0.01 to 0.01.
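As an illustration of how the two relations above translate into a resolution estimate (our sketch; the arrays are placeholders for a computed or measured period-versus-chirality curve of a given mode ℓ):

# Sketch: chirality resolution from the approximately linear kappa(T) relation.
# T_values, kappa_values: placeholder arrays sampling the curve for one mode l.
import numpy as np

def chirality_resolution(T_values, kappa_values, xi=1e-3):
    b_l, _ = np.polyfit(T_values, kappa_values, 1)   # slope of the linear fit kappa(T)
    delta_T = xi * np.mean(T_values)                 # delta T_l = xi * (average period)
    return abs(b_l) * delta_T                        # delta kappa_l = |b_l| * delta T_l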
Using the definition (<ref>), we have investigated quantitatively the κ-resolution that could be achieved through period measurements.
Figure <ref> displays δκ_ℓ as a function of the topological charge ℓ for beads of radii 0.15, 0.25 and 0.35 scaled by the relative error of the period ξ.
In each case, we plot all the values of ℓ for which a well-defined on-ring position exists.
For values of ℓ below those displayed in the set of points corresponding to each radius, the respective particle would be trapped on the optical axis.
On the other hand, for values of ℓ greater than those presented, no point in space satisfies Eqs. (<ref>) and (<ref>) simultaneously for the given radius values, and no optical trapping is possible.
Notice that this upper limit for the available ℓ values can only exist in the Mie scattering regime, where the scattering component of the optical force plays an important role.
Indeed, in the Rayleigh regime the gradient force will necessarily pull the particle towards the ring of maximum intensity.
This is in accordance with the fact that the number of available topological charges decreases as the particle becomes larger.
For R = 0.15, the precision in κ measurements monotonically increases with |ℓ|.
The lowest δκ_ℓ value achieved was δκ_-14≈ 0.24 ξ.
On the other hand, the results for R = 0.25 and R = 0.35 show that δκ_ℓ finds its minima for much lower values of the topological charge.
This can be advantageous, since the period for these modes is much smaller at the same power, allowing for good statistics with less acquisition time.
The lowest values of δκ_ℓ obtained for R = 0.25 and R = 0.35 were δκ_2≈ 0.16 ξ and δκ_1≈ 0.19 ξ, respectively.
Hence, for a relative uncertainty ξ=10^-3 of the period measurement<cit.>, we would find a precision of the order of 10^-4 for the chirality measurement.
Due to the linear relation (<ref>), improving the precision of period measurements would lead to a proportional enhancement in the chiral resolution of our method.
It is worth noting that a change in radius appears to displace the points globally, i. e., it either enhances or diminishes precision across all ℓ values for the examples shown in Fig. <ref>.
Also, there is no monotonic relationship between δκ_ℓ and R.
The overall κ-resolution improves from R = 0.15 to R = 0.25, but worsens from the latter to R = 0.35.
This suggests the existence of an optimal radius for which the period is most sensitive to κ.
In order to find such a value, we have performed a calculation of δκ_ℓ/ξ as a function of the particle radius for fixed topological charges ℓ = ± 4 and ℓ=±5, and the results are depicted in Fig. <ref>.
The curves exhibit global minima near R ≈ 0.28 , which
seems to be the radius allowing for the most sensitive chirality measurement.
In the region R < 0.28, the resolution progressively worsens as the microsphere becomes smaller.
On the other hand, in the region R > 0.28, δκ_ℓ exhibits an oscillatory behavior, meaning that the resolution of the measurement is of the same order of magnitude for particles within that region.
We would like to highlight an interesting aspect of Eq. (<ref>).
Since the
period scales with power as T∼ 1/P,
the slope b_ℓ = (∂κ/∂ T)_κ = 0 goes as b_ℓ∼ P,
and then δκ_ℓ/ξ, as defined by
Eqs. (<ref>) and (<ref>),
is power-independent, and so are the arguments developed throughout this section.
Hence, in a real implementation, the power can be chosen such that the precision of the period measurement is maximized.
For example, when using large values of |ℓ|, one may freely increase the power in order to reduce the period and thus reduce the data acquisition time necessary to perform a good statistical analysis.
In addition, increasing the power also reduces the effect of the particle's Brownian fluctuations, allowing for more precise determinations of periods, and thus providing a greater chiral resolution.
In Ref. , the authors show that, for particles confined to the focal plane, the average orbital radius depends on κ.
Inspired by their work, we have also investigated the possibility of characterizing a particle's chirality through the orbital radius, rather than using the period.
In Fig. <ref>(a), the orbit coordinates ρ_ and z_ as well as the azimuthal force efficiency in the ring regime Q_ϕ = Q_ϕ(ρ_, z_) are shown as functions of κ, normalized by their value at κ = 0. In the represented interval, all quantities exhibit linear behavior, but it can be observed that the azimuthal force varies more rapidly than the orbital radius. This fact is not just a particularity of the chosen radius, as can be seen in Fig. <ref>(b), where we plot the same relative quantities as functions of the sphere radius for different chirality indices. From Eq. <ref>, we see that the period is proportional to ρ_ and inversely proportional to Q_ϕ, and therefore the dependence of the period on κ is mainly due to Q_ϕ. Thus, a measure of κ based solely on the orbital radius, even if done with the same precision as period measurements, would necessarily have lower chiral resolution than a measurement made through the period. The stronger variation of the azimuthal force with the chirality index also explains why the period shown in Fig. <ref> decreases with increasing κ.
Moreover, it should be noted that highly precise measurements are easier to perform for the period than for the radius of the orbit.
By measuring the back<cit.> or forward-scattered <cit.> light and Fourier transforming the signal to produce a power spectrum, one can extract the orbital frequency.
In contrast with the radius, the period depends on externally tunable parameters like the beam power, the Stokes drag coefficient, and the topological charge, thus allowing for optimizing the measurement.
§ CONCLUSIONS
In conclusion, we have introduced a method to measure the chirality index of micro-sized particles with a precision up to 10^-4.
The method is based on measuring the period of a particle trapped by a focused vortex beam.
The resolution does not depend on the power of the beam.
Compared to similar existing proposals <cit.> our method offers a gain of at least one order of magnitude in precision. This result is of particular interest for probing and characterizing the chiroptical response of single, individual particles that typically exhibit weak light-chiral matter interactions. The precision of this method can be further improved if period measurements with a relative error below 10^-3 can be achieved. Our findings may have applications in enantioselection of particles with very small chiral indexes, such as particles made of naturally occurring materials.
§ CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
§ ACKNOWLEDGEMENTS
We are grateful to Cyriaque Genet, Guilherme Moura, Leonardo Menezes, and Paula Monteiro for fruitful discussions.
P. A. M. N., N. B. V., and F. A. P. acknowledge funding from the Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq–Brazil), Coordenação de Aperfeiçamento de Pessoal de Nível Superior (CAPES–Brazil), Instituto Nacional de Ciência e Tecnologia de Fluidos Complexos (INCT-FCx), and the Research Foundations of the States of Rio de Janeiro (FAPERJ) and São Paulo (FAPESP).
|
http://arxiv.org/abs/2409.03142v1 | 5 September 2024 | Causal Temporal Representation Learning with Nonstationary Sparse Transition | Xiangchen Song, Zijian Li, Guangyi Chen, Yujia Zheng, Yewen Fan, Xinshuai Dong, Kun Zhang | cs.LG, stat.ML
Causal Temporal Representation Learning with Nonstationary Sparse Transition
Xiangchen Song, Zijian Li, Guangyi Chen, Yujia Zheng, Yewen Fan, Xinshuai Dong, Kun Zhang
September 5, 2024
============================================================================
§ ABSTRACT
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences. Despite the success of existing Ctrl methods, they require either directly observing the domain variables or assuming a Markov prior on them. Such requirements limit the application of these methods in real-world scenarios where we do not have such prior knowledge of the domain variables. To address this problem, this work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective. In particular, we explore under what conditions on the significance of the variability of the transitions we can build a model to identify the distribution shifts. Based on the theoretical result, we introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition, designed to leverage the constraints on transition sparsity and conditional independence to reliably identify both distribution shifts and latent factors. Our experimental evaluations on synthetic and real-world datasets demonstrate significant improvements over existing baselines, highlighting the effectiveness of our approach.
§ INTRODUCTION
Causal learning from sequential data remains a fundamental yet challenging task <cit.>.
Discovering temporal causal relations among observed variables has been extensively studied in the literature <cit.>.
However, in many real-world scenarios such as video understanding <cit.>, observed data are generated by causally related latent temporal processes or confounders rather than direct causal edges.
This leads to the task of causal temporal representation learning (Ctrl), which aims to build compact representations that concisely capture the data generation processes by inverting the mixing function that transforms latent factors into observations and identifying the transitions that govern the underlying latent causal dynamics. This learning problem is known to be challenging without specific assumptions <cit.>.
The task becomes significantly more complex with nonstationary transitions, which are often characterized by multiple distribution shifts across different domains, particularly when these domains or shifts are also unobserved.
Recent advances in unsupervised representation learning, particularly through nonlinear Independent Component Analysis (ICA), have shown promising results in identifying latent variables by incorporating side information such as class labels and domain indices <cit.>. For time-series data, historical information is widely utilized to enhance the identifiability of latent temporal causal processes <cit.>. However, existing studies primarily derive results under stationary conditions <cit.> or nonstationary conditions with observed domain indices <cit.>.
These methods are limited in application as general time series data are typically nonstationary and domain information is difficult to obtain. Recent studies <cit.> have adopted a Markov structure to handle nonstationary domain variables and can infer domain indices directly from observed data. (More related work can be found in Appendix <ref>.) However, these methods face significant limitations; some are inadequate for modeling time-delayed causal relationships in latent spaces, and they rely on the Markov property, which cannot adequately capture the arbitrary nonstationary variations in domain variables. This leads us to the following important yet unresolved question:
How can we establish identifiability of nonstationary nonlinear ICA for
general sequence data without prior knowledge of domain variables?
Relying on observing domain variables or known Markov priors to capture nonstationarity seems counter-intuitive, especially considering how easily humans can identify domain shifts given sufficient variation in transitions, as in video action segmentation <cit.> and recognition <cit.> tasks.
In this work, we theoretically investigate the conditions on the significance of transition variability to identify distribution shifts.
The core idea is transition clustering, assuming transitions within the same domain are similar, while transitions across different domains are distinct.
Building on this identification theorem, we propose Causal Temporal Representation Learning with Nonstationary Sparse Transition to identify both distribution shifts and latent temporal dynamics. Specifically, we constrain the complexity of the transition function to identify domain shifts. Subsequently, with the identified domain variables, we learn the latent variables using conditional independence constraints. These two processes are jointly optimized within a VAE framework.
The main contributions of this work are as follows: (1) To the best of our knowledge, this is the first identifiability result that handles nonstationary time-delayed causally related latent temporal processes without prior knowledge of the domain variables. (2) We present a principled framework for recovering both nonstationary domain variables and time-delayed latent causal dynamics. (3) Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed method in recovering latent variables and domain indices.
§ PROBLEM FORMULATION
§.§ Nonstationary Time Series Generative Model
[Figure: Graphical model for nonstationary, causally related, time-delayed time-series data with unobserved domain variables u_t.]
We first introduce a nonstationary time-series generative model. Our observational dataset is 𝒟 = {𝐱_t}_t=1^T, where 𝐱_t∈ℝ^n is produced from causally related, time-delayed latent components 𝐳_t ∈ℝ^n through an invertible mixing function 𝐠:
𝐱_t = 𝐠(𝐳_t).
In the nonstationary setting, transitions within the latent space vary over time. Define u as the domain or regime index variable, with u_t corresponding to time step t. Assuming U distinct regimes, i.e., u_t ∈{1, 2, …, U}, each regime exhibits unknown distribution shifts. Those regimes are characterized by U different transition functions {𝐦_u}_u=1^U, which were originally explored in <cit.> through change factors to capture these distribution shifts in transition dynamics.
The i-th component of 𝐳_t is then generated via the i-th component of 𝐦:
z_t,i = m_i(u_t, {z_t',j| z_t',j∈𝐏𝐚(z_t,i)}, ϵ_t,i),
where 𝐏𝐚(z_t,i) represents the set of latent factors directly influencing z_t,i, which may include any subset of 𝐳_<t = {z_τ,i|τ∈{1,2,…,t-1}, i∈{1,2,…,n}}.
For analytical simplicity, we assume that the parents in the causal graph are restricted to elements in 𝐳_t-1. Extensions to higher-order cases, which involve multistep, time-delayed causal relations, are discussed in Appendix S1.5 of <cit.>. These extensions are orthogonal to our contributions and are therefore omitted here for brevity.
Importantly, in a nonstationary context, 𝐏𝐚(·) may also be sensitive to the domain index u_t, indicating that causal dependency graphs vary across different domains or regimes, which will be revisited in our discussion on identifiability. We assume that the generation process for each i-th component of 𝐳_t is mutually independent, conditioned on 𝐳_<t and u_t, and we assume that the noise terms ϵ_t,i are mutually independent across dimensions and over time, thereby excluding instantaneous causal interactions among latent causal factors. Figure <ref> illustrates the graphical model for this setting.
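As a concrete illustration of this generative process, the following minimal Python sketch simulates Eqs. (<ref>) and (<ref>) under simplifying assumptions: a tanh nonlinearity for the mixing 𝐠, additive Gaussian noise, random per-domain sparsity masks playing the role of the domain-specific supports, and a piecewise-constant domain sequence. All of these specific choices are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, U = 4, 1000, 3                           # latent dim, sequence length, number of domains
masks = rng.random((U, n, n)) < 0.5            # domain-specific causal supports (assumed)
weights = rng.normal(size=(U, n, n)) * masks   # transition weights respecting the supports
W_mix = rng.normal(size=(n, n))                # weights of a simple nonlinear mixing g

z = rng.normal(size=n)
zs, xs, us = [], [], []
for t in range(T):
    u = (t * U) // T                           # piecewise-constant domain index u_t
    z = np.tanh(weights[u] @ z) + 0.1 * rng.normal(size=n)   # z_t = m_u(z_{t-1}, eps_t)
    x = np.tanh(W_mix @ z)                     # x_t = g(z_t)
    zs.append(z); xs.append(x); us.append(u)
```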
§.§ Identifiability of Domain Variables and Latent Causal Processes
We introduce the identifiability for both domain variables and time-delayed latent causal processes in Definitions <ref> and <ref>, respectively. If the estimated latent processes are identifiable at least up to a permutation and component-wise invertible transformations, then the latent causal relationships are also immediately identifiable. This is because conditional independence relations comprehensively characterize the time-delayed causal relations within a time-delayed causally sufficient system, in which there are no latent causal confounders in the causal processes. Notably, invertible component-wise transformations on latent causal processes preserve their conditional independence relationships. We now present definitions related to observational equivalence, the identifiability of domain variables and latent causal processes.
Formally, consider {𝐱_t}_t=1^T as a sequence of observed variables generated by true temporally causal latent processes specified by (𝐦, 𝐮, p(ϵ), 𝐠) given in Eqs. (<ref>) and (<ref>). Here, 𝐦 and ϵ denote the concatenated vector form across n dimensions in the latent space; similarly, 𝐮 denotes the concatenation over time steps 1 to T. A learned generative model (𝐦̂, 𝐮̂, p̂(ϵ), 𝐠̂) is observationally equivalent to the ground truth one (𝐦, 𝐮, p(ϵ), 𝐠) if the model distribution p_𝐦̂, 𝐮̂, p̂_ϵ, 𝐠̂({𝐱_t}_t=1^T) matches the data distribution p_𝐦, 𝐮, p_ϵ, 𝐠({𝐱_t}_t=1^T) everywhere.
Domain variables are said to be identifiable up to label swapping if observational equivalence (Def. <ref>) implies identifiability of domain variables up to label swapping or a permutation σ for domain indices:
p_𝐦̂, 𝐮̂, p̂_ϵ, 𝐠̂({𝐱_t}_t=1^T) = p_𝐦, 𝐮, p_ϵ, 𝐠({𝐱_t}_t=1^T)
⇒û_t = σ(u_t), ∀ t ∈{1,2,…,T}.
The latent causal processes are said to be identifiable if observational equivalence (Def. <ref>) leads to the identifiability of latent variables up to a permutation π and component-wise invertible transformation 𝒯:
p_𝐦̂, 𝐮̂, p̂_ϵ, 𝐠̂({𝐱_t}_t=1^T) = p_𝐦, 𝐮, p_ϵ, 𝐠({𝐱_t}_t=1^T)
⇒𝐠̂^-1(𝐱_t) = 𝒯∘π∘𝐠^-1(𝐱_t), ∀𝐱_t ∈𝒳,
where 𝒳 denotes the observation space.
§ IDENTIFIABILITY THEORY
In this section, we demonstrate that under mild conditions, the domain variables u_t are identifiable up to label swapping and the latent variables 𝐳_t are identifiable up to permutation and component-wise transformations. We partition our theoretical discussion into two sections: (1) identifiability of nonstationary discrete domain variables u_t and (2) identifiability of latent causal processes.
We slightly extend the usage of supp(·) to define the square matrix support and the support of a square matrix function as follows:
The support (set) of a square matrix 𝐀∈ℝ^n × n is defined using the indices of non-zero entries as:
supp(𝐀) {(i,j) |𝐀_i,j≠ 0}.
The support (set) of a square matrix function 𝐀 : Θ→ℝ^n × n is defined as:
supp(𝐀(Θ)) {(i,j) |∃θ∈Θ, 𝐀(θ)_i,j≠ 0 }.
For brevity, let ℳ and ℳ̂ denote the n × n binary matrices reflecting the support of the Jacobians 𝐉_𝐦(𝐳_t) and 𝐉_𝐦̂(𝐳̂_t), respectively. The (i,j) entry of ℳ is 1 if and only if (i,j) ∈supp(𝐉_𝐦). We can further define the transition complexity as the entry-wise sum |ℳ| = ∑_i,jℳ_i,j, and similarly for ℳ̂. In the nonstationary context, this support matrix is a function of the domain index u, denoted as ℳ_u and ℳ̂_u. Additionally, we introduce the concept of weakly diverse lossy transitions for the data generation process:
The set of transition functions described in Eq. (<ref>) is said to be weakly diverse lossy if it satisfies the following conditions:
* (Lossy) For every time and indices tuple (t,i,j) with edge z_t-1,i→ z_t,j representing a causal link defined with the parents set 𝐏𝐚(z_t,j) in Eq. <ref>, the transition function m_j is a lossy transformation w.r.t. z_t-1,i, i.e., there exists an open set S_t,i,j such that changing z_t-1,i within this set does not change the value of m_j, i.e., ∀ z_t-1,i∈ S_t,i,j, ∂ m_j/∂ z_t-1,i = 0.
* (Weakly Diverse) For every element z_t-1,i of the latent variable 𝐳_t-1 and its corresponding children set 𝒥_t,i = {j | z_t-1,i∈𝐏𝐚(z_t,j), j∈{1,2,…,n}}, transition functions {m_j}_j ∈𝒥_t,i are weakly diverse i.e., the intersection of the sets S_t,i = ∩_j∈𝒥_t,i S_t,i,j is not empty, and such sets are diverse, i.e., S_t,i≠∅, and S_t,i,j∖ S_t,i≠∅, ∀ j∈𝒥_t,i.
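To illustrate the support-matrix machinery defined above, the following minimal Python sketch numerically estimates the Jacobian support matrix supp(𝐉_𝐦) of a transition function and the corresponding transition complexity |ℳ| via central finite differences at sampled latent points. The probe points, step size, and threshold are illustrative assumptions.

```python
import numpy as np

def support_and_complexity(transition, points, h=1e-4, tol=1e-6):
    """Estimate supp(J_m) and |M| for a transition z -> m(z) at sampled points."""
    n = points.shape[1]
    support = np.zeros((n, n), dtype=bool)           # support[i, j]: does z_j influence m_i?
    for z in points:
        for j in range(n):
            dz = np.zeros(n); dz[j] = h
            grad = (transition(z + dz) - transition(z - dz)) / (2 * h)  # column j of the Jacobian
            support[:, j] |= np.abs(grad) > tol      # union of supports over the probe points
    return support, int(support.sum())               # (supp(J_m), transition complexity |M|)
```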
§.§ Identifiability of Domain Variables
Suppose that the dataset 𝒟 are generated from the nonstationary data generation process as described in Eqs. (<ref>) and (<ref>). Suppose the transitions are weakly diverse lossy (Def. <ref>)
and the following assumptions hold:
* (Mechanism Separability) There exists a ground truth mapping 𝒞: 𝒳×𝒳→𝒰 determined the real domain indices, i.e., u_t = 𝒞(𝐱_t-1, 𝐱_t).
* (Mechanism Sparsity) The estimated transition complexity on dataset 𝒟 is less than or equal to ground truth transition complexity, i.e., 𝔼_𝒟 | ℳ_û | ≤𝔼_𝒟 | ℳ_u |.
* (Mechanism Variability) Mechanisms are sufficiently different. For all u≠ u', ℳ_u≠ℳ_u' i.e. there exists index (i,j) such that [ℳ_u]_i,j≠[ℳ_u']_i,j.
Then the domain variables u_t is identifiable up to label swapping (Def. <ref>).
Theorem <ref> states that if we successfully learn a set of estimated transitions {𝐦̂_u}_u=1^U, the decoder 𝐠̂, and the domain clustering assignment 𝒞̂, where 𝐦̂_u corresponds to the estimation of Eq. (<ref>) for a particular regime or domain u, and the system can fit the data as follows:
𝐱̂_t = 𝐠̂∘𝐦̂_û_t∘𝐠̂^-1(𝐱_t-1) and û_t = 𝒞̂(𝐱_t-1, 𝐱_t),
assuming that the transition complexity is kept low (as per Assumption <ref>). Then the estimated domain variables û_t must be the true domain variables u_t up to a permutation.
Proof sketch
The core idea of this proof is to demonstrate that the global minimum of transition complexity can only be achieved when the domain variables u_t are correctly estimated. (1) First, when we have an optimal decoder estimation 𝐠̂^* which is a component-wise transformation of the ground truth, incorrect estimations of u_t will strictly increase the transition complexity, i.e., 𝔼_𝒟 |ℳ^*_û| > 𝔼_𝒟 |ℳ^*_u|. (2) Second, with arbitrary estimations û_t, the transition complexity for non-optimal decoder estimation 𝐠̂ will be equal to or higher than that for the optimal 𝐠̂^*, i.e., 𝔼_𝒟 |ℳ_û| ≥𝔼_𝒟 |ℳ^*_û|. Thus, the global minimum of transition complexity can only be achieved when u_t is optimally estimated, which must be a permuted version of the ground truth domain variables u_t. For a detailed proof, see appendix <ref>.
§.§ Remark on Mechanism Variability
The assumption of mechanism variability, as outlined in Assumption <ref>, requires the Jacobian support matrices across domains must be distinct, which means that the causal graph linking past states (𝐳_t-1) to current states (𝐳_t) varies by at least one edge. But addressing scenarios where the causal graphs remain constant but the functions behind the edges change is challenging without additional assumptions; more detailed discussion on why this is in general challenging can be found in the Appendix <ref>.
To effectively address these scenarios, we extend the concept of the Jacobian support matrix by incorporating higher-order derivatives. This extension provides a more detailed characterization of the variability in transition functions across different domains. We now present the following definition to formalize this concept:
The k-th order partial derivative support matrix for transition 𝐦 denoted as ℳ^k is a binary n× n matrix with
[ℳ^k]_i,j = 1 ∃𝐳∈𝒵, ∂^k m_j/∂ z_i^k≠ 0.
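A minimal sketch of how a single entry of the k-th order support matrix ℳ^k could be checked numerically with repeated automatic differentiation is given below; the scalar-valued interface of the transition component `m_j`, the sampled probe points, and the tolerance are illustrative assumptions.

```python
import torch

def kth_order_support_entry(m_j, z_points, i, k, tol=1e-6):
    """Return 1 if d^k m_j / d z_i^k is non-zero at any sampled probe point, else 0."""
    for z in z_points:
        z = z.clone().detach().requires_grad_(True)
        out = m_j(z)                                   # scalar output of the j-th component
        for _ in range(k):                             # differentiate k times w.r.t. z_i
            g = torch.autograd.grad(out, z, create_graph=True, allow_unused=True)[0]
            if g is None:                              # derivative is identically zero
                out = torch.zeros(())
                break
            out = g[i]
        if out.abs().item() > tol:
            return 1
    return 0
```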
We utilize the variability in the higher-order partial derivative support matrix to extend the identifiability results of Theorem <ref>. This extension applies to cases where the causal graphs remain identical across two domains, yet the transition functions take different forms.
Suppose the data 𝒟 is generated from the nonstationary data generation process described in (<ref>) and (<ref>). Assume the transitions are weakly diverse lossy (Def. <ref>), and the mechanism separability assumption <ref> along with the following assumptions hold:
* (Mechanism Function Variability) Mechanism Functions are sufficiently different. There exists K ∈ℕ such that for all u≠ u', there exists k ≤ K, ℳ_u^k ≠ℳ_u'^k i.e. there exists index (i,j) such that [ℳ_u^k]_i,j≠[ℳ_u'^k]_i,j.
* (Higher Order Mechanism Sparsity) The estimated transition complexity on dataset 𝒟 is no more than ground truth transition complexity,
𝔼_𝒟∑_k=1^K | ℳ_û^k | ≤𝔼_𝒟∑_k=1^K | ℳ_u^k |.
Then the domain variables u_t are identifiable up to label swapping (Def. <ref>).
We utilize the fact that for two distinct domains, there exists an edge in the causal graph, and its k-th order partial derivative supports are different, making the two domains separable. The detailed proof of this extension is provided in Appendix <ref>.
§.§ Identifiability of Latent Causal Process
Once the identifiability of u_t is achieved, the problem reduces to a nonstationary temporal nonlinear ICA with an observed domain index.
Leveraging the sufficient variability approach proposed in <cit.>, we demonstrate full identifiability. This sufficient variability concept is further incorporated into the following lemma, adapted from Theorem 2 in <cit.>:
Suppose that the data 𝒟 are generated from the nonstationary data generation process as described in Eqs. (<ref>) and (<ref>). Let η_kt(u) denote the logarithmic density of k-th variable in 𝐳_t, i.e., η_kt(u)≜log p(z_t,k | 𝐳_t-1, u), and there exists an invertible function 𝐠̂ that maps 𝐱_t to 𝐳̂_t, i.e., 𝐳̂_t = 𝐠̂(𝐱_t)
such that the components of 𝐳̂_t are mutually independent conditional on 𝐳̂_t-1.
(Sufficient variability) Let
𝐯_k,t(u) ≜(∂^2 η_kt(u)/∂ z_t,k∂ z_t-1,1, ∂^2 η_kt(u)/∂ z_t,k∂ z_t-1,2, ..., ∂^2 η_kt(u)/∂ z_t,k∂ z_t-1,n)^⊺,
𝐯̃_k,t(u) ≜(∂^3 η_kt(u)/∂ z_t,k^2 ∂ z_t-1,1, ∂^3 η_kt(u)/∂ z_t,k^2 ∂ z_t-1,2, ..., ∂^3 η_kt(u)/∂ z_t,k^2 ∂ z_t-1,n)^⊺,
𝐬_kt ≜( 𝐯̃_kt(1)^⊺, ..., 𝐯̃_kt(U)^⊺, ∂^2 η_kt(2)/∂ z_t,k^2 - ∂^2 η_kt(1)/∂ z_t,k^2, ..., ∂^2 η_kt(U)/∂ z_t,k^2 - ∂^2 η_kt(U-1)/∂ z_t,k^2 )^⊺,
𝐬̃_kt ≜( 𝐯_kt(1)^⊺, ..., 𝐯_kt(U)^⊺, ∂η_kt(2)/∂ z_t,k - ∂η_kt(1)/∂ z_t,k, ..., ∂η_kt(U)/∂ z_t,k - ∂η_kt(U-1)/∂ z_t,k)^⊺.
Suppose 𝐱_t = 𝐠(𝐳_t) and that the conditional distribution p(z_k,t | 𝐳_t-1) may change across m domains. Suppose that the components of 𝐳_t are mutually independent conditional on 𝐳_t-1 in each context. Assume that the components of 𝐳̂_t produced by 𝐠̂ are also mutually independent conditional on 𝐳̂_t-1.
If the 2n function vectors 𝐬_k,t and 𝐬̃_k,t, with k=1,2,...,n, are linearly independent, then 𝐳̂_t is a permuted invertible component-wise transformation of 𝐳_t.
Then, in conjunction with Theorem <ref>, complete identifiability is achieved for both the domain variables u_t and the independent components 𝐳_t. See detailed proof in Appendix <ref>.
Suppose that the data 𝒟 are generated from the nonstationary data generation process as described in Eqs. (<ref>) and (<ref>), which satisfies the conditions in both Theorem <ref> and Lemma <ref>, then the domain variables u_t are identifiable up to label swapping (Def. <ref>) and latent causal process 𝐳_t are identifiable up to permutation and a component-wise transformation (Def. <ref>).
Discussion on Assumptions
The proof of Theorem <ref> relies on several key assumptions which align with human intuition for understanding of domain transitions.
Firstly, separability states that if human observers cannot distinguish between two domains, it is unlikely that automated systems can achieve this distinction either.
Secondly, variability requires that the transitions across domains are significant enough to be noticeable by humans, implying that there must be at least one altered edge in the causal graph across the domains.
The mechanism sparsity is a standard assumption that has been previously explored in <cit.> using sparsity regularization to enforce the sparsity of the estimated function.
The assumption of weakly diverse lossy transitions is a mild and realistic condition in real-world scenarios, allowing for identical future latent states with differing past states.
The sufficient variability in Theorem <ref> is widely explored and adopted in nonlinear ICA literature <cit.>. For a more detailed discussion of the feasibility and intuition behind these assumptions, we refer the reader to the Appendix <ref>.
§ THE FRAMEWORK
§.§ Model Architecture
[Figure: Illustration of the proposed framework with its three components: (1) Sparse Transition, (2) Prior Network, and (3) Encoder-Decoder module.]
Our framework builds on the VAE <cit.> architecture, incorporating dedicated modules to handle nonstationarity. It enforces the conditions discussed in Sec. <ref> as constraints. As shown in Fig. <ref>, the framework consists of three primary components: (1) Sparse Transition, (2) Prior Network, and (3) Encoder-Decoder.
Sparse Transition
The transition module in our framework is designed to estimate transition functions {𝐦̂_u}_u=1^U and a clustering function 𝒞̂ as specified in Eq. (<ref>). As highlighted in Sec. <ref>, the primary objective of this module is to model the transitions in the latent space and minimize the empirical transition complexity. To achieve this, we implemented U different transition networks for various 𝐦̂(û_t,·) and added sparsity regularization to the transition functions via a sparsity loss. A gating function with a (hard)-Gumbel-Softmax function was used to generate û_t, which was then employed to select the corresponding transition network 𝐦̂_û_t. This network was further used to calculate the transition loss, which is explained in detail in Sec. <ref>.
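A minimal PyTorch-style sketch of this gating mechanism is shown below: a gating network produces domain logits from (𝐱_t-1, 𝐱_t), a hard Gumbel-Softmax sample selects one of the U transition networks, and the selected network predicts the next latent state. The MLP architectures, hidden sizes, and exact interfaces are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTransition(nn.Module):
    def __init__(self, z_dim: int, x_dim: int, n_domains: int, hidden: int = 64):
        super().__init__()
        # gating network C_hat(x_{t-1}, x_t) -> domain logits
        self.gate = nn.Sequential(nn.Linear(2 * x_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_domains))
        # one transition network m_hat_u per domain
        self.transitions = nn.ModuleList(
            [nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
             for _ in range(n_domains)])

    def forward(self, x_prev, x_curr, z_prev):
        logits = self.gate(torch.cat([x_prev, x_curr], dim=-1))
        u_onehot = F.gumbel_softmax(logits, tau=1.0, hard=True)      # one-hot domain estimate u_t
        preds = torch.stack([m(z_prev) for m in self.transitions], dim=1)  # (B, U, z_dim)
        z_pred = (u_onehot.unsqueeze(-1) * preds).sum(dim=1)         # select m_{u_t}(z_{t-1})
        return z_pred, u_onehot
```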
Prior Network
The Prior Network module aims to effectively estimate the prior distribution p(ẑ_t,i | 𝐳̂_t-1, û_t). This is achieved by evaluating p(ẑ_t,i | 𝐳̂_t-1, û_t) = p_ϵ_i(m̂_i^-1(û_t,ẑ_t,i, 𝐳̂_t-1))|∂m̂_i^-1/∂ẑ_t,i|, where m̂_i^-1(û_t,·) is the learned holistic inverse dynamics model.
To ensure the conditional independence of the estimated latent variables, p(𝐳̂_t | 𝐳̂_t-1), we utilize an isotropic noise distribution for ϵ and aggregate all estimated component densities to obtain the joint distribution p(𝐳̂_t | 𝐳̂_t-1, û_t) as shown in Eq. (<ref>). Given the lower-triangular nature of the Jacobian, its determinant can be computed as the product of its diagonal terms. A detailed derivation is provided in Appendix <ref>.
log p(𝐳̂_t |𝐳̂_t-1, û_t) = ∑_i=1^n log p(ϵ̂_i |û_t)_Conditional independence + ∑_i=1^n log| ∂m̂^-1_i/∂ẑ_t,i|_Lower-triangular Jacobian
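The following minimal PyTorch-style sketch evaluates this prior log-density for a batch of latent samples; the `inverse_dynamics` interface (returning the component-wise noise estimates ϵ̂_i) and the standard-normal noise model are illustrative assumptions.

```python
import torch
import torch.distributions as D

def prior_log_prob(z_t, z_prev, u_onehot, inverse_dynamics):
    # z_t is assumed to already carry gradients (e.g. a reparameterized sample), so the
    # diagonal Jacobian terms d eps_i / d z_{t,i} can be obtained with autograd.
    eps_hat = inverse_dynamics(z_t, z_prev, u_onehot)            # (B, n), eps_hat_i = m_i^{-1}(u_t, z_{t,i}, z_{t-1})
    noise = D.Normal(torch.zeros_like(eps_hat), torch.ones_like(eps_hat))
    log_p_eps = noise.log_prob(eps_hat).sum(dim=-1)              # sum_i log p(eps_i | u_t)
    diag = []
    for i in range(eps_hat.shape[-1]):                           # one diagonal Jacobian entry at a time
        grad = torch.autograd.grad(eps_hat[:, i].sum(), z_t, create_graph=True)[0]
        diag.append(grad[:, i])
    log_abs_det = torch.log(torch.stack(diag, dim=-1).abs() + 1e-8).sum(dim=-1)
    return log_p_eps + log_abs_det                               # component noise density + lower-triangular log-det
```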
Encoder-Decoder
The third component is an Encoder-Decoder module that utilizes reconstruction loss to enforce the invertibility of the learned mixing function 𝐠̂. Specifically, the encoder fits the demixing function 𝐠̂^-1 and the decoder fits the mixing function 𝐠̂.
§.§ Optimization
The first training objective of our framework is to fit the estimated transitions with minimum transition complexity according to Eq. (<ref>):
ℒ_sparse≜𝔼_𝒟 L(𝐦̂_û_t(𝐳̂_t-1), 𝐳̂_t)_Transition loss + 𝔼_𝒟 | ℳ_û |_Sparsity loss,
where L(·,·) is a regression loss function to fit the transition estimations, and the sparsity loss is approximated via L_2 norm of the parameter in the transition estimation functions.
Then the second part is to maximize the Evidence Lower BOund (ELBO) for the VAE framework, which can be written as follows (complete derivation steps are in Appendix <ref>):
ELBO≜𝔼_𝐳_t∑_t=1^Tlog p_data(𝐱_t|𝐳_t)_-ℒ_Recon
+ ∑_t=1^Tlog p_data(𝐳_t|𝐳_t-1,u_t)
- ∑_t=1^Tlog q_ϕ(𝐳_t|𝐱_t)_-ℒ_KLD
We use mean-squared error for the reconstruction likelihood loss ℒ_Recon.
The KL divergence ℒ_KLD is estimated via a sampling approach since with a learned nonparametric transition prior, the distribution does not have an explicit form.
Specifically, we obtain the log-likelihood of the posterior, evaluate the prior log p(𝐳̂_t |𝐳̂_t-1, û_t) in Eq. (<ref>), and compute their mean difference in the dataset as the KL loss: ℒ_KLD = 𝔼_ẑ_t ∼ q(ẑ_t |𝐱_t)log q(ẑ_t|𝐱_t) - log p(𝐳̂_t |𝐳̂_t-1, û_t).
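Putting the pieces together, a minimal sketch of the overall objective (transition fitting plus sparsity, and the negative ELBO terms with the sampled KL) might look as follows; the loss weights and the use of mean-squared error with an L_2 penalty as the sparsity surrogate are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(x_t, x_hat, z_t, z_pred, log_q, log_prior, transition_params,
               beta=1.0, gamma=1e-3):
    recon = F.mse_loss(x_hat, x_t)                          # L_Recon
    kld = (log_q - log_prior).mean()                        # sampled KL: E[log q(z_t|x_t) - log p(z_t|z_{t-1}, u_t)]
    trans = F.mse_loss(z_pred, z_t)                         # transition loss L(m_u(z_{t-1}), z_t)
    sparsity = sum(p.pow(2).sum() for p in transition_params)   # surrogate for the sparsity loss |M_u|
    return recon + beta * kld + trans + gamma * sparsity
```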
§ EXPERIMENTS
We assessed the identifiability performance of on both synthetic and real-world datasets.
For synthetic datasets, where we control the data generation process completely, we conducted a comprehensive evaluation. This evaluation covers the full spectrum of unknown nonstationary causal temporal representation learning, including metrics for both domain variables and the latent causal processes.
In real-world scenarios, our method was employed in video action segmentation tasks. The evaluation metrics focus on the accuracy of action estimation for each video frame, which directly reflects the identifiability of domain variables.
§.§ Synthetic Experiments on Causal Representation Learning
Experiment Setup
For the domain variables, we assessed the clustering accuracy (Acc) of the estimated discrete domain variables u_t. As the label order in clustering algorithms is not predetermined, we selected the order or permutation that yielded the highest accuracy score.
For the latent causal processes, we computed the mean correlation coefficient (MCC) between the estimated latent variables 𝐳̂_t and the ground truth 𝐳_t. The MCC, a standard measure in the ICA literature for continuous variables, assesses the identifiability of the learned latent causal processes. We adjusted the reported MCC values in Table <ref> by multiplying them by 100 to enhance the significance of the comparisons.
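A minimal sketch of these two evaluation metrics, with the permutation alignment solved by the Hungarian algorithm, is given below; the array shapes (time-by-dimension latents, integer domain labels in {0, …, U-1}) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(z_true, z_est):
    """Mean correlation coefficient between true and estimated latents, up to permutation."""
    n = z_true.shape[1]
    corr = np.abs(np.corrcoef(z_true.T, z_est.T)[:n, n:])   # cross-correlation block
    row, col = linear_sum_assignment(-corr)                  # best dimension matching
    return corr[row, col].mean()

def domain_accuracy(u_true, u_est, n_domains):
    """Clustering accuracy of u_t under the best label permutation (label swapping)."""
    hits = np.zeros((n_domains, n_domains))
    for a, b in zip(u_true, u_est):
        hits[a, b] += 1
    row, col = linear_sum_assignment(-hits)
    return hits[row, col].sum() / len(u_true)
```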
Baselines We compared our method with identifiable nonlinear ICA methods:
(1) BetaVAE <cit.>, which ignores both history and nonstationarity information.
(2) i-VAE <cit.> and TCL <cit.>, which leverage nonstationarity to establish identifiability but assume independent factors.
(3) SlowVAE <cit.> and PCL <cit.>, which exploit temporal constraints but assume independent sources and stationary processes.
(4) TDRL <cit.>, which assumes nonstationary causal processes but with observed domain indices.
(5) HMNLICA <cit.>, which considers the unobserved nonstationary part in the data generation process but does not allow any causally related time-delayed relations.
(6) NCTRL <cit.>, which extends HMNLICA to an autoregressive setting to allow causally related time-delayed relations in the latent space but still assumes a Markov chain on the domain variables.
Result and Analysis We generate synthetic datasets that satisfy our identifiability conditions in Theorems <ref> and <ref>; detailed procedures are given in Appendix <ref>. The primary findings are presented in Table <ref>.
Note: the MCC metric is consistently available in all methods; however, the Acc metric for u_t is only applicable to methods capable of estimating domain variables u_t.
Table: Experimental results on the synthetic dataset for the baseline models and the proposed method. All experiments were conducted using three different random seeds to calculate the average and standard deviation. The best results are highlighted in bold.

u_t            Method     z_t MCC          u_t Acc (%)
Ground Truth   TDRL (GT)  96.93 ± 0.16     -
N/A            TCL        24.19 ± 0.85     -
N/A            PCL        38.46 ± 6.85     -
N/A            BetaVAE    42.37 ± 1.47     -
N/A            SlowVAE    41.82 ± 2.55     -
N/A            i-VAE      81.60 ± 2.51     -
N/A            TDRL       53.45 ± 1.31     -
Estimated      HMNLICA    17.82 ± 30.87    13.67 ± 23.67
Estimated      NCTRL      47.27 ± 2.15     34.94 ± 4.20
Estimated      Ours       96.74 ± 0.17     98.21 ± 0.05

In the first row of Table <ref>, we evaluated a recent nonlinear temporal
ICA method, TDRL, providing ground truth u_t to establish an upper performance limit for the proposed framework.
The high MCC (> 0.95) indicates the model's identifiability.
Subsequently, the table lists six baseline methods that neglect the nonstationary domain variables, with none achieving a high MCC. The remaining approaches, including our proposed method, are able to estimate the domain variables u_t and recover the latent variables. In particular, HMNLICA exhibits instability during training, leading to considerable performance variability. This instability stems from HMNLICA's inability to allow time-delayed causal relationships among hidden variables 𝐳_t, leading to model training failure when the actual domain variables deviate from the Markov assumption. In contrast, NCTRL, which extends TDRL under the same assumption, demonstrates enhanced stability and performance over HMNLICA by accommodating transitions in 𝐳_t. However, since they rely on an incorrect assumption about the nonstationary domain variables, the performance of those methods can be even worse than that of methods which do not include the domain information at all. Considering the significant nonstationarity and the deviation from the Markov property, those methods struggled to robustly estimate either the domain variables or the latent causal processes. Compared to all baselines, our proposed method reliably recovers both u_t (Acc > 95%) and 𝐳_t (MCC > 0.95), and the MCC is on par with the upper performance bound obtained when the domain variables are given, justifying its effectiveness.
[Figure: Visualization of the three-phase training process of our method.]
Detailed Training Analysis To further validate our theoretical analysis, we present a visualization of the entire training process of our method in Figure <ref>. It consists of three phases: (1) In Phase 1, the initial estimations for both u_t and 𝐳_t are imprecise. (2) During Phase 2, the accuracy of the estimation of u_t continues to improve, although the quality of the estimation of 𝐳_t remains relatively unchanged compared with Phase 1. (3) In Phase 3, as u_t becomes clearly identifiable, the MCC of 𝐳_t progressively improves, ultimately achieving full identifiability. This three-phase process aligns perfectly with our theoretical predictions. According to Theorem <ref>, Phases 1 and 2 should exhibit suboptimal 𝐳_t estimations, while the sparsity constraints can still guide training and improve the accuracy of the domain variables u_t. Once the accuracy of u_t becomes high, Theorem <ref> drives the improvement in MCC for the 𝐳_t estimations, leading to the final achievement of full identifiability of both the latent causal processes 𝐳_t and the domain variables u_t.
§.§ Real-world Application on Weakly Supervised Action Segmentation
Experiment Setup
Our method was tested on the video action segmentation task to estimate actions (domain variables u_t). Following <cit.>, we use the same weakly supervised setting utilizing meta-information, such as action order. The evaluation included several metrics: Mean-over-Frames (MoF), the percentage of correctly predicted labels per frame;
Intersection-over-Union (IoU), defined as |I ∩ I^*|/|I ∪ I^*|; and Intersection-over-Detection (IoD), defined as |I ∩ I^*| / |I|, where I^* and I are the ground-truth segment and the predicted segment of the same class. Our evaluation used two datasets: Hollywood Extended <cit.>, which includes 937 videos with 16 daily action categories, and CrossTask <cit.>, focusing on 14 of 18 primary tasks related to cooking <cit.>, comprising 2552 videos across 80 action categories.
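For reference, a minimal sketch of the three segmentation metrics is given below; computing IoU and IoD per class (rather than per matched segment) is a simplifying assumption made only for illustration.

```python
import numpy as np

def mof(gt, pred):
    """Mean-over-Frames: fraction of frames with correctly predicted labels."""
    return np.mean(np.asarray(gt) == np.asarray(pred))

def iou_iod(gt, pred, labels):
    """Simplified per-class IoU and IoD, averaged over the classes present."""
    gt, pred = np.asarray(gt), np.asarray(pred)
    ious, iods = [], []
    for c in labels:
        inter = np.sum((gt == c) & (pred == c))
        union = np.sum((gt == c) | (pred == c))
        detect = np.sum(pred == c)
        if union > 0:
            ious.append(inter / union)
        if detect > 0:
            iods.append(inter / detect)
    return np.mean(ious), np.mean(iods)
```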
Model Design Our model is built on top of the ATBA <cit.> method, which uses multi-layer transformers as backbone networks. We add our sparse transition module with the sparsity loss function detailed in Sec. <ref>. Specifically, we integrated a temporally latent transition layer into ATBA's backbone, using a transformer layer across the time axis for the Hollywood dataset and an LSTM for the CrossTask dataset. To enforce sparsity in the latent transitions, L_2 regularization is applied to the weights of the temporally latent transition layer.
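A minimal sketch of such an add-on transition layer with an L_2 penalty on its weights is shown below; the layer type, feature dimension, and interface to the backbone are illustrative assumptions.

```python
import torch.nn as nn

class TemporalTransition(nn.Module):
    """Temporally latent transition layer placed on top of frame features."""
    def __init__(self, dim: int = 256):
        super().__init__()
        # an LSTM is used here for illustration; a transformer layer over the time axis works analogously
        self.layer = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, feats):                # feats: (B, T, dim) frame features from the backbone
        out, _ = self.layer(feats)
        return out

    def l2_penalty(self):                    # sparsity regularizer on the transition weights
        return sum(p.pow(2).sum() for p in self.layer.parameters())
```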
Table: Real-world experiment results on the action segmentation task. We use the reported values for the baseline methods from <cit.>. Best results are highlighted in bold.

Dataset     Method              MoF        IoU        IoD
Hollywood   HMM+RNN <cit.>      -          11.9       -
Hollywood   CDFL <cit.>         45.0       19.5       25.8
Hollywood   TASL <cit.>         42.1       23.3       33
Hollywood   MuCon <cit.>        -          13.9       -
Hollywood   ATBA <cit.>         47.7       28.5       44.9
Hollywood   Ours                52.9±3.1   32.7±1.3   52.4±1.8
CrossTask   NN-Viterbi <cit.>   26.5       10.7       24.0
CrossTask   CDFL <cit.>         31.9       11.5       23.8
CrossTask   TASL <cit.>         40.7       14.5       25.1
CrossTask   POC <cit.>          42.8       15.6       -
CrossTask   ATBA <cit.>         50.6       15.7       24.6
CrossTask   Ours                54.0±0.9   15.7±0.5   23.6±0.8
Result and Analysis
The primary outcomes for real-world applications in action segmentation are summarized in Table <ref>. Traditional methods based on hidden Markov models, such as HMM+RNN <cit.> and NN-Viterbi <cit.>, face challenges in these real-world scenarios. This observation corroborates our previous discussions on the limitations of earlier identifiability methods <cit.>, which depend on the Markov assumption for domain variables. Our approach significantly outperforms the baselines in both the Hollywood and CrossTask datasets across most metrics. Especially in the Hollywood dataset, our method outperforms the base ATBA model by quite a large margin. Notably, the Mean-over-Frames (MoF) metric aligns well with our identifiability results for domain variables u_t. Our method demonstrates substantial superiority in this metric. For Intersection-over-Union (IoU) and Intersection-over-Detection (IoD), our results are comparable to those of the baseline methods in the CrossTask dataset and show its superiority in the Hollywood dataset. Furthermore, our proposed sparse transition module aligns with human intuition and is easily integrated into existing methods, further enhancing its impact in real-world scenarios. Additional discussion with visualization in Appendix <ref>.
Table: Ablation study on the sparse transition module on the Hollywood dataset.

Method         MoF    IoU    IoD
Ours           52.9   32.7   52.4
- Complexity   50.5   31.5   51.5
- Module       47.7   28.5   44.9
Ablation Study Furthermore, we conducted an ablation study on the sparse transition module, as detailed in Table <ref>. In this study, "- Complexity" refers to the configuration where we retain the latent transition layers but omit the sparse transition complexity regularization term from these layers, and "- Module" indicates the removal of the entire sparse transition module, effectively reverting the model to the baseline ATBA model. The comparative results in Table <ref> demonstrate that both the dedicated design of the sparse transition module and the complexity regularization term significantly enhance the performance.
§ CONCLUSION
In this study, we developed a comprehensive identifiability theory tailored for general sequential data influenced by nonstationary causal processes under unspecified distributional changes. We then introduced a principled approach to recover both the latent causal variables with their time-delayed causal relations and the values of the domain variables from observational data, without relying on distributional or structural prior knowledge. Our experimental results demonstrate that our method can reliably estimate the domain indices and recover the latent causal process, and that the proposed sparse transition module can be easily adapted to handle real-world scenarios such as the action segmentation task.
Supplement to
“Causal Temporal Representation Learning with
Nonstationary Sparse Transition”
§ IDENTIFIABILITY THEORY
§.§ Proof for Theorem <ref>
We divide the complete proof into two principal steps:
* Firstly, assuming access to the optimal mixing function estimation 𝐠̂^*, we demonstrate that under the conditions in our theorem, the estimated clustering result will align with the ground truth up to label swapping. This alignment is due to the transition complexity with optimal û^*_t and 𝐠̂^* being strictly lower than that with non-optimal û_t but still optimal 𝐠̂^*.
* Secondly, we generalize the results of the first step to cases where the mixing function estimation 𝐠̂ is suboptimal. We establish that for any given clustering assignment, whether optimal or not, a suboptimal mixing function estimation 𝐠̂ can not result in a lower transition complexity. Thus, the transition complexity in scenarios with non-optimal 𝐠̂ will always be at least as high as in the optimal case.
From those two steps, we conclude that the global minimum transition complexity can only be attained when the estimation of domain variables û_t is optimal, hence ensuring that the estimated clustering must match the ground truth up to label swapping. It is important to note that this condition alone does not guarantee the identifiability of the mixing function 𝐠. Because a setting with optimal û^*_t and a non-optimal 𝐠̂ may exhibit equivalent transition complexity to the optimal scenario, but it does not compromise our proof for the identifiability of domain variables u_t. Further exploration of the mixing function's identifiability 𝐠 is discussed in Theorem <ref> in the subsequent section.
§.§.§ Identifiability of C under optimal g
[Figure: Illustration of 𝒞̂ incorrectly assigning two different domain subsets of inputs A and B to the same û. The black lines represent the ground-truth partition of 𝒞 and the orange line represents the incorrect domain partition for sets A and B.]
We first introduce a lemma for the case where we can access an optimal mixing function estimation 𝐠̂^*.
In addition to the assumptions in Theorem <ref>, assume that we can also access an optimal estimation of 𝐠, denoted by 𝐠̂^*, in which the estimated 𝐳̂_t is an invertible, component-wise transformation of a permuted version of 𝐳_t. Then the estimated clustering 𝒞̂ must match the ground truth up to label swapping.
In the first case, we deal with an optimal estimation 𝐠̂^*, in which the estimated 𝐳̂_t is an invertible, component-wise transformation of a permuted version of 𝐳_t, but an inaccurate estimated version of 𝒞̂. Consider the following example (Figure <ref>):
With slight abuse of notation, we use 𝒞(A) to represent the domain assigned by 𝒞 to all elements in A, and all elements in A have the same assignment. The same argument applies to B.
Then, for an estimated 𝒞̂, if it incorrectly assigns two subsets of input A and B to the same û (Figure <ref> orange circle), i.e.,
𝒞(A) = i ≠ j = 𝒞(B) but 𝒞̂(A) = Ĉ(B) = k.
Note that if the ground truth 𝒞 gives a consistent assignment for A and B but estimated 𝒞̂ gives diverse assignments, i.e.
𝒞̂(A) = i ≠ j = 𝒞̂(B) but 𝒞(A) = 𝒞(B) = k,
it is nothing but further splitting the ground truth assignment in a more fine-grained manner.
This scenario does not break the boundaries of the ground truth assignments. Consider two cases in the estimation process:
* If the number of allowed regimes or domains exceeds that of the ground truth, such more fine-grained assignment is allowed. The ground truth can then be easily recovered by merging domains that share identical Jacobian supports.
* If the number of regimes or domains matches the ground truth, it can be shown that the inconsistent scenario outlined in Eq. (<ref>) must occur.
Given that these considerations do not directly affect our approach, they are omitted from further discussion for brevity.
Then, considering the case in Eq. <ref>, the estimated transition must cover the functions from both A and B, so the learned transition 𝐦̂_k must have a Jacobian 𝐉_𝐦̂_k whose support matrix is ℳ_k = ℳ_i + ℳ_j, the binary addition of ℳ_i and ℳ_j: for every index at which either ℳ_i or ℳ_j is 1, the corresponding position in ℳ_k must be 1. Otherwise, suppose for example that the (a,b)-th entries of ℳ_i, ℳ_j, and ℳ_k are 1, 0, and 0. Then we can easily find an input region for the (a,b)-th location such that a small perturbation leads to changes in 𝐦_i but not in 𝐦_j nor 𝐦̂_k, which makes 𝐦̂_k unable to fit all of the transitions in A∪ B and causes a contradiction. See the three matrices in Figure <ref> for an illustrative example.
By Assumption <ref>, all those support matrices differ in at least one spot, which means the estimated version is not smaller than the ground truth:
| ℳ_k | ≥ | ℳ_j | and | ℳ_k | ≥ |ℳ_i |,
and both inequalities cannot hold with equality at the same time.
Then from Assumption (<ref>), the expected estimated transition complexity can be expressed as:
𝔼_𝒟 |ℳ_û| = ∫_𝒳×𝒳 p_𝒟(𝐱_t-1, 𝐱_t) · | ℳ_𝒞̂(𝐱_t-1, 𝐱_t) | d𝐱_t-1 d𝐱_t.
Similarly for ground truth one:
𝔼_𝒟 |ℳ_u| = ∫_𝒳×𝒳 p_𝒟(𝐱_t-1, 𝐱_t) · | ℳ_𝒞(𝐱_t-1, 𝐱_t) | d𝐱_t-1 d𝐱_t.
Let us focus on the integral of the region A∪ B, the subset of 𝒳×𝒳 mentioned above.
If for some area p_𝒟(𝐱_t-1, 𝐱_t) = 0, then the clustering under this area is ill defined since there is no support from data. Hence we only need to deal with supported area.
For areas where p_𝒟(𝐱_t-1, 𝐱_t) > 0, since from Eq. (<ref>) both inequalities cannot hold with equality simultaneously, the estimated version of the integral is strictly larger than the ground-truth version for any inconsistent clustering as indicated in Eq. (<ref>).
For the remaining regions in 𝒳×𝒳, any incorrect cluster assignment will further increase ℳ for the same reason as discussed above, so the estimated complexity is strictly larger than the ground-truth complexity:
𝔼_𝒟 | ℳ_û | > 𝔼_𝒟 | ℳ_u |.
But assumption (<ref>) requires that the estimated complexity be less than or equal to the ground truth. Contradiction! Hence, the estimated 𝒞̂ must match the ground truth up to label swapping.
§.§.§ Identifiability of C under arbitrary g
Now we can leverage the conclusion in Lemma <ref> to show the identifiability of domain variables under arbitrary mixing function estimation.
Suppose that the data 𝒟 are generated from the nonstationary data generation process as described in Eqs. (<ref>) and (<ref>). Suppose the transitions are weakly diverse lossy (Def. <ref>) and the following assumptions hold:
* (Mechanism Separability) There exists a ground truth mapping 𝒞: 𝒳×𝒳→𝒰 determined the real domain indices, i.e., u_t = 𝒞(𝐱_t-1, 𝐱_t).
* (Mechanism Sparsity) The estimated transition complexity on dataset 𝒟 is less than or equal to ground truth transition complexity, i.e., 𝔼_𝒟 | ℳ_û | ≤𝔼_𝒟 | ℳ_u |.
* (Mechanism Variability) Mechanisms are sufficiently different. For all u≠ u', ℳ_u≠ℳ_u' i.e. there exists index (i,j) such that [ℳ_u]_i,j≠[ℳ_u']_i,j.
Then the domain variables u_t is identifiable up to label swapping (Def. <ref>).
To demonstrate the complete identifiability of 𝒞, independent of the estimation quality of 𝐠, we must show that for any arbitrary estimation 𝒞̂≠σ(𝒞), the induced ℳ̂_û for inaccurate estimation 𝐠̂ has a transition complexity at least as high as in the optimal 𝐠̂^* case. If this holds, from Lemma <ref>, we can conclude that the transition complexity of optimal 𝒞̂^* = σ(𝒞) and optimal 𝐠̂^* is strictly smaller than any non-optimal 𝒞̂ and any 𝐠̂.
Suppose the estimated decoder and corresponding latent variables are 𝐠̂ and 𝐳̂_t, respectively, then the following relation holds:
𝐠̂^*(𝐳_t) = 𝐠̂(𝐳̂_t).
Since 𝐠̂ is invertible, by composing 𝐠̂^-1 on both sides, we obtain:
𝐠̂^-1∘𝐠̂^*(𝐳_t) = 𝐠̂^-1∘𝐠̂(𝐳̂_t).
Let
𝐡𝐠̂^-1∘𝐠̂^*,
we then have:
𝐡(𝐳̂^*_t) = 𝐳̂_t.
We aim to demonstrate that under this transformation, if 𝐡 is not a permutation and component-wise transformation, the introduced transition complexity among estimated 𝐳̂ will not be smaller than the optimal 𝐠̂^*.
Suppose |ℳ| < |ℳ^*|, then for any permutation σ mapping the indices of the dimensions from ℳ to ℳ^*, there must exist an index pair (i,j) such that ℳ_i,j = 0 and ℳ^*_σ(i),σ(j) = 1.
An intuitive explanation for this proposition involves the construction of a directed graph G_ℳ^* = (V_ℳ^*, E_ℳ^*), where V_ℳ^* = {1, 2, …, n} and E_ℳ^* = {(i,j) |ℳ^*_i,j = 1}. A similar construction can be made for G_ℳ. It is straightforward that |ℳ^*| = |E_ℳ^*|, which represents the number of edges. Consequently, |ℳ| < |ℳ^*| implies that G_ℳ^* has more edges than G_ℳ. Since there is no pre-defined ordering information for the nodes in these two graphs, if we wish to compare their edges, we need to first establish a mapping. However, if |E_ℳ| < |E_ℳ^*|, no matter how the mapping σ is constructed, there must be an index pair (i,j) such that (i,j) ∉ E_ℳ but (σ(i),σ(j)) ∈ E_ℳ^*. Otherwise, if such an index pair does not exist, the total number of edges in G_ℳ would necessarily be greater than or equal to that in G_ℳ^*, contradicting the premise that |ℳ| < |ℳ^*|.
Suppose transitions are weakly diverse lossy as defined in Def. <ref> and an invertible transformation 𝐡 maps the optimal estimation 𝐳̂^*_t to the estimated 𝐳̂_t, and it is neither a permutation nor a component-wise transformation. Then, the transition complexity on the estimated 𝐳̂_t is not lower than that on the optimal 𝐳̂^*_t, i.e.,
|ℳ| ≥ |ℳ^*|.
The entire proof is based on contradiction. In Figure <ref>, we provide an illustrative example. Note that the mapping from the ground truth 𝐳_t to the optimal estimation 𝐳̂^*_t is a permutation and element-wise transformation; it does not include mixing, and hence e_i exists if and only if ê^*_i exists. Therefore, |ℳ^*| = |ℳ|. The core of the proof is to demonstrate that the transition complexity of the estimated support matrix cannot be strictly smaller than this quantity.
Suppose the transitions are weakly diverse lossy as defined in Def. <ref>, then for each edge z_t,i→ z_t+1,j in the transition graph, there must be a region of z_t,i such that only z_t+1,j is influenced by z_t,i. Consequently, the corresponding ẑ_t+1,j and ẑ_t+1,i are not independent, since no mixing process can cancel the influence of z_t,i. Therefore, the edge ẑ_t,i→ẑ_t+1,j in the estimated graph must exist.
Note that without the weakly diverse lossy transition assumption, this argument may not hold. For example, if ẑ_t+1,j can be expressed as a function that does not depend on z_t,i, then even though the edge z_t,i→ z_t+1,j exists, the estimated edge ẑ_t,i→ẑ_t+1,j may not exist. This could occur if, after the transformation 𝐡, the influences in different paths from z_t,i to ẑ_t+1,j cancel out with each other.
Necessity Example
An example that violates the assumption is as follows:
z_t+1,i = z_t,i + ϵ_t+1,i
z_t+1,j = z_t,i + ϵ_t+1,j
ẑ_i = z_i
ẑ_j = z_i - z_j
Here, the mapping from 𝐳 to 𝐳̂ is invertible. Writing down the mapping from 𝐳̂_t to 𝐳̂_t+1, particularly for ẑ_t+1, j, yields:
ẑ_t+1, j = (z_t,i + ϵ_t+1,i) - (z_t,i + ϵ_t+1,j)
= ϵ_t+1,i - ϵ_t+1,j
Clearly, this is independent of ẑ_t,i. Hence, in this scenario, the edge on the estimated graph does not exist. This explains the necessity for the weakly diverse lossy transition assumption. Furthermore, it can be seen that violating the weakly diverse lossy transition assumption would require a very specific design, such as the transition in an additive noise case and the transition on z being linear, which is usually not the case in real-world scenarios. Generally, this requires that the influences from different paths from z_t,i to ẑ_t+1,j cancel each other out, a condition that is very challenging to fulfill in practical settings.
Permutation Indexing
One may also ask about the permutation of the index between 𝐳_t and 𝐳̂_t. Since the transformation 𝐡 is invertible, the determinant of the Jacobian should be nonzero, implying the existence of a permutation σ such that
(i,σ(i)) ∈supp(𝐉_𝐡), ∀ i ∈ [n].
Otherwise, if there exists an i such that [𝐉_𝐡]_i,· = 0 or [𝐉_𝐡]_·,i = 0, such a transformation cannot be invertible. We can utilize this permutation σ to pair the dimensions in 𝐳_t and 𝐳̂_t.
Since each ground-truth edge is preserved in the estimated graph, by Proposition <ref>, the inequality |ℳ| < |ℳ^*| cannot hold true. Thus, the lemma is proved.
Then, according to this lemma, the transition complexity |ℳ_û| of the learned 𝐦̂_û should be greater than or equal to |ℳ^*_û|, which is the complexity when using an accurate estimation of 𝐠̂^*. This relationship can be expressed as follows:
| ℳ_û | ≥ |ℳ^*_û| .
By Lemma <ref>, the expected complexity of the estimated model 𝔼_𝒟 |ℳ̂^*_û | is strictly larger than that of the ground truth 𝔼_𝒟 | ℳ_u |. This implies the following inequality:
𝔼_𝒟 | ℳ_û | ≥𝔼_𝒟 |ℳ^*_û| > 𝔼_𝒟 | ℳ_u | .
However, Assumption (<ref>) requires that the estimated complexity must be less than or equal to the ground-truth complexity, leading to a contradiction. This contradiction implies that the estimated 𝒞̂ must match the ground truth up to label swapping. Consequently, this supports the conclusion of Theorem <ref>.
§.§ Proof of Corollary <ref>
Suppose the data 𝒟 is generated from the nonstationary data generation process described in (<ref>) and (<ref>). Assume the transitions are weakly diverse lossy (Def. <ref>), and the mechanism separability assumption <ref> along with the following assumptions hold:
* (Mechanism Function Variability) Mechanism Functions are sufficiently different. There exists K ∈ℕ such that for all u≠ u', there exists k ≤ K, ℳ_u^k ≠ℳ_u'^k i.e. there exists index (i,j) such that [ℳ_u^k]_i,j≠[ℳ_u'^k]_i,j.
* (Higher Order Mechanism Sparsity) The estimated transition complexity on dataset 𝒟 is no more than ground truth transition complexity,
𝔼_𝒟∑_k=1^K | ℳ_û^k | ≤𝔼_𝒟∑_k=1^K | ℳ_u^k |.
Then the domain variables u_t are identifiable up to label swapping (Def. <ref>).
With a strategy similar to the proof of Theorem <ref>, we aim to demonstrate that using an incorrect cluster assignment 𝒞̂ will result in ∑_k=1^K |ℳ^k_û| being higher than the ground truth, thereby still enforcing the correct u_t.
Differing slightly from the approach in Theorem <ref>, in this setting we will first demonstrate that under any arbitrary 𝒞̂ assignment, the estimated complexity is no lower than the complexity of the ground truth, i.e., ∑_k=1^K|ℳ̂^k| ≥∑_k=1^K|ℳ^k|.
First, we address the scenario where two different domains have the same transition graph but with different functions, as otherwise, the previous lemma <ref> still applies. In cases where the same transition causal graph exists but the functions differ, assumption <ref> indicates that there exists an integer k such that ℳ^k_u≠ℳ^k_u', meaning the ground truth support matrices are different. However, due to incorrect clustering, the learned transition must cover both cases. To substantiate this claim, we need to first introduce an extension of the non-decreasing complexity lemma.
Suppose there exists an invertible transformation 𝐡 which maps the ground truth 𝐳_t to the estimated 𝐳̂_t, and it is neither a permutation nor a component-wise transformation. Then, the transition complexity on the estimated 𝐳̂_t is not lower than that on the ground truth 𝐳_t, i.e.,
∑_k=1^K|ℳ̂^k| ≥∑_k=1^K|ℳ^k|.
We can extend the notation of the edges e to the higher-order case e^k to represent the existence of a non-zero value for the k-th order partial derivative ∂^k m_i/∂ z_j^k. Under the weakly diverse lossy transition assumption, it is always possible to find a region where the influence in e^k cannot be canceled in ê^k. In this region, the mapping from z_i to ẑ_i can be treated as a component-wise transformation, since the influence of 𝐳 other than z_i is zero due to the lossy transition assumption. It is important to note that there is also an indexing permutation issue between z_i and ẑ_i; the same argument in the permutation indexing part of the proof of lemma <ref> applies.
Since ℳ^k represents the support of the k-th order partial derivative, this implies that [ℳ^k]_i,j =1 implies [ℳ^k']_i,j =1 for all k' ≤ k. We aim to show that if for the transition behind edge z_t,j→ z_t+1,i, there exists a K such that [ℳ^k_u]_i,j are different for two domains, then one of them must be a polynomial with order K-1. For this domain, [ℳ^k_u]_i,j = 1 when k = 1, 2, …, K-1 and [ℳ^k_u]_i,j = 0 when k ≥ K.
To demonstrate that the non-decreasing complexity holds, we need to show that after an invertible transformation h to obtain the estimated version, [ℳ^k_u]_i,j cannot be zero for k < K-1, which can be shown with the following proposition.
Suppose f is a polynomial of order k with respect to x. Then, for any invertible smooth function h, the transformed function f̂ h^-1∘ f ∘ h cannot be expressed by a polynomial of order k', when k' < k.
Let's prove it by contradiction. Suppose f̂ h^-1∘ f ∘ h can be expressed as a polynomial of order k' < k. It follows that the function f̂(x) = C has k' roots (repeated roots are allowed), since h is invertible. Therefore, h ∘f̂(x) = h(C) also has the same number of k' roots. By definition, h ∘f̂ = f ∘ h, which means f ∘ h = h(C) has k' roots. However, since h is invertible, or equivalently it is monotonic, the equation f ∘ h = h(C) having k' roots implies that f(x) = C' has roots k'. Yet, since f is a polynomial of order k, it must have k roots, contradicting the fundamental theorem of algebra, which means that they cannot have the same number of roots. Hence, the proposition holds.
The advantage of support matrix analysis is that, provided there exists at least one region where the support matrix is non-zero, the global version on the entire space will also be non-zero. Based on the definition of diverse lossy transition in Def. <ref>, it is always possible to identify such a region where for an edge z_t,i→ z_t+1,j, the mapping from z_t+1,j to z_t+1,j can be treated as a component-wise relationship. This is because no other variables besides z_t+1,j change in conjunction with z_t,i to cancel the effect. Therefore, proposition <ref> applies, and as a result, the complexity is nondecreasing. Thus, the lemma is proved.
With this lemma, we have shown that for an arbitrary incorrect domain partition result, the induced ground-truth transition complexity is preserved after the invertible transformation 𝐡. This partition effectively combines two regions, as illustrated in Figures <ref> and <ref>. Consequently, the transition complexity has the following relationship:
ℳ^k_û = ℳ^k_u + ℳ^k_u'.
Here, u and u' represent the ground-truth values of the domain variables, and û denotes the estimated version, defined as the binary addition of the two ground truths.
By Assumption <ref>, the two ground-truth support matrices ℳ^k_u and ℳ^k_u' are different; then, with the same arguments as in the proof of Lemma <ref>, we can show that the expected transition complexity with a wrong domain assignment over the whole dataset is strictly larger than the ground-truth complexity with the correct domain assignment. It is also easy to see that when the estimated latent variables equal the ground truth, 𝐳̂_t = 𝐳_t, the lower bound is achieved when the estimated domains are accurate. Note that this argument is not sufficient to conclude that the estimated 𝐳̂_t is exactly the ground truth 𝐳_t or an optimal estimation of it, since there can be other forms of mapping from 𝐳_t to 𝐳̂_t that generate the same complexity. But it is sufficient to prove that, by pushing the complexity to be small, the domain variables u_t must be recovered up to label swapping. This concludes the proof.
§.§ Proof of Theorem <ref>
Suppose that the data 𝒟 is generated from the nonstationary data generation process (<ref>), (<ref>), which satisfies the conditions in Theorem <ref> and Lemma <ref>, then the domain variables u_t are identifiable up to label swapping (Def. <ref>) and latent variables 𝐳_t are identifiable up to permutation and a component-wise transformation (Def. <ref>).
From Theorem <ref>, the domain variables u_t are identifiable up to label swapping, and then use the estimated domain variables in Lemma <ref>, the latent causal processes are also identifiable, that is, 𝐳_t are identifiable up to permutation and a component-wise transformation.
§.§ Discussion on Assumptions
§.§.§ Mechanism Separability
Note that we assume that there exists a ground truth mapping 𝒞: 𝒳×𝒳→𝒰, gives a domain index based on 𝐱_t-1, 𝐱_t. The existence of such mapping means that the human can tell what the domain is based on two consecutive observations. If two observations are not sufficient, then it can be modified to have more observation steps as input, for example 𝐱_≤ t or even full sequence 𝐱_[1:T]. If the input has future observation, which means that 𝐱_>t is included, then this is only valid for sequence understanding tasks in which the entire sequence will be visible to the model when analyzing the time step t. For prediction tasks or generation tasks, further assumptions on 𝒞 such as the input only contains 𝐱_<t should be made, which will be another story. Those variants are based on specific application scenarios and not directly affect our theory, for brevity, let us assume the two-step case.
§.§.§ Mechanism Sparsity
This is a rather intuitive assumption in which we introduce some form of sparsity in the transitions, and our task is to ensure that the estimated transition maintains this sparsity pattern. This requirement is enforced by asserting an equal or lower transition complexity as defined in Assumption <ref>. Similar approaches, grounded in the same intuition, are also explored in the reinforcement learning setting, as discussed in works by Lachapelle et al. <cit.> and Hwang et al. <cit.>. The former emphasizes the identifiability result of the independent components, which necessitates additional assumptions. In contrast, the latter focuses on the RL scenario, requiring the direct observation of the latent variables involved in the dynamics, which leaves significant challenges in real-world sequence-understanding tasks, where the states are latent. And it is also extensively discussed in the nonlinear ICA literature <cit.>, in which such a sparsity constraint was added to the mixing function.
§.§.§ Mechanism Variability
The assumption of mechanism variability requires that causal dynamics differ between domains, which requires at least one discrete edge variation within the causal transition graphs. This assumption is typically considered reasonable in practical contexts; humans identify distinctions between domains only when the differences are substantial, which often involves the introduction of a new mechanism or the elimination of an existing one. Specifically, this assumption requires a minimal alteration, a single edge change in the causal graph, to be considered satisfied. Consequently, as long as there are significant differences in the causal dynamics among domains, this criterion is fulfilled.
§.§.§ Mechanism Function Variability
[Figure: Correct domain separation 𝒞 and incorrect domain separation 𝒞̂ of 𝐳_t+1, given a fixed 𝐳_t.]
In this section, we will further discuss the mechanism function variability introduced in Corollary <ref>. One might question the necessity of this assumption. To illustrate this issue, we claim that if we only assume that the mechanism functions differ across domains but without this extended version of the variability assumption, i.e., for u ≠ u', 𝐦_u ≠𝐦_u', then under this proposed framework, the domain variables u_t are generally unidentifiable.
In Figure <ref>, we present a simple example of the space of 𝐳_t+1 given a fixed 𝐳_t. For the sake of brevity, assume that there are two domains. By the mechanism separability assumption <ref>, the space 𝒵 of 𝐳_t+1 is divided into two distinct parts, each corresponding to one domain. In this illustration, 𝒞 denotes the partition created by the ground truth transition function:
𝐳_t+1 = 𝐦_1(𝐳_t, ϵ) if u_t+1 = 1, and 𝐳_t+1 = 𝐦_2(𝐳_t, ϵ) if u_t+1 = 2.
Then the question arises: when the domain assignment is incorrect, that is, 𝒞̂≠𝒞, can we still get the same observational distributions, or equivalently, can we obtain the same distribution for 𝐳_t+1?
The answer is yes. For the ground truth transition, 𝐦_1(𝐳_t, ϵ) ∈ A ∪ C and 𝐦_2(𝐳_t, ϵ) ∈ B ∪ D. In the case of an incorrect partition 𝒞̂, it is sufficient to have 𝐦̂_1(𝐳_t, ϵ) ∈ A ∪ B and 𝐦̂_2(𝐳_t, ϵ) ∈ C ∪ D. Ensuring that the conditional distribution p(𝐳_t+1|𝐳_t) is matched everywhere, we can create two different partitions on domains, yet still obtain exactly the same observations. That makes the domain variables u_t unidentifiable in the general case.
How does the previous mechanism variability assumption work?
In the assumption of mechanism variability (Assumption <ref>), the support matrices of the Jacobian of transitions across different domains differ. Consider a scenario where the ground truth partition is 𝒞, denoted by A, C | B, D. If an incorrect estimation occurs, where our estimated partition is 𝒞̂, represented as A, B | C, D, then the estimated transition in domain one must cover the transitions in both A and B, and similarly for the second domain. This leads to an increase in complexity within the estimated Jacobian support matrix, as discussed in the previous sections. Consequently, minimizing the transition complexity forces the sets B and C to be empty, so that 𝒞̂ converges to 𝒞.
How about mechanism function variability?
Roughly speaking, and as demonstrated in our experiments, the mechanism variability assumption previously discussed is already sufficient to identify domain changes in both synthetic and real-world settings. This sufficiency arises because the assumption only requires a single differing spot, even though some transition functions behind some edges may persist across different domains. As long as there is one edge spot that can separate the two domains, this condition is met. In the relatively rare case where all edges in the causal dynamic transition graphs are identical across two different domains and only the underlying functions differ, we can still demonstrate identifiability in this scenario by examining differences in the support of the higher-order partial derivative matrices.
§.§.§ Weakly Diverse Lossy Transition
The weakly diverse lossy transition assumption requires that each variable in the latent space can potentially influence a set of subsequent latent variables, and that such transformations are typically non-invertible. This implies that given the value of 𝐳_t+1, it is generally challenging to precisely recover the previous 𝐳_t; equivalently, this mapping is not injective. Although this assumption requires some explanation, it is actually mild in practice. In real-world scenarios, different current states often lead to identical future states, indicating a loss of information. The "weakly diverse" part of this assumption means that the way information is lost varies between different dimensions, yet there is some common part among them, hence the term "weakly diverse". In the visualization example shown in Figure <ref>, we can clearly see this pattern: the scene is relatively simple, and it is very likely that in two different frames the configuration of the scene, i.e., the value of the latent variables, is the same even though the previous states are completely different.
§ EXPERIMENT SETTINGS
§.§ Synthetic Dataset Generation
Table: Synthetic Dataset Statistics
Number of states: 5
Dimension of 𝐳_t: 8
Dimension of 𝐱_t: 8
Number of samples: 32,000
Sequence length: 15
The synthetic dataset is constructed in accordance with the conditions outlined in Theorems <ref> and <ref>. Transition and mixing functions are synthesized using multilayer perceptrons (MLPs) initialized with random weights. The mixing functions incorporate the LeakyReLU activation function to ensure invertibility. The dataset features five distinct values for the domain variables, with both the hidden variables 𝐳_t and the observed variables 𝐱_t set to eight dimensions. A total of 1,000 sequences of domain variables were generated. These sequences exhibit high nonstationarity across domains, which cannot be captured with a single Markov chain. This was achieved by first generating two sequences of domain indices from two distinct Markov chains. These sequences were then concatenated, together with another sequence sampled from a discrete uniform distribution over the set {1, 2, 3, 4, 5}, to form the domain indices.
For each sequence of domain variables, we sampled a batch size of 32 sequences of hidden variables 𝐳_t beginning from a randomly initialized initial state 𝐳_0. These sequences were generated using the randomly initialized multilayer perceptron (MLP) to model the transitions. Observations 𝐱_t were subsequently generated from 𝐳_t using the mixing function as specified in Eq. <ref>. Both the transition functions in the hidden space and the mixing functions were shared across the entire dataset. A summary of the statistics for this synthetic dataset is provided in Table <ref>. For detailed implementation of this data generation process, please refer to our accompanying code in Sec. <ref>.
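To make the generation process concrete, below is a minimal numpy sketch of a generator of this kind. It is not the code used for the experiments: the MLP widths, the additive noise injection, and the exact way the Markov-chain and uniform segments are combined are simplifying assumptions; only the dataset sizes (five domains, 8-dimensional 𝐳_t and 𝐱_t, length-15 sequences) follow the table above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DOM, D_Z, T = 5, 8, 15          # five domains, 8-dim latents, length-15 sequences

def random_mlp(d_in, d_out, hidden=16):
    """A random-weight two-layer MLP with a LeakyReLU hidden layer."""
    W1 = rng.normal(size=(d_in, hidden))
    W2 = rng.normal(size=(hidden, d_out))
    def f(v):
        h = v @ W1
        return np.where(h > 0, h, 0.2 * h) @ W2
    return f

transitions = [random_mlp(D_Z, D_Z) for _ in range(N_DOM)]   # one transition per domain
mixing = random_mlp(D_Z, D_Z)                                # observation map x_t = g(z_t)

def sample_domain_sequence(length):
    """Nonstationary domain indices: a random Markov-chain segment followed by uniform draws."""
    P = rng.dirichlet(np.ones(N_DOM), size=N_DOM)            # random row-stochastic matrix
    u = [int(rng.integers(N_DOM))]
    for _ in range(length // 2 - 1):
        u.append(int(rng.choice(N_DOM, p=P[u[-1]])))
    u += [int(v) for v in rng.integers(N_DOM, size=length - len(u))]
    return np.array(u)

def sample_sequence(length=T):
    u = sample_domain_sequence(length)
    z = rng.normal(size=D_Z)
    zs, xs = [], []
    for t in range(length):
        z = transitions[u[t]](z) + 0.1 * rng.normal(size=D_Z)  # additive noise for simplicity
        zs.append(z)
        xs.append(mixing(z))
    return u, np.stack(zs), np.stack(xs)

u, z_seq, x_seq = sample_sequence()
```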
§.§ Real-world Dataset
Hollywood Extended <cit.>
The Hollywood dataset contains 937 video clips with a total of 787,720 frames containing sequences of 16 different daily actions such as walking or sitting from 69 Hollywood movies. On average, each video comprises 5.9 segments, and 60.9% of the frames are background.
CrossTask <cit.>
The CrossTask dataset features videos from 18 primary tasks. According to <cit.>, we use the selected 14 cooking-related tasks, including 2552 videos with 80 action categories. On average, each video in this subset has 14.4 segments, with 74.8% of the frames classified as background.
§.§ Mean Correlation Coefficient
MCC, a standard metric in the ICA literature, is utilized to evaluate the recovery of latent factors. This method initially computes the absolute values of the correlation coefficients between each ground truth factor and every estimated latent variable. Depending on the presence of component-wise invertible nonlinearities in the recovered factors, either Pearson’s correlation coefficients or Spearman’s rank correlation coefficients are employed. The optimal permutation of the factors is determined by solving a linear sum assignment problem on the resultant correlation matrix, which is executed in polynomial time.
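A minimal implementation of this metric is sketched below; the function name and array layout are our own, but the procedure (absolute correlations, optional Spearman ranks, and a linear-sum-assignment over the resulting correlation matrix) follows the description above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

def mean_correlation_coefficient(z_true, z_est, rank_based=True):
    """MCC between ground-truth and estimated factors; inputs have shape (samples, factors)."""
    d = z_true.shape[1]
    corr = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if rank_based:   # Spearman: robust to component-wise monotone nonlinearities
                corr[i, j] = abs(spearmanr(z_true[:, i], z_est[:, j])[0])
            else:            # Pearson
                corr[i, j] = abs(np.corrcoef(z_true[:, i], z_est[:, j])[0, 1])
    row, col = linear_sum_assignment(-corr)   # best permutation (maximise total correlation)
    return corr[row, col].mean()
```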
§ IMPLEMENTATION DETAILS
§.§ Prior Likelihood Derivation
Let us start with an illustrative example of stationary latent causal processes consisting of two time-delayed latent variables, 𝐳_t = [z_1,t, z_2,t] with z_i,t = m_i(𝐳_t-1, ϵ_i,t) and mutually independent noises, where we omit u_t since it is just an index selecting the transition function m_i. Let us write this latent process as a transformation map 𝐦 (note that we overload the notation m for the transition functions and for the transformation map):
(z_1,t-1, z_2,t-1, z_1,t, z_2,t)^⊤ = 𝐦( (z_1,t-1, z_2,t-1, ϵ_1,t, ϵ_2,t)^⊤ ).
By applying the change of variables formula to the map 𝐦, we can evaluate the joint distribution of the latent variables p(z_1,t-1, z_2,t-1, z_1,t,z_2,t) as:
p(z_1,t-1, z_2,t-1, z_1,t,z_2,t) = p(z_1,t-1, z_2,t-1, ϵ_1,t,ϵ_2,t) / |𝐉_𝐦|,
where 𝐉_𝐦 is the Jacobian matrix of the map 𝐦, which is naturally a lower-triangular matrix:
𝐉_𝐦 = [ 1, 0, 0, 0;
0, 1, 0, 0;
∂ z_1,t/∂ z_1,t-1, ∂ z_1,t/∂ z_2,t-1, ∂ z_1,t/∂ϵ_1,t, 0;
∂ z_2,t/∂ z_1,t-1, ∂ z_2,t/∂ z_2,t-1, 0, ∂ z_2,t/∂ϵ_2,t ].
Given that this Jacobian is triangular, we can efficiently compute its determinant as ∏_i ∂ z_i,t/∂ϵ_i,t. Furthermore, because the noise terms are mutually independent, and hence ϵ_i,t⊥ϵ_j,t for j ≠ i and ϵ_t ⊥𝐳_t-1, we can write the RHS of Eq. <ref> as:
p(z_1,t-1, z_2,t-1, z_1,t,z_2,t) = p(z_1,t-1, z_2,t-1) × p(ϵ_1,t,ϵ_2,t) / |𝐉_𝐦| (because ϵ_t ⊥𝐳_t-1)
= p(z_1,t-1, z_2,t-1) ×∏_i p(ϵ_i,t) / |𝐉_𝐦| (because ϵ_1,t⊥ϵ_2,t)
Finally, by canceling out the marginals of the lagged latent variables p(z_1,t-1, z_2,t-1) on both sides, we can evaluate the transition prior likelihood as:
p( z_1,t,z_2,t| z_1,t-1, z_2,t-1) = ∏_i p(ϵ_i,t) / |𝐉_𝐦| = ∏_i p(ϵ_i,t) ×|𝐉_𝐦^-1|.
Now we generalize this example and derive the prior likelihood below.
Let {m̂^-1_i}_i=1,2,3... be a set of learned inverse transition functions that take the estimated latent causal variables, and output the noise terms, i.e., ϵ̂_i,t = m̂^-1_i(u_t, ẑ_i,t, 𝐳̂_t-1).
Design a transformation 𝐀→𝐁 with lower-triangular Jacobian as follows:
𝐀 = [ 𝐳̂_t-1, 𝐳̂_t ]^⊤ is mapped to 𝐁 = [ 𝐳̂_t-1, ϵ̂_t ]^⊤,
with 𝐉_𝐀→𝐁 = [ 𝕀_n, 0; ∂ϵ̂_t/∂𝐳̂_t-1, diag(∂m̂^-1_i/∂ẑ_i,t) ].
Similar to Eq. <ref>, we can obtain the joint distribution of the estimated dynamics subspace as:
log p(𝐀) = log p(𝐳̂_t-1) + ∑_i=1^n log p(ϵ̂_i,t) + log |det(𝐉_𝐀→𝐁)|,
where the sum over i uses the mutual independence of the noise terms. Cancelling the marginal of the lagged latent variables on both sides yields
log p(𝐳̂_t |𝐳̂_t-1, u_t) = ∑_i=1^n log p(ϵ̂_i,t| u_t) + ∑_i=1^n log| ∂m̂^-1_i/∂ẑ_i,t|.
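As a concrete (hypothetical) instance of this formula, consider a linear-Gaussian transition z_t = A_u z_{t-1} + s_u ⊙ ε with ε ∼ 𝒩(0, I). Its inverse is available in closed form, and the prior likelihood above reduces to the usual Gaussian transition density:

```python
import numpy as np

def log_std_normal(e):
    return -0.5 * (e ** 2 + np.log(2.0 * np.pi))

def transition_log_prob(z_t, z_prev, A_u, s_u):
    """log p(z_t | z_prev, u) = sum_i log p(eps_i) + sum_i log |d m^-1_i / d z_{i,t}|."""
    eps_hat = (z_t - A_u @ z_prev) / s_u       # inverse transition recovers the noise
    log_jac = -np.log(np.abs(s_u))             # d eps_i / d z_{i,t} = 1 / s_{u,i}
    return float(np.sum(log_std_normal(eps_hat) + log_jac))

A = np.array([[0.9, 0.1], [0.0, 0.8]])
s = np.array([0.5, 2.0])
z_prev, z_t = np.array([1.0, -1.0]), np.array([0.3, 0.7])
print(transition_log_prob(z_t, z_prev, A, s))
```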
§.§ Derivation of ELBO
Then the second part is to maximize the Evidence Lower BOund (ELBO) for the VAE framework, which can be written as:
ELBO ≜ log p_data({𝐱_t}_t=1^T) - D_KL( q_ϕ({𝐳_t}_t=1^T | {𝐱_t}_t=1^T) || p_data({𝐳_t}_t=1^T | {𝐱_t}_t=1^T) )
= 𝔼_𝐳_t[ log p_data({𝐱_t}_t=1^T | {𝐳_t}_t=1^T) ] - 𝔼_𝐳_t[ log q_ϕ({𝐳_t}_t=1^T | {𝐱_t}_t=1^T) - log p_data({𝐳_t}_t=1^T) ]
= 𝔼_𝐳_t[ log p_data({𝐱_t}_t=1^T | {𝐳_t}_t=1^T) + log p_data({𝐳_t}_t=1^T) - log q_ϕ({𝐳_t}_t=1^T | {𝐱_t}_t=1^T) ]
= 𝔼_𝐳_t[ ∑_t=1^T log p_data(𝐱_t | 𝐳_t) + ∑_t=1^T log p_data(𝐳_t | 𝐳_t-1, u_t) - ∑_t=1^T log q_ϕ(𝐳_t | 𝐱_t) ],
where the first sum is the negative reconstruction loss -ℒ_Recon and the remaining two sums together form the negative KL term -ℒ_KLD.
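In implementation terms, the three sums above are all that is needed; a schematic Python assembly of the negative ELBO for one sequence could look as follows, where each argument is a length-T array evaluated at a single posterior sample (a single-sample Monte Carlo estimate).

```python
def sequence_neg_elbo(recon_logprob, prior_logprob, posterior_logprob):
    """Single-sample Monte Carlo estimate of -ELBO for one sequence.

    recon_logprob[t]     = log p(x_t | z_t)           (reconstruction term, -L_Recon)
    prior_logprob[t]     = log p(z_t | z_{t-1}, u_t)  (transition prior from the previous section)
    posterior_logprob[t] = log q(z_t | x_t)           (encoder term, enters -L_KLD)
    """
    elbo = sum(recon_logprob) + sum(prior_logprob) - sum(posterior_logprob)
    return -elbo
```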
§.§ Reproducibility
All experiments are performed on a GPU server with 128 CPU cores, 1TB memory, and one NVIDIA L40 GPU. For synthetic experiments, we run the baseline methods with implementation from <https://github.com/weirayao/leap> and <https://github.com/xiangchensong/nctrl>.
For real-world experiments, the implementation is based on <https://github.com/isee-laboratory/cvpr24_atba>.
Our code is also available via <https://github.com/xiangchensong/ctrlns>.
§.§ Hyperparameter and Train Details
For the synthetic experiments, we trained the VAE network using the AdamW optimizer with a learning rate of 5 × 10^-4 and a mini-batch size of 64. Each experiment was conducted with three different random seeds, and we report the mean performance along with the standard deviation across these seeds. The coefficient of the L_2 penalty term was set to 1 × 10^-4, which yielded satisfactory performance in our experiments. We also tested an L_1 penalty and L_2 penalties with larger coefficients; the setting used in this paper provided the best stability and performance.
All other hyperparameters of the baseline methods follow the default values from their original implementations. For the real-world experiments, we follow the same hyperparameter settings as the baseline ATBA method. On the Hollywood dataset, we used the default 10-fold dataset split and report the mean and standard deviation over those 10 runs. For the CrossTask dataset, we report the mean and standard deviation over five different random seeds.
§ VISUALIZATION ON ACTION SEGMENTATION
We visualize some examples from the Hollywood dataset. As shown in Figure <ref>, our method estimates the actions more accurately than the baseline method.
§ EXTENDED RELATED WORK
§.§ Causal Discovery with Latent Variables
Various studies have focused on uncovering causally related latent variables. For example, <cit.> use vanishing Tetrad conditions <cit.> or rank constraints to detect latent variables in linear-Gaussian models, whereas <cit.> rely on non-Gaussianity in their analyses of linear, non-Gaussian models.
Additionally, some methods seek to identify structures beyond latent variables, leading to hierarchical structures.
Certain hierarchical model-based approaches assume tree-like configurations, as seen in <cit.>, while other methods consider a more general hierarchical structure <cit.>.
Nonetheless, these approaches are restricted to linear frameworks and encounter increasing difficulties with complex datasets, such as videos.
§.§ Causal Temporal Representation Learning
In the context of sequence or time series data, recent advances in nonlinear Independent Component Analysis (ICA) have leveraged temporal structures and nonstationarities to achieve identifiability. Time-contrastive learning (TCL) <cit.> exploits variability in variance across data segments under the assumption of independent sources. Permutation-based contrastive learning (PCL) <cit.> discriminates between true and permuted sources using contrastive loss, achieving identifiability under the uniformly dependent assumption. The i-VAE <cit.> uses Variational Autoencoders to approximate the joint distribution over observed and nonstationary regimes. Additionally, (i)-CITRIS <cit.> utilizes intervention target information to identify latent causal factors. Other approaches such as LEAP <cit.> and TDRL <cit.> leverage nonstationarities from noise and transitions to establish identifiability. CaRiNG <cit.> extended TDRL to handle non-invertible generation processes by assuming sequence-wise recoverability of the latent variables from observations.
All the aforementioned methods either assume stationary fixed temporal causal relations or that the domain variables controlling the nonstationary transitions are observed. To address unknown or unobserved domain variables, HMNLICA <cit.> integrates nonlinear ICA with a hidden Markov model to automatically model nonstationarity. However, this method does not account for the autoregressive latent transitions between latent variables over time. IDEA <cit.> combines HMNLICA and TDRL by categorizing the latent factors into domain-variant and domain-invariant groups. For the variant variables, IDEA adopts the same Markov chain model as HMNLICA, while for the invariant variables, it reduces the model to a stationary case handled by TDRL. Both iMSM <cit.> and NCTRL <cit.> extend this Markov structure approach by incorporating transitions in the latent space but continue to assume that the domain variables follow a Markov chain.
§.§ Weakly-supervised Action Segmentation
Weakly-supervised action segmentation techniques focus on dividing a video into distinct action segments using training videos annotated solely by transcripts <cit.>. Although these methods have varying optimization objectives, many employ pseudo-segmentation for training by aligning video sequences with transcripts through techniques like Connectionist Temporal Classification (CTC) <cit.>, Viterbi <cit.>, or Dynamic Time Warping (DTW) <cit.>. For instance, <cit.> extends CTC to consider visual similarities between frames while evaluating valid alignments between videos and transcripts. Drawing inspiration from speech recognition, <cit.> utilize the Hidden Markov Model (HMM) to link videos and actions. <cit.> initially produces uniform segmentations and iteratively refines boundaries by inserting repeated actions into the transcript. <cit.> introduces an alignment objective based on explicit context and length models, solvable via Viterbi, to generate pseudo labels for training a frame-wise classifier. Similarly, <cit.> and <cit.> propose novel learning objectives but still rely on Viterbi for optimal pseudo segmentation. Both <cit.> use DTW to align videos to both ground-truth and negative transcripts, emphasizing the contrast between them. However, except for <cit.>, these methods require frame-by-frame calculations, making them inefficient. More recently, alignment-free methods have been introduced to enhance efficiency. <cit.> learns from the mutual consistency between frame-wise classification and category/length pairs of a segmentation. <cit.> enforces the output order of actions to match the transcript order using a novel loss function. Although POC <cit.> is primarily set-supervised, it can be extended to transcript supervision, making its results relevant for comparison.
§ LIMITATIONS
As noted in Sec. <ref>, our main theorem relies on the condition that causal graphs among different domains must be distinct. Although our experiments indicate that this assumption is generally sufficient, there are scenarios in which it may not hold, meaning that the transition causal graphs are identical for two different domains, but the actual transition functions are different. We have addressed this partially through an extension to the mechanism variability assumption to higher-order cases (Corollary <ref>). However, dealing with situations where transition graphs remain the same across all higher orders remains a challenge. We acknowledge this as a limitation and suggest it as an area for future exploration.
We also observed that the random initialization of the nonlinear ICA framework can influence the total number of epochs needed to achieve identifiability, as illustrated in Figure <ref>. Regarding computational efficiency, the TDRL framework we adopt involves a prior network that processes each dimension of the latent space one by one, which makes training efficiency suboptimal. Since this is not directly related to our major claim, the sparse transition design, we acknowledge it as a limitation and leave it for future work.
§ BROADER IMPACTS
This work proposes a theoretical analysis and technical methods for learning causal representations from time-series data, which facilitates the construction of more transparent and interpretable models of causal effects in the real world.
This could be beneficial in a variety of sectors, including healthcare, finance, and technology.
On the other hand, misinterpretations of causal relationships could have significant negative implications in these fields, so such analyses must be carried out carefully to avoid unfair or biased predictions.
|
http://arxiv.org/abs/2409.02529v1 | 20240904084242 | Sample what you cant compress | [
"Vighnesh Birodkar",
"Gabriel Barcik",
"James Lyon",
"Sergey Ioffe",
"David Minnen",
"Joshua V. Dillon"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
§ ABSTRACT
For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. Arguably, these approaches lack a principled interpretation. Concurrently, in generative settings diffusion has demonstrated a remarkable ability to create crisp, high quality results and has solid theoretical underpinnings (from variational inference to direct study as the Fisher Divergence). Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate the efficacy of jointly learning a continuous encoder and decoder under a diffusion-based loss.
We demonstrate that this approach yields better reconstruction quality as compared to GAN-based
autoencoders while being easier to tune.
We also show that the resulting representation is easier to model
with a latent diffusion model as compared to the representation obtained from a state-of-the-art GAN-based loss.
Since our decoder is stochastic, it can generate details not encoded in the otherwise deterministic latent representation; we therefore name our approach “Sample what you can't compress”, or SWYCC for short.
[1]Primary contact.
[2]Work done while at Google.
§ INTRODUCTION
Image autoencoders ultimately necessitate a pixel-level loss to measure and minimize distortion. A common choice is to use mean squared error (MSE). This is a problem for image and video models because MSE favors low frequencies over high frequencies
<cit.>. Although generalized robust loss functions have been developed <cit.>, they are insufficient on their own for avoiding blurry reconstructions. A popular fix is to augment a pixel-level loss with additional penalties. Typically, MSE is still used because it is easy to optimize due to its linear gradient.
For example, <cit.> use a combination of MSE, perceptual
loss, and adversarial loss. <cit.> noted that an adversarial loss
helps them get high-quality images with realistic textures.
Unfortunately, GANs remain challenging to train, as most recently noted by <cit.>, who could not naively scale up their architecture.
The diversity of their outputs is also limited, because modern GAN-based decoders are deterministic and thus lack the capacity to sample multiple different possibilities.
[Figure: Reconstruction distortion (lower is better) as a function of compression for SWYCC and GAN-based autoencoders.]
[Figure: Class conditional generation quality (lower is better) as a function of compression for SWYCC and GAN-based autoencoders.]
As an alternative, this paper describes a technique for using a diffusion loss to learn an autoencoder. The diffusion loss is sensible because it is a proper scoring rule with favorable theoretical properties such as being formally connected to KL divergence.
It has proven itself capable of generating crisp results with high perceptual quality as reflected by human evaluation studies <cit.>.
To demonstrate its simplicity, we take a popular encoder architecture
<cit.> and marry it with a U-net decoder, a popular denoising
architecture for diffusion <cit.>.
With some additional details outlined in <ref>,
we show that this approach achieves lower distortion at all
compression levels as measured by the CMMD metric <cit.>.
Because our decoder is able to sample details at test-time that are not encoded in the latents, we call our approach “Sample what you can't compress”, or SWYCC for short.
This work will show that:
* SWYCC achieves lower reconstruction distortion at all tested compression levels vs SOTA GAN-based autoencoders (<ref>).
* SWYCC representations enable qualitatively better latent diffusion generation results at all tested compression levels vs SOTA GAN-based autoencoders (<ref>).
* Splitting the decoder into two parts improves training dynamics (<ref> and <ref>).
§ METHOD
Eliding its various parametrizations for brevity, the standard diffusion loss is characterized by the Monte Carlo approximation of the following loss,
ℓ(x) ≜ 𝔼_{ε∼𝒩(0, I_{h·w·3}), t∼𝒰[0,1]}[ w_t ‖x - D(α_t x + σ_t ε, t)‖_2^2 ].
Herein x ∈ ℝ^{h× w× 3} denotes a natural image and D is a neural network, fit using gradient descent, which serves to denoise the corrupted input x_t ≜ α_t x + σ_t ε at a given noise level t.
Let the corruption process be the cosine schedule <cit.>, with σ_t^2 ≜ 1 - α_t^2 and α_t^2 ≜ sigmoid(-2 log tan(at + b)), where a ≜ arctan(e^{-(1/2)(-20)}) - b and b ≜ arctan(e^{-(1/2)(20)}).
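For reference, a small numpy sketch of this schedule under our reading of the formulas (in particular, interpreting −2 log tan(at+b) as the log-SNR, so that α_t² is its logistic sigmoid) is:

```python
import numpy as np

LAMBDA_MIN, LAMBDA_MAX = -20.0, 20.0                      # log-SNR range implied by the constants above
b = np.arctan(np.exp(-0.5 * LAMBDA_MAX))
a = np.arctan(np.exp(-0.5 * LAMBDA_MIN)) - b

def alpha_sigma(t):
    """Return (alpha_t, sigma_t) for t in [0, 1] under the truncated cosine log-SNR schedule."""
    lam = -2.0 * np.log(np.tan(a * t + b))                 # log signal-to-noise ratio
    alpha_sq = 1.0 / (1.0 + np.exp(-lam))                  # sigmoid(lambda)
    return np.sqrt(alpha_sq), np.sqrt(1.0 - alpha_sq)
```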
We extend this definition to the task of autoencoding by simply allowing the denoising function to take an additional argument, 𝒢(E(x)), itself having access to the uncorrupted input x but only through the bottlenecking function E. The result is
ℓ_AE(x) ≜ 𝔼_{ε∼𝒩(0, I_{h·w·3}), t∼𝒰[0,1]}[ w_t ‖x - D(α_t x + σ_t ε, t, 𝒢(E(x)))‖_2^2 ].
As the notation suggests, E is an encoder which, notably, is learned jointly with the “diffusion decoder” D and a secondary decoder, written 𝒢 here, that maps latents back to ℝ^{h× w× 3}. The specification of 𝒢 is largely a convenience but also merits secondary advantages. By mapping z = E(x) back into x-space, we can simply concatenate the corrupted input x_t and its “pseudo reconstruction” 𝒢(z). Additionally, we find that directly penalizing 𝒢(z), as described below, speeds up training.
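A schematic PyTorch-style training objective corresponding to the equation above might look as follows. This is a sketch, not the authors' code: the signature of the denoiser (here it takes the channel-concatenated pair and t), the choice w_t = 1, and the inline schedule computation are our assumptions.

```python
import math
import torch
import torch.nn.functional as F

B_CONST = math.atan(math.exp(-10.0))
A_CONST = math.atan(math.exp(10.0)) - B_CONST

def diffusion_ae_loss(x, encoder, secondary_decoder, denoiser):
    """One Monte Carlo evaluation of the joint autoencoding diffusion loss (w_t = 1 assumed)."""
    t = torch.rand(x.shape[0], device=x.device)                   # t ~ U[0, 1]
    lam = -2.0 * torch.log(torch.tan(A_CONST * t + B_CONST))      # log-SNR (see schedule above)
    alpha = torch.sigmoid(lam).sqrt().view(-1, 1, 1, 1)
    sigma = (1.0 - torch.sigmoid(lam)).sqrt().view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = alpha * x + sigma * eps                                 # corrupted input
    x_pseudo = secondary_decoder(encoder(x))                      # "pseudo reconstruction" G(E(x))
    pred = denoiser(torch.cat([x_t, x_pseudo], dim=1), t)         # conditioning via concatenation
    return F.mse_loss(pred, x)
```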
§.§ Architectural Details
Encoder: We use a fully convolutional encoder in all of our
experiments, whose specifics we borrow from MaskGIT<cit.>.
The encoder consists of multiple ResNet<cit.> blocks stacked on top
of each other, with GeLU<cit.> for its non-linearities and GroupNorm<cit.> for training stability.
The ResNet blocks are interspersed with stride-2 convolutions, each of which achieves a 2× downsampling on its own.
To get the 8×8 patch size, we use 4 ResNet blocks with 3 downsampling
blocks.
The encoder architecture is common for all of our experiments,
and we only change the number of channels at the output layer
to achieve the desired compression ratio.
Decoder: For the decoder in the GAN baseline and for the secondary decoder in SWYCC, we use an architecture that is
the reverse of the encoder. For up-sampling,
we use the depth-to-space operation.
Just like the encoder, we have
4 ResNet blocks interspersed with
3 depth-to-space operations.
For the diffusion decoder we use a U-Net as defined
by <cit.>.
The U-Net has 4 ResNet blocks for downsampling
and corresponding 4 ResNet blocks for upsampling with residual connections
between blocks of the same resolution.
After 4 downsampling stages, when
resolution is 16×16, we
use a self-attention block to give
the network additional capacity.
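As a rough illustration of the encoder just described, the following PyTorch sketch stacks four ResNet-style blocks (GroupNorm + GELU) with three stride-2 convolutions, mapping 256×256×3 inputs to 32×32×C latents. The channel width, group count, and the stem/head convolutions are placeholders not specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block with GroupNorm and GELU, loosely following the description above."""
    def __init__(self, ch):
        super().__init__()
        self.norm1, self.norm2 = nn.GroupNorm(8, ch), nn.GroupNorm(8, ch)
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        h = self.conv1(F.gelu(self.norm1(x)))
        h = self.conv2(F.gelu(self.norm2(h)))
        return x + h

class ConvEncoder(nn.Module):
    """Four ResNet blocks interleaved with three stride-2 convs: 256x256x3 -> 32x32xC."""
    def __init__(self, ch=128, c_out=8):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.ModuleList([ResBlock(ch) for _ in range(4)])
        self.downs = nn.ModuleList([nn.Conv2d(ch, ch, 3, stride=2, padding=1) for _ in range(3)])
        self.head = nn.Conv2d(ch, c_out, 1)

    def forward(self, x):
        h = self.stem(x)
        for i, block in enumerate(self.blocks):
            h = block(h)
            if i < len(self.downs):
                h = self.downs[i](h)
        return self.head(h)

print(ConvEncoder()(torch.zeros(1, 3, 256, 256)).shape)   # torch.Size([1, 8, 32, 32])
```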
§.§ Speeding-Up Training
We find that additional direct penalization of 𝒢(E(x)) leads to faster training and improved CMMD and FID. This is achieved by minimizing a composite loss
containing terms with favorable Hessian (<ref>) and perceptual characteristics (<ref>),
ℓ_Total ≜ ℓ_AE + λ_p ℓ_Perceptual + λ_m ℓ_MSE,
where
ℓ_MSE ≜ ‖x - 𝒢(E(x))‖_2^2
and
ℓ_Perceptual ≜ ‖f_Frozen(x) - f_Frozen(𝒢(E(x)))‖_2^2.
The function f_Frozen is an unlearnable standard ResNet, itself trained on ImageNet and used for both the baseline and SWYCC. We found the best hyper-parameter setting for <ref> to be λ_m = 1 and λ_p = 0.1.
Setting λ_p > 0 was particularly important to be competitive at reconstruction with GAN based methods in Table <ref>.
The visual impact of perceptual loss is shown in Figure <ref>. We maintain
that the main advantage of using auxiliary losses is to speed-up training,
because given enough data and training time, our
diffusion decoder should converge to the true distribution.
For generating reconstructions (recall that the decoder is stochastic) we used classifier-free guidance during inference <cit.>; for the unconditional model, the U-Net was trained with 𝒢(E(x)) dropped out, i.e., randomly zeroed out on 10% of training instances.
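A sketch of this conditioning dropout, together with one common way of combining the conditional and unconditional predictions at inference time, is given below; the exact CFG parametrization used in the paper is not spelled out above, so the guided_prediction form is only an assumption.

```python
import torch

def drop_conditioning(x_pseudo, p_drop=0.1):
    """Zero the G(E(x)) conditioning for ~10% of training instances (classifier-free guidance)."""
    keep = (torch.rand(x_pseudo.shape[0], device=x_pseudo.device) > p_drop).float()
    return x_pseudo * keep.view(-1, 1, 1, 1)

def guided_prediction(pred_cond, pred_uncond, guidance_scale=0.5):
    """One common CFG combination of the two denoiser passes; the paper's exact form may differ."""
    return pred_uncond + (1.0 + guidance_scale) * (pred_cond - pred_uncond)
```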
§ EXPERIMENTS
In this section we explore how the GAN-based loss compares to our approach.
Without loss of generality, we define the relative compression
ratio of 1 to be a network that maps 8 × 8 RGB patches to an 8 dimensional
latent vector. Effectively, this means that for our encoder E, if x ∈ ℝ^{256 × 256 × 3}, then E(x) ∈ ℝ^{32 × 32 × C}, where C = 8.
In general, to achieve a relative compression ratio of
K we set C = 8/K. The effect of increasing
the compression ratio is plotted in Figure <ref>.
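For concreteness, a few lines of arithmetic for the latent shape implied by this definition (assuming the 256×256 input and 8×8 patches stated above):

```python
def latent_shape(rel_compression_ratio, image_hw=256, patch=8, base_channels=8):
    """Latent spatial size and channel count for a relative compression ratio K (C = 8 / K)."""
    side = image_hw // patch                         # 256 / 8 = 32
    channels = base_channels / rel_compression_ratio
    return side, side, channels

for k in (1, 2, 4):
    print(k, latent_shape(k))   # 1 (32, 32, 8.0)   2 (32, 32, 4.0)   4 (32, 32, 2.0)
```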
Observe that distortion degrades much more rapidly for the GAN-based autoencoder, as measured by CMMD <cit.>, which Imagen-3 <cit.> showed correlates better with human perception.
Not only is our approach better at all compression levels; the gap between the GAN based
autoencoder and SWYCC also widens as we increase the relative compression ratio.
Using the much simpler diffusion formulation under Equation <ref>,
we are able to reconstruct crisp looking images with detailed textures
(See Figure <ref>). Our method has the added benefit that
we do not need to tune any GAN related hyper-parameters, and can scale up effectively
using the large body of diffusion literature (<cit.>, <cit.>).
§.§ Impact of the secondary decoder 𝒢
Observing Equation <ref> and Figure <ref>, we note
that the output of the secondary decoder 𝒢 is an intermediate tensor that is not strictly
required for the diffusion loss or for generating the output. We
show using Table <ref> that this piece is crucial for
achieving performance comparable to the GAN based autoencoder.
The perceptual loss term in particular has a large impact in
reducing distortion. Visual examples are shown in Figure <ref>.
§.§ Analysis of sampling in SWYCC
In Figure <ref> we show the qualitative difference the number of
sampling steps makes to reconstructions. We can see that even with just
2 steps the high level structure of the image is present. In Figure <ref>
we study the impact of the number of sampling steps using the CMMD metric.
Figure <ref> shows what sampling in SWYCC actually ends up changing.
We can see that only regions with high-frequency components and detailed textures
are changed between samples, while regions containing similar colors over large
areas are left untouched.
Figure <ref> studies the effect of classifier-free guidance <cit.>
as used in SWYCC. We ablate the guidance with a model trained at a relative compression
factor of 4 (See Section <ref> for definition) and find
that a guidance scale of 0.5 works the best. This is not to be
confused with the guidance scale of the latent diffusion model that may
be trained on top of our autoencoder, which is a completely separate parameter
to be tuned independently.
§.§ Modeling latents for diffusion
We use a DiT model <cit.> to model the latent space of our models
for the task of class-conditional image generation.
In Figure <ref> we compare our latents with those of a GAN based autoencoder.
Crucially, note that despite using an identical encoder architecture,
our approach can improve FID <cit.> by over 12%.
We hypothesize that our latent space is easier to model
because our encoder does not need to capture high-frequency texture information, as the decoder can independently sample those details.
§.§ Exploring better perceptual losses
In Table <ref> we showed the large impact perceptual loss
has on reconstruction quality. This begs the question: are there
better auxiliary losses we can use?
We take the perceptual loss as described in VQGAN <cit.> and replace it with the DISTS loss <cit.>.
The DISTS loss differs from the perceptual loss in two important ways: a) it uses SSIM <cit.> instead of mean squared error, and b) it uses features at multiple levels instead of only the activations from the last layer.
The results are shown in Figure <ref>.
At lower relative compression ratios, the DISTS loss helps both GANs and SWYCC. But at higher relative compression ratios,
GAN based autoencoders perform even worse than the perceptual
loss based baseline. We think this is an avenue for future exploration.
We perform all other experiments with perceptual loss <cit.>,
since it is more prevalent in literature and it helps
GAN based auto-encoders at higher relative compression ratios.
§.§ Architecture and hyper-parameters
Autoencoder training hyper-parameters
We train all of our models on the ImageNet dataset resized at 256 × 256
resolution. During training, we resize the image such that the shorter side measures
256 pixels and take a random crop in that image of size 256 × 256. For measuring reference statistics on the validation set, we take the largest
possible center square crop. All of our models are trained at a batch size of 256 for
10^6 steps which roughly equals 200 epochs.
GAN baseline:
We use the popular convolutional encoder-decoder architecture popularized
by MaskGIT <cit.> in our GAN-based baselines. This autoencoder design is used by many image models, including FSQ <cit.> and GIVT <cit.>, and was extended to the video domain by MAGVIT-v2 <cit.>, VideoPoet <cit.>, and WALT <cit.>. We take advantage of decoder improvements developed by MAGVIT-v2 (see Section 3.2 in <cit.>) to improve reconstruction quality.
Note that while the video models enhance the base autoencoder architecture with 3D convolution to integrate information across time, the discriminator and perceptual loss are still applied on a per-frame basis and thus are essentially unchanged in our model.
SWYCC: In our experiments we keep the architecture of the secondary decoder 𝒢 identical to the decoder used in the GAN baseline. For the diffusion decoder we use the U-Net architecture
as parameterized by <cit.>. We borrow the U-Net 256 architecture
and make the following modifications:
We use the Adam optimizer to learn our parameters. The learning rate is warmed up
for 10^4 steps from 0 to a maximum value of 10^-4 and cosine decayed
to 0. We use gradient clipping with global norm set to 1.
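The warmup-plus-cosine schedule described here is easy to reproduce; a small sketch follows (step counts and peak value taken from the text, everything else generic):

```python
import math

def learning_rate(step, total_steps=1_000_000, warmup_steps=10_000, peak_lr=1e-4):
    """Linear warmup from 0 to peak_lr, then cosine decay back to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```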
Latent Diffusion:
We train for 2 × 10^6 steps with a batch size of 256.
We use a classifier-free guidance scale of 1 and dropout the class embedding
10% of the time during training.
All of our experiments are done with a DiT-XL architecture.
§ RELATED WORK
Autoencoders for 2-stage generation: For discrete representation learning, <cit.> showed the usefulness of the 2-stage modeling approach. In this broad category, the first stage fits an autoencoder to the training data with the goal of learning a compressed representation useful for reconstructing images. This is followed by a second stage where the encoder is frozen and a generative model is trained to predict the latent representation based on a conditioning signal. This approach regained popularity when <cit.> showed that it can be used for zero-shot
text generation, and is now the dominant approach for image and video generation <cit.>.
Adversarial losses: <cit.> extended the autoencoder from <cit.> with
two important new losses, the perceptual loss and the adversarial loss,
taking inspiration from the works of <cit.>
and <cit.>. The perceptual loss is usually defined as the L2 loss between a latent representation of the original and reconstructed image.
The latent representation, for example, can be extracted from
the final layer activations of a ResNet optimized to classify ImageNet images.
The adversarial loss employs a patch-based discriminator network that predicts, at the patch level, whether each patch is real or fake.
This encourages the decoder to produce realistic looking textures.
Latent Diffusion: <cit.> popularized text-to-image
generation using latent diffusion models. They kept the autoencoder
from <cit.> intact and simply removed the quantization layer.
This accelerated diffusion model research in the community owing
to the fact that the latent space was much smaller than the
pixel space, which allows fast training and inference compared
to diffusion models like Imagen that sample pixels directly <cit.>.
Compression and diffusion: <cit.>
showed that diffusion models can be used for compression. Crucially,
compared to our approach they use a frozen autoencoder, and do not
train their autoencoder end-to-end. They also use an objective based on modified flow matching. In contrast, we did not modify the loss or the sampling algorithm.
In a similar context, <cit.> developed an end-to-end optimized compression model using a diffusion decoder. They show improved perceptual quality compared to earlier GAN-based compression methods at the expense of higher distortion (pixel-level reconstruction accuracy). Different from our approach, they use a discrete latent space, which is required for state-of-the-art compression rates achieved via entropy coding. This limits the reconstruction quality but is required for a compression model that ultimately seeks to minimize a rate-distortion objective, not just a reconstruction and sampling quality objective.
§ CONCLUSION
We have described a general autoencoder framework that uses a diffusion based
decoder. Compared to decoders that use GANs, our system is much easier
to tune and has the same theoretical underpinnings as diffusion models.
We showed that our method produces significantly less distortion as compared
to GAN based autoencoders in Figure <ref> and are better
behaved as latent spaces for diffusion in Figure <ref>.
In Sections <ref> and <ref> we studied the hyper-parameter settings of the two major components of our decoder, the secondary decoder 𝒢 and the diffusion decoder.
Possible extensions: The autoencoder technique we describe is fairly
general and can be extended to any other continuous modality like audio, video or
3D-point clouds. In addition, all improvements to diffusion algorithms like those
by <cit.> can be carried over.
Limitations: The main limitation of our method is the increase in inference
cost during decoding. This can be partly mitigated by using fewer steps like in Figure <ref>. In addition, techniques used to improve diffusion
sampling time, such as progressive distillation <cit.> and InstaFlow <cit.>, can also be applied.
§ APPENDIX
§.§ Visual examples
|
http://arxiv.org/abs/2409.03579v1 | 20240905143430 | Disjoint Compatibility via Graph Classes | [
"Oswin Aichholzer",
"Julia Obmann",
"Pavel Paták",
"Daniel Perz",
"Josef Tkadlec",
"Birgit Vogtenhuber"
] | cs.CG | [
"cs.CG"
] |
Oswin Aichholzer^1, Julia Obmann^1, Pavel Paták^2, Daniel Perz^3, Josef Tkadlec^4, Birgit Vogtenhuber^1
[1] Graz University of Technology, Institute of Software Technology
[2] Czech Technical University in Prague, Department of Applied Mathematics
[3] University of Perugia, Department of Engineering
[4] Charles University Prague, Computer Science Institute
Disjoint Compatibility via Graph Classes
Research on this work was initiated at the 6th Austrian-Japanese-Mexican-Spanish Workshop on Discrete Geometry and continued during the 16th European Geometric Graph-Week, both held near Strobl, Austria.
We are grateful to the participants for the inspiring atmosphere. We especially thank Alexander Pilz for bringing this class of problems to our attention.
D.P. was partially supported by the FWF grant I 3340-N35 (Collaborative DACH project Arrangements and Drawings).
The research stay of P.P. at IST Austria was funded by the project CZ.02.2.69/0.0/0.0/17_050/0008466 Improvement of internationalization in the field of research and development at Charles University, through the support of quality projects MSCA-IF.
§ ABSTRACT
Two plane drawings of graphs on the same set of points are called disjoint compatible if their union is plane and they do not have an edge in common.
Let S be a convex point set of 2n ≥ 10 points and let ℋ be a family of plane drawings on S.
Two plane perfect matchings M_1 and M_2 on S (which do not need to be disjoint nor compatible) are disjoint ℋ-compatible if there exists a drawing in ℋ which is disjoint compatible to both M_1 and M_2.
In this work, we consider the graph which has all plane perfect matchings as vertices and where two vertices are connected by an edge if the matchings are disjoint ℋ-compatible.
We study the diameter of this graph when ℋ is the family of all plane spanning trees, caterpillars or paths.
We show that in the first two cases the graph is connected with constant and linear diameter, respectively, while in the third case it is disconnected.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 734922.
§ INTRODUCTION
In this work we study straight-line drawings of graphs.
Two plane drawings of graphs on the same set S of points are called compatible if their union is plane. The drawings are disjoint compatible if they are compatible and do not have an edge in common. For a fixed class G, e.g. matchings, trees, etc., of plane geometric graphs on S the (disjoint) compatibility graph of S has the elements of G as the set of vertices and an edge between two elements of G if the two graphs are (disjoint) compatible. For example, it is well known that the (not necessarily disjoint) compatibility graph of plane perfect matchings is connected <cit.>.
Moreover, in <cit.> it is shown that there always exists a sequence of at most Ø(log n) compatible (but
not necessarily disjoint) perfect matchings between any two plane perfect matchings of a set
of 2n points in general position, that is, the graph of perfect matchings is connected
with diameter Ø(log n).
On the other hand, Razen <cit.> provides an example of a point set where this diameter is Ω(log n/ loglog n).
Disjoint compatible (perfect) matchings have been investigated in <cit.> for sets of 2n points in general position. The authors showed that for odd n there exist
perfect matchings which are isolated vertices in the disjoint compatibility graph and posed the following conjecture:
For every perfect matching with an even number of edges there exists a disjoint
compatible perfect matching. This conjecture was answered in
the positive by Ishaque, Souvaine and Tóth <cit.> and it was mentioned that for even n it remains an open problem whether the disjoint compatibility graph is always connected.
In <cit.> it was shown that for sets of 2n ≥ 6 points in convex position
this disjoint compatibility graph is (always) disconnected.
Both concepts, compatibility and disjointness, are also used in combination with different geometric graphs. For example, in <cit.> it was shown that the flip-graph of all triangulations that admit a (compatible) perfect matching, is connected[In the flip-graph, two triangulations are connected if they differ by a single edge.]. It has also been shown that for every graph with an outerplanar embedding there exists a compatible plane perfect matching <cit.>. Considering plane trees and simple polygons, the same work provides bounds on the minimum number of edges a compatible plane perfect matching must have in common with the given graph. For simple polygons, it was shown in <cit.> that it is -hard to decide whether there exist a perfect matching which is disjoint compatible to a given simple polygon. See also the survey <cit.> on the related concept of compatible graph augmentation.
In a similar spirit we can define a bipartite disjoint compatibility graph, where the two sides of the bipartition represent two different graph classes.
Let one side be all plane
perfect matchings of S while the other side consists of all plane spanning trees of S.
Edges represent the pairs of matchings and trees which are disjoint compatible.
Considering connectivity of this bipartite graph, there trivially exist isolated
vertices on the tree side – consider a spanning star, which can not have any disjoint compatible matching. Thus, the question remains whether there exists a bipartite connected subgraph which contains all vertices representing plane perfect matchings.
This point of view leads us to a new notion of adjacency for perfect matchings.
For a given set S of 2n points and a family ℋ of drawings on S,
two plane perfect matchings M_1 and M_2 (which do not need to be disjoint nor compatible) are disjoint ℋ-compatible if there exists a drawing D in ℋ which is disjoint compatible to both M_1 and M_2; see <ref> for an example.
The disjoint ℋ-compatibility graph 𝒟(ℋ) has all plane perfect matchings of S as vertices.
We have an edge between the vertices corresponding to M_1 and M_2 if M_1 and M_2 are disjoint ℋ-compatible.
In other words, they are two steps apart in the corresponding bipartite disjoint compatibility graph.
Rephrasing the above question, we ask whether 𝒟(ℋ) is connected.
Recall that the disjoint compatibility graph for perfect matchings alone is not connected (see <cit.>).
In this work we study the case where S is a set of 2n points in convex position and consider the cases where
ℋ is the family 𝒯 of all plane spanning trees,
the family 𝒞 of all plane spanning caterpillars, or
the family 𝒫 of all plane spanning paths.
We show that 𝒟(𝒯) and 𝒟(𝒞) are connected if
2n ≥ 10.
In that case the diameter of 𝒟(𝒯) is either 4 or 5, independent of n,
and the diameter of 𝒟(𝒞) is Ø(n).
For n=2, 𝒟(𝒯) and 𝒟(𝒞) are also connected,
while for 4≤ n ≤ 10, 𝒟(𝒯) and 𝒟(𝒞) are disconnected. This was verified by computer.
On the other hand, we show that 𝒟(𝒫) is disconnected.
From here on, if not said otherwise, all matchings, trees, caterpillars and paths are on point sets in convex position and are plane.
Hence, we omit the word 'plane' for these drawings.
Further, all matchings considered in this work are perfect matchings.
This work is partially based on the master's thesis of the second author <cit.>.
§ PRELIMINARIES
Throughout this article let S be a set of 2n points in the plane in convex position.
The edges of a drawing on S can be classified in the following way.
We call edges, that are spanned by two neighboring points of S, perimeter edges; all other edges spanned by S are called diagonals.
We call matchings without diagonals perimeter matchings.
Note that there are exactly two perfect perimeter matchings.
We label the perimeter edges alternately even and odd.
The even perimeter matching consists of all even perimeter edges.
The odd perimeter matching consists of all odd perimeter edges.
Looking at a matching M on S, the edges of M split the convex hull of S into regions, such that no edge of M crosses any region.
More formally, we call a set X ⊂ M of k≥ 2 matching edges a k-semicycle if no edge of M intersects the interior of the convex hull of X.
Further, we call the boundary of the convex hull of X a k-cycle, denoted by X̄.
If X contains at least two diagonals of S, then we call X an inside k-semicycle.
Otherwise, we call X a k-semiear (this includes perimeter matchings); see <ref>.
Analogously, we denote cycles as inside k-cycles or k-ears, respectively.
Consider a perfect matching M and a semicycle X of M. We say that we rotate X if we take all edges of M and replace X by X̄ ∖ X, which gives us a perfect matching M'.
So the symmetric difference of M and M' is exactly X̄.
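To make the rotation operation concrete, here is a small Python sketch (our own representation: points are labelled 0, …, 2n−1 in convex position and each edge is a frozenset of two labels):

```python
def rotate_semicycle(matching, semicycle):
    """Rotate a semicycle X of a matching M: replace X by the other edges of its hull cycle."""
    pts = sorted({p for edge in semicycle for p in edge})     # endpoints, in convex order
    hull_cycle = {frozenset((pts[i], pts[(i + 1) % len(pts)])) for i in range(len(pts))}
    return (matching - semicycle) | (hull_cycle - semicycle)

# Example on 6 points: rotating the 2-semiear {01, 23} in {01, 23, 45} yields {03, 12, 45}.
M = {frozenset(e) for e in [(0, 1), (2, 3), (4, 5)]}
X = {frozenset((0, 1)), frozenset((2, 3))}
print(rotate_semicycle(M, X))
```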
§ DISJOINT COMPATIBILITY VIA SPANNING TREES
In this section we show that for convex point sets S of 2n ≥ 10 points, the disjoint compatibility graph 𝒟(𝒯) is connected.
We further prove that the diameter is upper bounded by 5. The idea is that any matching on S has small distance to one of the two perimeter matchings, and those themselves are close to each other in 𝒟(𝒯).
First we show that arbitrarily many inside cycles can be simultaneously rotated in one step.
Lemma.
Let M and M' be two matchings whose symmetric difference is a union of disjoint inside cycles. Then M and M' are disjoint 𝒯-compatible.
First we focus on one inside cycle C.
Let u_1v_1 and u_2v_2 be two diagonals of C, labeled as in <ref>.
Note that v_1 and u_2 could be the same point.
We take the edges from u_1 to every point between v_1 and u_2 and from u_2 to every point between v_2 and u_1, including u_1.
This yields a tree T_1 on the points of C except v_1 and v_2.
If we have multiple disjoint inside cycles, we construct a tree B'_j in this way for every inside cycle C_j.
Note that the inside cycles split the convex hull of S into multiple parts.
We denote each part with A_i ⊂ S where each A_i is chosen maximal in the sense that it also contains the vertices of the bounding diagonals, see <ref>.
In other words, the A_is are chosen such that the intersection of any inside cycle and any A_i contains at most two vertices, any two distinct A_i have at most a point in common
and for any diagonal of an inside cycle, there exists exactly one index i such that A_i contains the vertices of the diagonal.
Further, let M_i be the induced matching of M on A_i. Note that M_i is also the induced matching of M' on A_i.
For each index i we add a set of edges B_i on A_i which do not cross any edge in M_i, such that M_i and B_i do not have an edge in common and B_i ∪ M_i is a triangulation of A_i.
We claim that B_i spans all points in A_i.
Clearly, B_i ∪ M_i spans all points in A_i.
Let e be an edge of M_i and Δ be a triangle of B_i ∪ M_i that contains e.
Since M_i is a matching, Δ and M_i have exactly e in common.
Hence, B_i contains the other two edges of Δ.
Removing any edge e from B_i ∪ M_i does not lead to a disconnected drawing.
Therefore, B_i spans all points in A_i.
Merging all the drawings B_i and B'_j we get a spanning drawing on S.
Breaking cycles one by one we eventually obtain a spanning tree.
We next consider sufficiently large ears. The following lemma states that such ears can be rotated in at most three steps; see <ref> for a sketch of this sequence of rotations, whose proof uses <ref>.
Note that <ref> also implies that the two perimeter matchings have distance at most 3 in 𝒟(𝒯).
Lemma.
Let M and M' be two matchings whose symmetric difference is a k-ear with k≥ 6. Then M and M'
have distance at most 3 in 𝒟(𝒯).
The idea of the proof is to perform three rotations of inside cycles. Each rotation can be done in one step due to Lemma <ref>.
We proceed as in <ref>:
First we find 4 points A, B, C, D of the ear such that each of the four arcs AB, BC, CD, DA of the ear contains a positive even number of points in its interior. Without loss of generality, the points A, B are matched inside AB and C, D are matched inside CD. We then perform the following three steps: rotate ABCD, rotate a 2-cycle ABCD, rotate BCDA. Since each arc initially contained at least two points, each step rotates an inside cycle, and it is easily checked that this transforms M into M'.
Theorem.
For 2n ≥ 10, the graph 𝒟(𝒯) is connected with diameter diam(𝒟(𝒯)) ≤ 5.
For 2n=10, the statement follows from checking all pairs of matchings.
<Ref> gives a schematic depiction of the whole graph 𝒟(𝒯) in this case.
If we want to find a path between rotated versions of some nodes,
we just need to find a walk in the picture along which
the rotations compose to the desired value.
Now assume 2n≥ 12.
We color the perimeter alternately in blue and red and refer to the odd (resp. even) perimeter matching as the blue perimeter matching B (resp. red perimeter matching R). Moreover, for a fixed matching M, let d_min(M) = min{d(M,B), d(M,R)} and d_max(M) = max{d(M,B), d(M,R)} be the distance from M to the closer and the further perimeter matching, respectively.
Since by <ref> we have d(B,R) ≤ 3, it suffices to show that the non-perimeter matchings can be split into three classes A_1, A_2, A_3 with the following properties (see <ref>):
1. ∀ M∈ A_1 we have d_min(M) ≤ 1 (and hence d_max(M) ≤ 1+3 = 4);
2. ∀ M∈ A_2 we have d_min(M) ≤ 2 and d_max(M) ≤ 3;
3. ∀ M∈ A_3 we have d_max(M) ≤ 3 and ∀ M,M'∈ A_3 we have d(M,M') ≤ 4.
Fix a matching M. It consists of a number (possibly zero) of diagonals, odd perimeter edges (shown in blue), and even perimeter edges (shown in red).
The convex hull of S is split by the diagonals into several polygons, each of them corresponding to a cycle. The dual graph D(M) of these polygons is a tree. Its leaves correspond to ears and the interior nodes correspond to inside cycles.
Since the diagonals of M split the perimeter into (possibly empty) arcs that alternately consist of only red and only blue sides, the nodes of the tree can be properly two-colored in blue and red by the color of the perimeter edges of the corresponding polygons (see <ref>).
Now we distinguish four cases based on what the dual tree D(M) looks like. Let b and r be the number of leaves in D(M) colored blue and red, respectively. Without loss of generality we assume that b≥ r. Remember that by Lemma <ref> we can rotate any number of disjoint inside cycles in one step.
∙b≥ 1,r=0: If b=1 then M=B. Otherwise, we simultaneously rotate all red inside cycles. This removes all diagonals, we reach B in 1 step and we put M into A_1.
∙b≥ 2, r≥ 2: We can get to B in 2 steps: First, simultaneously rotate all blue inside cycles (this removes all diagonals except the ones separating blue leaves of D(M)). Then rotate the (only, red) inside cycle. Similarly, we can reach R in 2 steps, hence M can go to A_2. (This case can only occur when 2n≥ 16.)
∙b≥ 2, r=1: See <ref>. In the first step, rotate all blue inside cycles to get b≥ 2 blue leaves and one (red) inside cycle. To get to B, rotate the inside cycle (≤ 2 steps total). To get to R, note that the original diagonal that cut off the red leaf disappeared in the first step, hence it was rotated out and we must now have at least 1+1+1≥ 3 consecutive red sides, say e, f, g.
Rotate the inside cycle without e and g and then rotate the inside cycle. This gets to R in 3 steps, hence M can go to A_2. (This case can only occur when 2n≥ 14.)
∙b=1,r=1: In the first step, rotate all blue inside cycles and push the diagonal that cuts off the blue leaf to a side, if it is not there yet, by rotating the whole blue ear without one blue perimeter edge (see <ref>(a)). Since 2n≥ 10, we have at least 3 consecutive red edges and, as in the previous case, we can thus reach R in two more steps (for a total of 3 steps). Likewise for B, hence we aim to put M into A_3.
For that, we need to check that any two such matchings are distance at most 4 apart. To that end, it suffices to check that any two matchings N, N' with one diagonal that cuts off a single blue perimeter edge are in distance at most 4-1-1=2 apart. This is easy (see <ref>(b)): Label the n red perimeter edges by e_1,…,e_n and for each i=1,…,n, denote by M_i the matching with one diagonal that cuts off the perimeter edge e_i. We claim that some M_i is adjacent to both N and N'. In fact, we claim that N is adjacent to at least n-2 of the n matchings M_i. Indeed, for any of the n-2 red sides e_i present in N, we can rotate the (inside) cycle consisting of the red leaf of D(N) without e_i. The same holds for N'. Since for 2n≥ 10, we have (n-2)+(n-2)>n, there is a matching M_i adjacent to both N and N'.
§.§ A lower bound for the diameter of 𝒟(𝒯)
Since the diameter of 𝒟(𝒯) has a constant upper bound, it seems reasonable to also ask for a best possible lower bound.
To do so, we first identify structures which prevent two matchings from being disjoint 𝒯-compatible.
Let M and M' be two matchings in S. A boundary area with k points is an area within the convex hull of S containing k points of S that is bounded by edges in M and M' such that these edges form at least one crossing and such that all points of S on the boundary of the area form a sequence of consecutive points of S along the boundary of the convex hull of S; see <ref>.
We next define two special matchings.
A 2-semiear matching is a matching on a set of 4k points consisting of exactly k 2-semiears and an inside k-semicycle (with all its edges being diagonals).
Similarly, a near-2-semiear matching is a matching on a set of 4k+2 points consisting of exactly k 2-semiears and an inside (k+1)-semicycle;
see <ref>.
As for perimeter matchings, we distinguish between odd and even 2-semiear matchings.
If the perimeter edges of the 2-semiears are labeled 'even' then we call the (near-)2-semiear matching even, otherwise we call it odd.
Lemma.
Let M, M' be two matchings whose symmetric difference is an ear or a boundary area with at least three points. Then M and M' are not disjoint 𝒯-compatible.
We consider two matchings M and M' creating a k-ear and we call the respective polygon P (cf. <ref>). The proof for a boundary area with at least three points works in a similar way.
If the two matchings are disjoint 𝒯-compatible, we can draw a plane spanning tree on S that is disjoint compatible to both.
Let p_1 and p_2 be the two endpoints of the diagonal in the ear. Any other point in P cannot be directly connected to a point outside P via a tree edge, therefore at least k-2 tree edges need to lie within P (if p_1 and p_2 are connected to each other outside P; otherwise even k-1 tree edges are needed).
However, all edges on the boundary of P are used by M or M', and by planarity at most k-3 pairwise non-crossing diagonals can be drawn inside a polygon spanned by k points, a contradiction.
Lemma.
Let M be a matching that is disjoint 𝒯-compatible to an even (odd) 2-semiear matching. Then M contains no odd (even) perimeter edge.
We prove the statement by contradiction and assume that there exists a disjoint 𝒯-compatible matching M which contains at least one odd perimeter edge.
This matching edge connects one endpoint of a perimeter edge with its neighboring vertex (matched by a diagonal in the 2-semiear matching), see <ref>.
We distinguish cases based on which vertex the other endpoint of this perimeter edge is matched to in M. If it is matched to an endpoint of the diagonal of the 2-semiear, the two matchings create an ear, a contradiction to <ref> (cf. <ref> on the left).
Otherwise, this matching edge intersects with the diagonal of the 2-semiear matching, thus creates a boundary area with three points, a contradiction to <ref> (cf. <ref> on the right).
The following lemma can be proven in a similar way.
Lemma.
Let M be a matching that is -compatible to a near-2-semiear-matching M' consisting of k even (odd) perimeter edges and one odd (even) perimeter edge. Then M contains at most one odd (even) perimeter edge, which is the one in M'.
Let M be a matching -compatible to a near-2-semiear matching M' as defined in the statement.
All but one of the odd perimeter edges would connect an even perimeter edge with its diagonal in a 2-semiear, and therefore cannot be contained in M as shown in the proof of <ref>.
Consequently, there is at most one odd perimeter edge in M (which is exactly the odd perimeter edge in M').
Lemma.
Let M and M' be two -compatible matchings. Then M and M' have at least two perimeter edges in common.
Let M' be a matching -compatible to M.
First of all, we consider the case that M is a perimeter matching. Without loss of generality, M is the even perimeter matching.
Our claim is that M' has no odd semiear.
Assume to the contrary that M' has odd semiears.
Any odd semiear in M' creates an ear with M, thus the two matchings cannot be -compatible to each other, a contradiction to the assumption.
Therefore we can conclude that our statement holds for perimeter matchings since every non-perimeter matching contains at least two semiears. (Consider the dual graph where the areas defined by matching edges correspond to points and two points are connected if and only if the two areas are separated by a matching edge. This graph forms a tree where semiears in the matching correspond to leaves in the tree.)
All other matchings have at least two semiears and we distinguish different cases.
Case 1: There exist two semiears of size ≥3 in M
Our claim is that at least one of the perimeter edges of each semiear lies in M'. We consider one semiear and assume to the contrary that none of the perimeter edges of this semiear lies in M'.
Without loss of generality we assume that the semiear is even.
Thus by assumption, every vertex of this semiear is either matched by an odd perimeter edge or by a diagonal in M'.
If all points in the semiear are matched by odd perimeter edges in M', we get an ear contradicting <ref> (cf. <ref> (a)).
If two points in the semiear are matched with each other by a diagonal (in M'), the other points (in the semiear) are separated into two sets. Those on the side with just perimeter edges have to be matched with each other in M', otherwise M' would intersect itself. We can iteratively shrink this side, until the remaining points are all matched by odd perimeter edges. This again creates an ear (cf. <ref> (b)).
Otherwise, at least one diagonal in M' intersects the diagonal (in M) of the semiear, starting at an endpoint of an even perimeter edge. If the other endpoint of this edge is matched by an odd perimeter edge in M', we get a boundary area with at least four points, therefore no spanning tree can be drawn and the matchings are not -compatible (cf. <ref> (c)).
If the other endpoint of the even perimeter edge is also matched by a diagonal in M', we get a so-called 'blocking structure', i.e., the two endpoints of the perimeter edge cannot be connected directly by a spanning tree.
Since we already excluded diagonals within the semiear, the vertex neighboring this perimeter edge also has to be matched by a diagonal in M' (and it exists since we assumed that the size of the semiear is at least 3). We again consider the other endpoint of this even perimeter edge and either construct a boundary area with at least three points (cf. <ref> (d)), which again leads to a contradiction, or we get a second blocking structure (cf. <ref> (e)). However, this concludes this case as well, since the points in between the two blocking structures are separated from the other points and cannot be connected with them by any spanning tree.
It follows that at least one of the perimeter edges in the semiear of M also lies in M'.
Analogously we can apply the argument for the other semiear.
Case 2: All but one semiear in M are of size 2
For simplicity we assume without loss of generality that there exists an even 2-semiear in M. Matching the points of this semiear by odd perimeter edges yields an ear, a contradiction by <ref> (cf. <ref> (a)).
If one of the endpoints of the even perimeter edge is matched by an odd perimeter edge in M' and the other one is matched by a diagonal, we get a boundary area with three points, contradicting <ref> (cf. <ref> (b)).
Therefore, both endpoints of the even perimeter edge are matched by diagonals of M' which intersect the diagonal of the 2-semiear (cf. <ref> (c)).
We can assume that this holds for all semiears of size two in M, otherwise we apply one of the arguments above.
Out of the 2-semiears of M we choose the one with no further semiear of M (also not the one of larger size) on one side of a diagonal d in M', where d is incident to a point of the perimeter edge of that ear. This is possible since the number of semiears is finite and the diagonals in M' cannot intersect each other, therefore there is an ordering of the 2-semiears in M (and only one semiear of larger size).
Without loss of generality there is no semiear of M left of d.
It is easy to see that the diagonal d induces a semiear in M' left of d. If this semiear is of size 2 and two diagonals d'_1 and d'_2 in M are intersecting the diagonal of the semiear, we get another blocking structure.
(Otherwise we can apply one of the other arguments above to the semiear in M' and again end up with a perimeter edge lying in both matchings.) It follows that d'_1 and d'_2 have to intersect the diagonals in M' that intersect the even 2-semiear. Otherwise another semiear in M to the left of d would be induced, a contradiction. However, this separates at least three points from the rest and it is not possible to find a common compatible spanning tree (cf. <ref>).
Case 3: All semiears in M are of size 2
This case works similar to the second case. If the cases (a) or (b) in <ref> can be applied to two 2-semiears, we are done since both perimeter edges also lie in M'. If we can apply one of those cases to at least one 2-semiear, we treat this semiear like the semiear of larger size in Case 2 and proceed as before.
Otherwise, all 2-semiears in M are as depicted in <ref> (c). Again there is an ordering of those 2-semiears and now we can choose two of them such that there is no further semiear of M on one side of a diagonal in M'. (For one of them there is no further semiear on the 'left' side, for the other there is none on the 'right' side.) It follows that two distinct semiears in M' are induced. The arguments in Case 2 can be applied separately to both of them, therefore we end up with at least two perimeter edges which lie in both M and M'.
Corollary.
Let S be of size 2n≥ 10.
For even n, the distance between an even 2-semiear matching and an odd 2-semiear matching is at least 4.
For odd n, let M be a near-2-semiear matching with a single even perimeter edge e and let M' be a near-2-semiear matching with a single even perimeter edge e'
that shares a vertex with e.
Then the distance between M and M' is at least 4.
n is even:
By <ref> we know that for every matching -compatible to an even 2-semiear matching all perimeter edges are even. Now by <ref> all matchings which are -compatible to them contain at least two of their even perimeter edges.
Analogously, in every matching -compatible to an odd 2-semiear matching all perimeter edges are odd, and all matchings -compatible to those contain at least two odd perimeter edges (in particular any matching with no odd perimeter edge is not -compatible).
Combining these results shows that there are at least three intermediate matchings between an even and an odd 2-semiear matching in the disjoint -compatible graph.
n is odd:
By <ref> every matching -compatible to M contains at most one odd perimeter edge, namely the same as in M, say o_1. Analogously, every matching -compatible to M' contains no even perimeter edge other than the one in M', say e_1.
As before we can apply <ref> and deduce that all matchings -compatible to those with at most one odd or even perimeter edge, respectively, contain at least two perimeter edges. However, since o_1 and e_1 are incident, they cannot both appear in any of the -compatible matchings at the same time, thus the two sets of -compatible matchings are disjoint, which implies a total lower bound of four for the distance of M and M'.
§ DISJOINT CATERPILLAR-COMPATIBLE MATCHINGS
A natural question is what happens if we do not take the set of all plane spanning trees, but a smaller set.
A caterpillar (from p to q) is a tree which consists of a path (from p to q, also called its spine) and edges with one endpoint on the path.
These latter edges are also called the legs of the caterpillar.
We denote the set of all plane spanning caterpillars by .
Furthermore, a one-legged caterpillar is a caterpillar where every vertex of the spine is incident to at most one leg.
We denote the family of all plane spanning one-legged caterpillars in S by _3.
Note that every vertex of a one-legged caterpillar has degree at most 3.
Hence, one-legged caterpillars are special instances of trees with maximum degree 3.
Lemma.
For any edge e=pq of a matching M there exists a plane one-legged caterpillar compatible to M from p to q which spans all points between p and q along the boundary of the convex hull of S (on either side of e). This caterpillar has vertices of degree at most 3, and p and q are vertices of its spine.
We construct the caterpillar C in a greedy way from p to q.
At the start, let C be the point p.
Assume we have a one-legged caterpillar C from p to a point x, and C contains each point between p and x.
Let y and z be the next two points from x to q.
If xy is not an edge of M, we add xy to C and continue from y.
Otherwise, if xy is an edge of M, then xz and yz are not edges of M.
We add xz and yz to C and continue from z.
By construction, every spine vertex has at most one leg.
Further, every point between p and q is in C by construction.
So we constructed a one-legged caterpillar.
An example is depicted in <ref>.
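The greedy construction in the proof above translates directly into code. A small Python sketch follows (illustrative only; the list-of-indices representation of the points from p to q and the hypothetical predicate `in_matching` are our own choices, not from the paper).

```python
def greedy_one_legged_caterpillar(points, in_matching):
    """Greedy construction of a one-legged caterpillar from p to q avoiding M.

    `points` lists p = points[0], ..., q = points[-1] in convex-position order
    along one side of the matching edge pq; `in_matching(a, b)` returns True iff
    ab is an edge of M.  Since pq itself belongs to M, the loop never needs a
    point beyond q.  (Sketch only, not the paper's code.)"""
    edges = []
    i = 0
    while i < len(points) - 1:
        x, y = points[i], points[i + 1]
        if not in_matching(x, y):
            edges.append((x, y))      # xy is free: spine edge, continue from y
            i += 1
        else:
            z = points[i + 2]         # xy is in M, so xz and yz are not
            edges.append((x, z))      # spine edge xz
            edges.append((y, z))      # leg yz attached at z
            i += 2
    return edges

# Points 0,...,5 on a convex arc with M = {05, 12, 34}; caterpillar from 0 to 5:
M = {frozenset(e) for e in [(0, 5), (1, 2), (3, 4)]}
print(greedy_one_legged_caterpillar(list(range(6)),
                                    lambda a, b: frozenset((a, b)) in M))
# [(0, 1), (1, 3), (2, 3), (3, 5), (4, 5)]
```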
Note that every matching M contains a perimeter edge and by <ref> there also exists a caterpillar which is disjoint compatible to M.
Further, by construction p is incident to only one edge.
Lemma.
Let M and M' be two matchings whose symmetric difference is an inside cycle. Then M and M' are disjoint -compatible.
Let K be the inside cycle which is the symmetric difference of two perfect matchings M and M'.
For every diagonal of K we construct a caterpillar as in <ref> and merge caterpillars which have a point in common.
If every edge of K is a diagonal, then merging all those caterpillars gives a cycle consisting of the spine edges. By deleting one spine edge we obtain a spanning caterpillar.
Otherwise, the merging yields a set C_1, …, C_r of caterpillars whose endpoints are points of the diagonals.
We label the two endpoints of C_i with s_i and t_i in clockwise direction; cf. <ref>.
Note that s_it_i might not be a diagonal.
Further note that every point of S, which is not on K, is in one of C_1, …, C_r.
We connect these caterpillars in the following way.
We add the edges t_i s_{r+1-i} for 1 ≤ i ≤ r/2, and s_{i-1} t_{r+1-i} for 2 ≤ i ≤ r/2.
This gives us a caterpillar C which contains all C_i.
Note that C starts at s_1 and ends at s_{r/2} if r is even, and at t_{(r+1)/2} if r is odd.
For any point p between t_r and s_1 we add the edge t_1p to C.
If r is odd, then we add the edge s_{r+1-i} p to C for any point p between t_i and s_{i+1}, where 1≤ i ≤ r-1.
C ends at point t_{(r+1)/2} since r is odd. Hence, all these edges are on the inside of K.
If r is even, then we add the edge s_{r+1-i} p to C for any point p between t_i and s_{i+1}, where 1≤ i ≤ r-1 and i≠ r/2.
For any point p between t_{r/2} and s_{r/2+1}, we add the edge s_{r/2+1} p to C.
After adding all these edges to C, any point of K is contained in C.
Hence, C is a spanning caterpillar.
Note that <ref> is a sufficient condition for -compatibility of matchings similar to <ref> for -compatibility.
Adapting the proof of <ref> to rotate only one cycle (instead of several) per step, and noting that the number of cycles is O(n), we get the following theorem.
Theorem.
For 2n≥ 10, the graph () is connected with diameter (())=O(n).
For 2n=10, <ref> gives a schematic depiction of (). Note that either only one inside cycle is rotated or the indicted tree is a caterpillar.
Hence, <ref> is also a schematic depiction of () for 2n=10.
In the proof of <ref> we showed for 2n≥ 12 that (())≤ 5 by rotating multiple inside cycles at once.
Hence, for any two matchings M and M' there exists a sequence of matchings M=M_0, M_1, M_2, M_3, M_4, M_5=M' such that the symmetric difference of M_i and M_{i+1} is a set of inside cycles for 0≤ i ≤ 4.
Now consider two matchings M_i and M_{i+1} whose symmetric difference is a set of inside cycles.
Note that the number of inside cycles is at most n/2 since each of these cycles contains at least 4 points.
By <ref> we can rotate one inside cycle in one step.
Therefore, M_i and M_{i+1} are at distance at most n/2 in () for any 0≤ i ≤ 4.
It follows that (())≤ 5n/2.
Next we consider disjoint _3-compatible matchings.
As before, we first find a sufficient condition for their compatibility.
Lemma.
Let M and M' be two matchings whose symmetric difference is an inside 2-cycle. Then M and M' are disjoint _3-compatible.
We have four cases depending on how many edges of the 2-cycle are diagonals and their relative position.
The four cases are depicted in <ref>.
In the leftmost case, where
exactly two diagonals share a point, we take two one-legged caterpillars from q_2 to q_1 and q_3, respectively, constructed as in the proof of <ref>.
Note that each such caterpillar has degree 1 at its start point. Hence, together with the edge q_2q_4 they form a one-legged caterpillar which is disjoint compatible to both M and M'.
If we have two diagonals which are not adjacent, we take two one-legged caterpillars from q_2 to q_3 and from q_1 to q_4, constructed as in the proof of <ref>.
We connect these two caterpillars with the edge q_1q_3 and obtain a spanning one-legged caterpillar.
If we have three diagonals, we take the one-legged caterpillars from q_1 to q_2, from q_2 to q_3 and from q_3 to q_4, constructed as in the proof of <ref>.
This is already a spanning one-legged caterpillar.
If we have four diagonals, we take the one-legged caterpillars from q_1 to q_2, from q_2 to q_3, from q_3 to q_4 and from q_4 to q_1, constructed as in the proof of <ref>.
The spines of these caterpillars form a cycle. Deleting any of the spine edges yields a spanning one-legged caterpillar.
With this, we can show the following theorem.
Theorem.
For 2n≥ 10, the graph (_3) is connected and has diameter ((_3))=O(n).
We first show that any two matchings M and M' whose symmetric difference is a single inside cycle K are connected in (_3).
Consider an inside cycle K and label its points with u_0, u_1, …, u_x, v_y, v_{y-1}, …, v_0 such that u_0v_0 and u_xv_y are two diagonals of K.
Note that since K has an even number of edges the parity of x and y is the same.
We split K into interior-disjoint 2-cycles K_i in the following way; cf <ref>.
If x is even, then K_i is the 2-cycle v_0 u_{2i} u_{2i+1} u_{2i+2} for 0≤ i ≤ x/2-1.
For x/2 ≤ i ≤ (x+y)/2-1, let K_i be the 2-cycle u_x v_{2i-x} v_{2i-x+1} v_{2i-x+2}.
If x is odd, then K_i is the 2-cycle v_0 u_{2i} u_{2i+1} u_{2i+2} for 0≤ i ≤ (x-1)/2-1.
For (x+1)/2 ≤ i ≤ (x+y)/2-1, let K_i be the 2-cycle u_x v_{2i-x-1} v_{2i-x} v_{2i-x+1}.
Further let K_{(x+y)/2} be u_{x-1} u_x v_y v_{y-1}.
Note that every edge of K is in exactly one of K_1, …, K_r.
Further, every edge of K_i, 1 ≤ i ≤ r, in the interior of K is in exactly two of K_1, …, K_r.
Let M_0, …, M_r be matchings such that the symmetric difference of M_{i-1} and M_i is K_i for i=1,…,r.
Then by <ref>, M_i-1 and M_i are disjoint _3-compatible.
Further, the symmetric difference of M_0 and M_r is K, implying that M and M' are connected in (_3).
Combining this result with the proof of <ref>, it follows that (_3) is connected.
The bound on the diameter then follows from the bound on diameter of () in combination with the fact that
any set of disjoint inside cycles can be split into O(n) disjoint inside 2-cycles.
§ DISJOINT PATH-COMPATIBLE MATCHINGS
Let be the family of all spanning paths on S.
Note that paths are special instances of trees and caterpillars.
The following proposition states that in contrast to trees and caterpillars, () is disconnected.
Proposition.
Let M be a plane matching on S with at least three semiears. Then there is no spanning path on S which is disjoint compatible to M, that is, M is an isolated vertex in ().
We claim that any semiear in a matching M has to contain an end of a disjoint compatible path. By contradiction, we assume the contrary. We consider a k-semiear and label the 2k vertices of the semiear along the boundary of the convex hull by v_i, that is, the path enters the semiear at vertex v_1 and leaves it at v_{2k}.
Observe that once the path reaches vertex v_i, it can only visit either vertices with smaller or larger index (cf. <ref>). Therefore we need to go along them in ascending order. However this is not possible since there is an edge v_2 v_3 and we demand disjoint compatibility.
This proves that in every semiear of M any disjoint compatible path has to start or end there, thus no matching with three or more semiears contains such a path.
From <ref> it follows that () contains isolated vertices if S is a set of at least 12 points.
Note that there are also matchings with two semiears that are not compatible to any spanning path.
On the other hand, one might ask whether all matchings which are disjoint -compatible to some other matching are in one connected component of ().
The following proposition gives a negative answer to that question.
Proposition.
The two perimeter matchings are not connected in ().
Consider the even perimeter matching.
We claim that every matching that is in the component of () containing the even perimeter matching has only even semiears[Even (odd) semiears have only even (odd) perimeter edges.].
Assume to the contrary that there exist two disjoint -compatible matchings M and M' such that M has only even semiears and M' has an odd semiear.
Let X_1 and X_2 be two even semiears of M and let X' be an odd semiear of M'.
Let D=X_1 ∪ X_2 ∪ X' be the union of the three semiears.
We have four cases.
Case 1. The interiors of (X_1), (X_2) and (X') are disjoint.
If X_1 ∩ X'=∅ and X_2 ∩ X' = ∅, then D is a plane matching with three ears.
Hence, we have a contradiction by <ref>.
Case 2. (X') is in (X_1) or in (X_2).
If (X') is in (X_1) or in (X_2), then D contains the boundary X' of (X') since X' only has odd perimeter edges and X_1, X_2 only have even perimeter edges.
Hence, D contains an ear which gives a contradiction to the assumption by <ref>.
Case 3. (X_1) is in (X') or (X_2) is in (X').
This case gives a contradiction in a similar way as Case 2.
Case 4. The interiors of (X') and (X_1) intersect or the interiors of (X') and (X_2) intersect.
Without loss of generality, assume that the interiors of (X') and (X_1) intersect.
Note that X' and X_1 do not have any edge in common.
Let B be the boundary of (X')∩(X_1).
Let S_B be the points of S that are in B.
Every point of S_B is incident to two edges of D since X' and X_1 are semiears and they do not have any edge in common.
This means that every point of S_B is incident to two perimeter edges except the two points that are incident to a diagonal.
If S_B contains at least three points, then B is a boundary area which gives a contradiction to the assumption by <ref>.
If S_B contains only two points, then the perimeter edge of X', that is incident to a point of S_B, and the perimeter edge of X_1, that is incident to a point of S_B, are both odd perimeter edges or are both even perimeter edges.
Hence, X' and X_1 are both odd semiears or even semiears, which is impossible since X_1 is an even semiear and X' is an odd semiear.
This means that a component of () containing a matching with only even semiears does not contain a matching which has an odd semiear.
So the even perimeter matching is in a component of () containing only matchings with only even semiears, while the odd perimeter matching is in a component of () containing only matchings with only odd semiears.
We remark that several more observations on () can be found in <cit.>.
§ CONCLUSION AND DISCUSSION
We have shown that the diameter of the disjoint -compatible graph ()
for point sets S of 2n points in convex position is 4 or 5 when 2n≥ 10.
We conjecture that the diameter of () is 4 for all 2n≥ 18.
An open question is the computational complexity of determining whether two given matchings have distance 3 in ().
For () and (_3), we showed that their diameters are both in O(n).
Determining whether those two diameters are (asymptotically) the same, and what their precise values are, remains open.
Regarding spanning paths we showed that () is disconnected, with no connection between the two perimeter matchings and many isolated vertices.
Further natural open questions include determining whether () is connected for general point sets, and whether there exist point sets S such that () is connected.
We remark that our main approach for bounding diameters was to rotate inside semicycles.
A similar approach has also been used in a different setting of flip graphs of matchings.
A difference is that in that flip graph setting, semiears can be flipped, which is not possible in the disjoint -compatible setting.
On the other hand, one can flip only one semicycle, or even only two edges at a time.
A recent related work on flip graphs
is <cit.>.
There, so-called centered flips in matchings on convex point sets are considered.
A centered flip is the rotation of an empty quadrilateral that contains the center of the point set.
This operation is more restrictive than our rotation of quadrilaterals for (_3),
as can also be seen by the fact that the flip graph of matchings with centered flips is sometimes disconnected.
Gaussian Rate-Distortion-Perception Coding and Entropy-Constrained Scalar Quantization
Li Xie, Liangyan Li, Jun Chen, Lei Yu,
and Zhongshan Zhang
September 9, 2024
======================================================================================
§ ABSTRACT
This paper investigates the best known bounds on the quadratic Gaussian distortion-rate-perception function with limited common randomness for the Kullback-Leibler divergence-based perception measure, as well as their counterparts for the squared Wasserstein-2 distance-based perception measure, recently established by Xie et al. These bounds are shown to be nondegenerate in the sense that they cannot be deduced from each other via a refined version of Talagrand's transportation inequality. On the other hand, an improved lower bound is established when the perception measure is given by the squared Wasserstein-2 distance. In addition,
it is revealed by exploiting the connection between rate-distortion-perception coding and entropy-constrained scalar quantization that all the aforementioned bounds are generally not tight in the weak perception constraint regime.
Entropy-constrained scalar quantizer, Gaussian source, Kullback–Leibler divergence, optimal transport, rate-distortion-perception coding, squared error, transportation inequality, Wasserstein distance.
§ INTRODUCTION
Rate-distortion-perception theory <cit.>, as a generalization of Shannon's rate-distortion theory, has received considerable attention in recent years. It provides a framework for investigating the performance limits of perception-aware image compression. This is partly accomplished by assessing compression results more comprehensively, using both distortion and perception measures. Unlike distortion measures, which compare each compressed image with its corresponding source image, perception measures focus on the ensemble-level relationship between pre- and post-compression images. It has been observed that at a given coding rate, there exists a tension between distortion loss and perception loss <cit.>. Moreover, the presence of a perception constraint often necessitates the use of stochastic algorithms <cit.>. In contrast, deterministic algorithms are known to be adequate for conventional lossy source coding.
Although significant progress has been made in characterizing the information-theoretic limits of rate-distortion-perception coding, existing results are almost exclusively restricted to special scenarios with the availability of unlimited common randomness or with the perfect perception constraint (also referred to as perfect realism).
To the best of our knowledge, the only exception is <cit.>, which makes an initial attempt to study the fundamental distortion-rate-perception tradeoff
with limited common randomness by leveraging the research findings from output-constrained lossy source coding <cit.>. In particular, lower and upper bounds on the quadratic Gaussian distortion-rate-perception function under a specified amount of common randomness
are established in <cit.> for both Kullback-Leibler divergence-based and squared Wasserstein-2 distance-based perception measures.
These bounds shed light on the utility of common randomness as a resource in rate-distortion-perception coding, especially when the perceptual quality is not required to be perfect.
On the other hand, they in general do not match and are therefore inconclusive. Note that the aforementioned upper bounds are derived by restricting the reconstruction distribution to be Gaussian. A natural question thus arises whether this restriction incurs any penalty. A negative answer to this question is equivalent to the existence of some new Gaussian extremal inequalities, which are of independent interest. It is also worth noting that Kullback-Leibler divergence and squared Wasserstein-2 distance are related via Talagrand's transporation inequality <cit.> when the reference distribution is Gaussian. As such, there exists an intrinsic connection between the quadratic distortion-rate-perception functions associated with Kullback-Leibler divergence-based and squared Wasserstein-2 distance-based perception measures.
This connection has not been explored in existing literature.
We shall show that the bounds on the quadratic Gaussian distortion-rate-perception function with limited common randomness for the Kullback-Leibler divergence-based perception measure cannot be deduced from their counterparts for the squared Wasserstein-2 distance-based perception measure via a refined version of Talagrand's transportation inequality. In this sense, they are not degenerate. On the other hand, it turns out that the lower bound can be improved via an additional tunable parameter when the perception measure is given by the squared Wasserstein-2 distance. Furthermore, all the aforementioned bounds are generally not tight in the weak perception constraint regime.
We demonstrate this result by exploiting the connection between rate-distortion-perception coding and entropy-constrained scalar quantization. Our finding implies
that restricting the reconstruction distribution to be Gaussian may incur a penalty. This is somewhat surprising in view of the fact that the quadratic Gaussian distortion-rate-perception function with limited common randomness admits a single-letter characterization, which often implies the existence of a corresponding Gaussian extremal inequality <cit.>.
The rest of this paper is organized as follows. Section <ref> contains the definition of quadratic distortion-rate-perception function with limited common randomness and a review of some relevant results. Our technical contributions are presented in Sections <ref>, <ref>, and <ref>. We conclude the paper in Section <ref>.
We adopt the standard notation for information measures, e.g., H(·) for entropy, h(·) for differential entropy, I(·;·) for mutual information, and J(·) for Fisher information.
The cardinality of set 𝒮 is denoted by |𝒮|. For a given random variable X, its distribution, mean, and variance are written as p_X, μ_X, and σ^2_X, respectively. We use Π(p_X,p_X̂) to represent the set of all possible couplings of p_X and p_X̂.
For real numbers a and b, let a∧ b:=min{a,b}, a∨ b:=max{a,b}, and (a)_+:=max{a,0}.
Throughout this paper, the logarithm function is
assumed to have base e.
§ PROBLEM DEFINITION AND EXISTING RESULTS
A length-n rate-distortion-perception coding system (see Fig. <ref>) consists of an encoder f^(n):ℝ^n×𝒦→𝒥, a decoder g^(n):𝒥×𝒦→ℝ^n, and a random seed K. It takes an i.i.d. source sequence X^n as input and produces an i.i.d. reconstruction sequence X̂^n. Specifically, the encoder
maps X^n and K to a codeword J in codebook 𝒥 according to some conditional distribution p_J|X^nK while the decoder generates X̂^n based on J and K according to some conditional distribution p_X̂^n|JK. Here, K is assumed to be uniformly distributed over the alphabet 𝒦 and independent of X^n. The end-to-end distortion is quantified by 1/n∑_t=1^n𝔼[(X_t-X̂_t)^2] and the perceptual quality by 1/n∑_t=1^nϕ(p_X_t,p_X̂_t) with some divergence ϕ. It is clear that 1/n∑_t=1^nϕ(p_X_t,p_X̂_t)=ϕ(p_X,p_X̂), where p_X and p_X̂ are the marginal distributions of X^n and X̂^n, respectively.
For an i.i.d. source {X_t}_t=1^∞, distortion level D is said to be achievable subject to the compression rate constraint R, the common randomness rate constraint R_c, and the perception constraint P
if there exists a length-n rate-distortion-perception coding system such that
1/nlog|𝒥|≤ R,
1/nlog|𝒦|≤ R_c,
1/n∑_t=1^n𝔼[(X_t-X̂_t)^2]≤ D,
1/n∑_t=1^nϕ(p_X_t,p_X̂_t)≤ P,
and the reconstruction sequence X̂^n is ensured to be i.i.d. The infimum of such achievable distortion levels D is denoted by D(R,R_c,P|ϕ).
The following result <cit.>, which is built upon <cit.> (see also <cit.>), provides a single-letter characterization of D(R,R_c,P|ϕ).
For p_X with 𝔼[X^2]<∞,
D(R,R_c,P|ϕ) =inf_p_UX̂|X𝔼[(X-X̂)^2]
X↔ U↔X̂,
I(X;U)≤ R,
I(X̂;U)≤ R+R_c,
ϕ(p_X,p_X̂)≤ P.
Explicit lower and upper bounds on D(R,R_c,P|ϕ) are established for p_X=𝒩(μ_X,σ^2_X) when ϕ(p_X,p_X̂)=ϕ_KL(p_X̂p_X) <cit.> or ϕ(p_X,p_X̂)=W^2_2(p_X,p_X̂) <cit.>,
where
ϕ_KL(p_X̂p_X):=𝔼[logp_X̂(X̂)/p_X(X̂)]
is the Kullback-Leibler divergence and
W^2_2(p_X,p_X̂):=inf_p_XX̂∈Π(p_X,p_X̂)𝔼[(X-X̂)^2]
is the squared Wasserstein-2 distance. Let
ξ(R,R_c):=√((1-e^-2R)(1-e^-2(R+R_c))).
Moreover, let
ψ(σ_X̂):=logσ_X/σ_X̂+σ^2_X̂-σ^2_X/2σ^2_X
and
σ(P) be the unique number σ∈[0,σ_X] satisfying ψ(σ)=P.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|ϕ_KL)≤ D(R,R_c,P|ϕ_KL)≤D(R,R_c,P|ϕ_KL),
where
D(R,R_c,P|ϕ_KL):=min_σ_X̂∈[σ(P),σ_X]σ^2_X+σ^2_X̂-2σ_Xσ_X̂√((1-e^-2R)(1-e^-2(R+R_c+P-ψ(σ_X̂))))
and
D(R,R_c,P|ϕ_KL):=σ^2_X-σ^2_Xξ^2(R,R_c)+(σ(P)-σ_Xξ(R,R_c))^2_+.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|W^2_2)≤ D(R,R_c,P|W^2_2)≤D(R,R_c,P|W^2_2),
where
D(R,R_c,P|W^2_2):=min_σ_X̂∈[(σ_X-√(P))_+,σ_X]σ^2_X+σ^2_X̂-2σ_X√((1-e^-2R)(σ^2_X̂-(σ_Xe^-(R+R_c)-√(P))^2_+))
and
D(R,R_c,P|W^2_2):=σ^2_X-σ^2_Xξ^2(R,R_c)+(σ_X-√(P)-σ_Xξ(R,R_c))^2_+.
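Both pairs of bounds are easy to evaluate numerically. The following Python sketch is ours and not from the paper; the bisection for σ(P) and the grid search over σ_X̂ are implementation choices. The minimization over σ_X̂ is treated as the lower bound and the ξ-based expression as the upper bound, consistent with their P=∞ limits quoted later in the paper.

```python
import numpy as np

def xi(R, Rc):
    # xi(R, R_c) = sqrt((1 - e^{-2R}) (1 - e^{-2(R+R_c)}))
    return np.sqrt((1 - np.exp(-2 * R)) * (1 - np.exp(-2 * (R + Rc))))

def psi(s, sx):
    # psi(sigma) = log(sigma_X / sigma) + (sigma^2 - sigma_X^2) / (2 sigma_X^2)
    return np.log(sx / s) + (s ** 2 - sx ** 2) / (2 * sx ** 2)

def sigma_P(P, sx, tol=1e-12):
    # sigma(P): unique sigma in (0, sigma_X] with psi(sigma) = P (psi decreases there).
    lo, hi = 1e-12 * sx, sx
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid, sx) > P else (lo, mid)
    return 0.5 * (lo + hi)

def bounds_KL(R, Rc, P, sx, grid=20_000):
    # Bounds of the phi_KL theorem: grid minimization over sigma_Xhat in [sigma(P), sigma_X].
    s = np.linspace(sigma_P(P, sx), sx, grid)
    inner = (1 - np.exp(-2 * R)) * (1 - np.exp(-2 * (R + Rc + P - psi(s, sx))))
    lower = np.min(sx ** 2 + s ** 2 - 2 * sx * s * np.sqrt(np.clip(inner, 0.0, None)))
    upper = sx ** 2 * (1 - xi(R, Rc) ** 2) + max(sigma_P(P, sx) - sx * xi(R, Rc), 0.0) ** 2
    return lower, upper

def bounds_W2(R, Rc, P, sx, grid=20_000):
    # Bounds of the W_2^2 theorem: grid minimization over sigma_Xhat in [(sigma_X - sqrt(P))_+, sigma_X].
    s = np.linspace(max(sx - np.sqrt(P), 0.0), sx, grid)
    inner = (1 - np.exp(-2 * R)) * (
        s ** 2 - max(sx * np.exp(-(R + Rc)) - np.sqrt(P), 0.0) ** 2)
    lower = np.min(sx ** 2 + s ** 2 - 2 * sx * np.sqrt(np.clip(inner, 0.0, None)))
    upper = sx ** 2 * (1 - xi(R, Rc) ** 2) + max(sx - np.sqrt(P) - sx * xi(R, Rc), 0.0) ** 2
    return lower, upper

if __name__ == "__main__":
    print(bounds_KL(0.5, 0.25, 0.05, 1.0))   # lower bound <= upper bound
    print(bounds_W2(0.5, 0.25, 0.05, 1.0))   # lower bound <= upper bound
```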
The next three sections are devoted to investigating the tightness of these bounds, which will shed light on
rate-distortion-perception coding in general.
§ KULLBACK-LEIBLER DIVERGENCE VS. SQUARED WASSERSTEIN-2 DISTANCE
For p_X=𝒩(μ_X,σ^2_X), Talagrand's transportation inequality <cit.> states that
W^2_2(p_X,p_X̂)≤ 2σ^2_Xϕ_KL(p_X̂p_X),
which immediately implies
D(R,R_c,2σ^2_XP|W^2_2)≤ D(R,R_c,P|ϕ_KL).
Note that Talagrand's transportation inequality does not impose any assumptions on p_X̂. However, when p_X=𝒩(μ_X,σ^2_X), it suffices to consider p_X̂ with μ_X̂=μ_X and σ_X̂≤σ_X as far as D(R,R_c,P|ϕ_KL) and D(R,R_c,P|W^2_2) are concerned <cit.>. With this restriction on p_X̂, we have the following refined version of Talagrand's transportation inequality, which leads to an improvement on (<ref>).
For p_X=𝒩(μ_X,σ^2_X) and p_X̂ with μ_X̂=μ_X and σ_X̂≤σ_X,
W^2_2(p_X,p_X̂)≤ 2σ^2_X(1-e^-ϕ_KL(p_X̂p_X)).
As a consequence,
D(R,R_c,2σ^2_X(1-e^-P)|W^2_2)≤ D(R,R_c,P|ϕ_KL)
when p_X=𝒩(μ_X,σ^2_X).
See Appendix <ref>.
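For Gaussian p_X̂ with μ_X̂=μ_X and σ_X̂≤σ_X, both sides of (<ref>) have closed forms (see Lemma <ref> in the appendix), so the refined inequality, and the fact that it improves on Talagrand's original bound, can be sanity-checked numerically. A small sketch:

```python
import numpy as np

sx = 1.0
for s in np.linspace(1e-3, 1.0, 1000):
    w2 = (sx - s) ** 2                                          # W_2^2 between the two Gaussians
    kl = np.log(sx / s) + (s ** 2 - sx ** 2) / (2 * sx ** 2)    # phi_KL = psi(sigma_Xhat)
    assert w2 <= 2 * sx ** 2 * (1 - np.exp(-kl)) + 1e-12        # refined inequality
    assert 2 * sx ** 2 * (1 - np.exp(-kl)) <= 2 * sx ** 2 * kl + 1e-12  # original Talagrand bound
print("both inequalities hold on the grid")
```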
It is clear that (<ref>) and (<ref>) are stronger than their counterparts in (<ref>) and (<ref>) since 1+z≤ e^z for all z. Theorem <ref> implies that for p_X=𝒩(μ_X,σ^2_X),
every lower bound on D(R,R_c,·|W^2_2) induces a lower bound on D(R,R_c,·|ϕ_KL) and every upper bound on D(R,R_c,·|ϕ_KL) induces an upper bound on D(R,R_c,·|W^2_2); in particular, we have
D(R,R_c,P|ϕ_KL)≥D(R,R_c,2σ^2_X(1-e^-P)|W^2_2)
and
D(R,R_c,P|W^2_2)≤D(R,R_c,ν(P)|ϕ_KL),
where
ν(P) :=log2σ^2_X/(2σ^2_X-P)_+.
It is thus of considerable interest to see how these induced bounds are compared to their counterparts in Theorems <ref> and <ref>, namely,
D(R,R_c,P|ϕ_KL)≥D(R,R_c,P|ϕ_KL)
and
D(R,R_c,P|W^2_2)≤D(R,R_c,P|W^2_2).
The following result indicates that (<ref>) and (<ref>) are in general looser. In this sense, (<ref>) and (<ref>) are nondegenerate.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|ϕ_KL)≥D(R,R_c,2σ^2_X(1-e^-P)|W^2_2)
and
D(R,R_c,P|W^2_2)≤D(R,R_c,ν(P)|ϕ_KL).
See Appendix <ref>.
It can be seen from Fig. <ref> that D(R,R_c,2σ^2_X(1-e^-P)|W^2_2) is indeed a looser lower bound on D(R,R_c,P|ϕ_KL) as compared to
D(R,R_c,P|ϕ_KL) and the latter almost meets the upper bound D(R,R_c,P|ϕ_KL).
Similarly, Fig. <ref> shows that D(R,R_c,ν(P)|ϕ_KL) is indeed a looser upper bound on D(R,R_c,P|W^2_2)
as compared to D(R,R_c,P|W^2_2), especially in the low rate regime, where the latter has a diminishing gap from the lower bound D(R,R_c,P|W^2_2).
§ AN IMPROVED LOWER BOUND
The main result of this section is the following improved lower bound on D(R,R_c,P|W^2_2).
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|W^2_2)≥D'(R,R_c,P|W^2_2)≥D(R,R_c,P|W^2_2),
where
D'(R,R_c,P|W^2_2):=min_σ_X̂∈[(σ_X-√(P))_+,σ_X]sup_α>0σ^2_X+σ^2_X̂-2σ_X√((1-e^-2R)(σ^2_X̂-δ^2_+(σ_X̂,α)))
with
δ_+(σ_X̂,α):=(σ_Xe^-(R+R_c)-√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂))_+/α.
Moreover, the second inequality in (<ref>) is strict if and only if R∈(0,∞), R_c∈(0,∞), and P∈(0,σ^2_X(2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c))))).
See Appendix <ref>.
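The improved bound involves an inner supremum over α. A numerical sketch is given below (the grids and their ranges are our implementation choices, not part of the theorem); it also checks the R_c=0 reduction quoted further below.

```python
import numpy as np

def D_prime_W2(R, Rc, P, sx, n_sigma=2_000, n_alpha=4_000):
    # Grid evaluation of D'(R, R_c, P | W_2^2): min over sigma_Xhat, sup over alpha.
    sig = np.linspace(max(sx - np.sqrt(P), 0.0), sx, n_sigma)
    alpha = np.linspace(1e-3, 50.0, n_alpha)[None, :]
    s = sig[:, None]
    inner = sx ** 2 - alpha * (sx ** 2 + s ** 2 - P) + alpha ** 2 * s ** 2
    delta = np.maximum(sx * np.exp(-(R + Rc)) - np.sqrt(np.clip(inner, 0.0, None)), 0.0) / alpha
    best = delta.max(axis=1)                                    # sup over alpha
    obj = sx ** 2 + sig ** 2 - 2 * sx * np.sqrt(
        np.clip((1 - np.exp(-2 * R)) * (sig ** 2 - best ** 2), 0.0, None))
    return obj.min()

# For R_c = 0 the value should match sigma_X^2 e^{-2R} + (sigma_X e^{-R} - sqrt(P))_+^2:
R, P, sx = 0.5, 0.04, 1.0
closed_form = sx ** 2 * np.exp(-2 * R) + max(sx * np.exp(-R) - np.sqrt(P), 0.0) ** 2
print(D_prime_W2(R, 0.0, P, sx), closed_form)   # approximately equal
```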
The difference between D'(R,R_c,P|W^2_2) and D(R,R_c,P|W^2_2) against P is plotted in Fig. <ref> for the case p_X=𝒩(0,1), R=0.1, and R_c=0.1.
According to Theorem <ref>, for R∈(0,∞) and R_c∈(0,∞), we have D'(R,R_c,P|W^2_2)=D(R,R_c,P|W^2_2) when
P≥σ^2_X(2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c)))).
Setting σ^2_X=1, R=0.1, and R_c=0.1 in (<ref>) gives P⪆ 0.692, which is consistent with the result shown in Fig. <ref>. Fig. <ref> plots the difference between D'(R,R_c,P|W^2_2) and D(R,R_c,P|W^2_2) against R for the case p_X=𝒩(0,1), R_c=0.1, and P=0.1. As shown in Appendix <ref>, for R_c∈(0,∞) and P∈(0,∞], we can write
(<ref>) alternatively as
R≥
0 P≥σ^2_X,
-1/2logζ_3/ζ_2 R_c=log2, P<σ^2_X,
-1/2logζ_2-√(ζ^2_2-4ζ_1ζ_3)/2ζ_1 otherwise,
where
ζ_1:=4e^-2R_c-1,
ζ_2:=4e^-2R_c+2P/σ^2_X,
ζ_3:=(4σ^2_X-P)P/σ^4_X.
Setting σ^2_X=1, R_c=0.1, and P=0.1 in (<ref>) gives R⪆1.052, which is consistent with the result shown in Fig. <ref>.
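Both numerical values quoted in this section can be reproduced directly from the threshold σ_X^2(2-e^{-2R}-2√((1-e^{-2R})(1-e^{-2(R+R_c)}))). A small sketch (the bisection assumes the threshold is monotone in R over the range searched, which holds for these parameters):

```python
import numpy as np

def P_threshold(R, Rc, sx2=1.0):
    # Threshold on P above which D' coincides with the lower bound of Theorem <ref>.
    a, b = np.exp(-2 * R), np.exp(-2 * (R + Rc))
    return sx2 * (2 - a - 2 * np.sqrt((1 - a) * (1 - b)))

print(P_threshold(0.1, 0.1))          # approximately 0.692, as quoted above

# Smallest R with P_threshold(R, 0.1) <= 0.1, found by bisection:
lo, hi = 1e-6, 20.0
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if P_threshold(mid, 0.1) > 0.1 else (lo, mid)
print(0.5 * (lo + hi))                # approximately 1.052, as quoted above
```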
It is interesting to note that for p_X=𝒩(μ_X,σ^2_X),
D'(R,0,P|W^2_2) =D(R,0,P|W^2_2)
=σ^2_Xe^-2R+(σ_Xe^-R-√(P))^2_+,
which coincides with the minimum achievable mean squared error at rate R and squared Wasserstein-2 perception loss P when the reconstruction sequence is not required to be i.i.d. <cit.>. As shown in the next section, D(R,0,P|W^2_2) and D'(R,0,P|W^2_2) are actually strictly below D(R,0,P|W^2_2) for sufficiently large P. Therefore, a price has to be paid for enforcing the i.i.d. reconstruction constraint.
This should be contrasted with the case R_c=∞ for which it is known that the rate-distortion-perception tradeoff remains the same regardless of whether the reconstruction sequence is required to be i.i.d. or not <cit.> <cit.>.
§ CONNECTION WITH ENTROPY-CONSTRAINED SCALAR QUANTIZATION
This section is devoted to investigating the tightness of bounds in Theorems <ref>, <ref>, and <ref>. We shall focus on the weak perception constraint regime where P is sufficiently large.
To this end, it is necessary to first gain a better understanding of the properties of D(R,R_c,P|ϕ). Clearly, the map (R,R_c,P)↦ D(R,R_c,P|ϕ) is monotonically decreasing in each of its variables. The following result provides further information regarding D(R,R_c,P|ϕ) under certain conditions.
For p_X with bounded support, if p_X̂↦ϕ(p_X, p_X̂) is lower semicontinuous in the topology of weak convergence[It is known that
p_X̂↦ϕ_KL(p_X̂p_X) <cit.> and p_X̂↦ W^2_2(p_X,p_X̂) <cit.> are lower semicontinuous in the topology of weak convergence.], then the infimum in (<ref>) can be attained and the map (R,R_c,P)↦ D(R,R_c,P|ϕ) is right-continuous in each of its variables.
See Appendix <ref>.
Theorem <ref> is not applicable when p_X is a Gaussian distribution. However, it will be seen that assuming the attainability of the infimum in (<ref>) greatly simplifies the reasoning and helps develop the intuition behind the rigorous proof of the main result in this section (see Theorem <ref>).
The next two results deal with the special cases ϕ(p_X,p_X̂)=ϕ_KL(p_X̂p_X) and ϕ(p_X,p_X̂)=W^2_2(p_X,p_X̂), respectively.
For p_X=𝒩(μ_X,σ^2_X) and (R,R_c)∈[0,∞]^2, the map P↦ D(R,R_c,P|ϕ_KL) is continuous[A map x↦ f(x) is said to be continuous at x=∞ if lim_x→∞f(x)=f(∞).] for P∈[0,∞].
See Appendix <ref>.
For p_X with 𝔼[X^2]<∞ and (R,R_c)∈[0,∞]^2, the map P↦ D(R,R_c,P|W^2_2) is continuous for P∈[0,∞].
See Appendix <ref>.
Now consider the extreme case P=∞.
In light of Theorem <ref>, for p_X with 𝔼[X^2]<∞,
D(R,R_c,∞|ϕ) =inf_p_UX̂|X𝔼[(X-X̂)^2]
X↔ U↔X̂,
I(X;U)≤ R,
I(X̂;U)≤ R+R_c,
which does not depend on the choice of ϕ.
Moreover, it can be verified that for p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,∞|ϕ_KL) =D(R,R_c,∞|W^2_2)
=D'(R,R_c,∞|W^2_2)
=σ^2_Xe^-2R
and
D(R,R_c,∞|ϕ_KL)
=D(R,R_c,∞|W^2_2)
=σ^2_X-σ^2_Xξ^2(R,R_c).
Therefore, we shall simply denote D(R,R_c,∞|ϕ_KL) and D(R,R_c,∞|W^2_2) by D(R,R_c,∞), denote D(R,R_c,∞|ϕ_KL), D(R,R_c,∞|W^2_2), and D'(R,R_c,∞|W^2_2) by D(R,R_c,∞), and denote D(R,R_c,∞|ϕ_KL) and
D(R,R_c,∞|W^2_2) by D(R,R_c,∞).
It will be seen that neither D(R,R_c,∞) nor D(R,R_c,∞) is tight in general. This fact can be established by exploiting the connection between rate-distortion-perception coding and entropy-constrained scalar quantization. For p_X with 𝔼[X^2]<∞, let[The existence of a minimizer for the optimization problem in (<ref>) can be proved via an argument similar to that for Theorem <ref>. Here, it suffices to assume 𝔼[X^2]<∞ since the bounded support condition in Theorem <ref> is only needed to address the intricacy caused by the Markov chain constraint (<ref>).]
D_e(R,R_c):=min_p_X̂|X: I(X;X̂)≤ R, H(X̂)≤ R+R_c𝔼[(X-X̂)^2],
which is the counterpart of D(R,R_c,∞) with the decoder restricted to be deterministic <cit.>.
When R_c=0, the constraint I(X;X̂)≤ R is redundant; as a consequence,
D_e(R,0)=min_p_X̂|X: H(X̂)≤ R𝔼[(X-X̂)^2],
which is simply the distortion-rate function for
entropy-constrained scalar quantization. Note that
D_e(R,0)=min_p_X̂: H(X̂)≤ RW^2_2(p_X,p_X̂)
as every coupling of p_X and p_X̂ induces a (possibly randomized) scalar quantizer. When p_X is absolutely continuous with respect to the Lebesgue measure, W^2_2(p_X,p_X̂) is attained by a coupling that transforms p_X to p_X̂ via a determinstic map <cit.>,
so there is no loss of optimality in restricting the quantizer to be deterministic. Moreover, if p_X has a piecewie monotone and piecewise continuous density, then we can further restrict the deterministic quantizer to be regular <cit.>. On the other hand, when R_c=∞, the constraint H(X̂)≤ R+R_c is redundant; as a consequence,
D_e(R,∞)=min_p_X̂|X: I(X;X̂)≤ R𝔼[(X-X̂)^2],
which is simply the classical distortion-rate function.
The following result reveals that D_e(R,R_c) is intimately related to D(R,R_c,∞).
For p_X with 𝔼[X^2]<∞,
D_e(R,R_c)≥ D(R,R_c,∞)≥ D_e(R,∞).
Moreover, if the infimum in (<ref>) can be attained[According to Theorem <ref>, this assumption holds for p_X with bounded support.], then
D_e(R,R_c)> D_e(R,∞) ⇔ D(R,R_c,∞)>D_e(R,∞).
See Appendix <ref>.
The connection revealed in Theorem <ref> enables us to derive the following result, which indicates that
D(R,R_c,∞) and D(R,R_c,∞) are not tight in general.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,∞)>D(R,R_c,∞)
when R∈(0,∞) and R_c∈[0,∞), and
D(R,R_c,∞)<D(R,R_c,∞)
when R_c∈[0,∞) and R∈(0,χ(R_c)), where χ(R_c) is a positive threshold that depends on R_c.
See Appendix <ref>.
For p_X=𝒩(μ_X,σ^2_X), we exhibit below an explicit improvement over D(R,0,∞) in the low rate regime. Consider the following binary quantizer:
X̂=μ_X-σ_Xe^-θ^2/2/√(2π)Q(θ) X-μ_X/σ_X<θ,
μ_X+σ_Xe^-θ^2/2/√(2π)(1-Q(θ)) X-μ_X/σ_X≥θ,
where θ≥ 0 and Q(θ):=1/√(2π)∫_-∞^θe^-x^2/2dx. It can be verified
that
𝔼[(X-X̂)^2] =σ^2_X-σ^2_Xe^-θ^2/2π Q(θ)(1-Q(θ))
=:D(θ)
and
H(X̂) =-Q(θ)log Q(θ)-(1-Q(θ))log(1-Q(θ))
=:R(θ).
For R∈(0,log 2], define D_e(R,0)
via the parametric equations
D_e(R,0)=D(θ) and R=R(θ).
Clearly, D_e(R,0) is an upper bound on D_e(R,0) and consequently is also an upper bound on D(R,0,∞) in light of Theorem <ref>.
It can be seen from Fig. <ref> that D_e(R,0)<D(R,0,∞) for R∈(0,log 2]. In particular, we have
D_e(log 2,0)=π-2/πσ^2_X≈ 0.3634σ^2_X
while
D(log 2, 0,∞)=7/16σ^2_X=0.4375σ^2_X.
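The parametric curve (R(θ), D(θ)) and the two numbers quoted above are easy to reproduce; a small Python sketch (recall that Q(θ), as defined above, is the standard normal cumulative distribution function, and σ_X^2 is normalized to 1):

```python
from math import erf, sqrt, pi, exp, log

def Q(t):
    # Q(theta) as defined above: the standard normal CDF
    return 0.5 * (1 + erf(t / sqrt(2)))

def D_theta(t, sx2=1.0):
    # E[(X - Xhat)^2] = sigma_X^2 - sigma_X^2 e^{-theta^2} / (2 pi Q(theta)(1 - Q(theta)))
    q = Q(t)
    return sx2 - sx2 * exp(-t ** 2) / (2 * pi * q * (1 - q))

def R_theta(t):
    # H(Xhat) = -Q log Q - (1 - Q) log(1 - Q), in nats
    q = Q(t)
    return -q * log(q) - (1 - q) * log(1 - q)

print(D_theta(0.0), (pi - 2) / pi)    # both approximately 0.3634
print(R_theta(0.0), log(2))           # the rate-log 2 point of the curve

a = exp(-2 * log(2))                  # the Gaussian-reconstruction upper bound at
print(1.0 - (1 - a) * (1 - a))        # R = log 2, R_c = 0, P = infinity: 7/16 = 0.4375
```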
By contrast, although D(R,R_c,∞) is known to be loose for R∈(0,∞) and R_c∈[0,∞), no explicit improvement has been found (even when R_c=0). So D(R,0,∞) could be situated anywhere between D_e(R,0) (inclusive) and D(R,0,∞) (exclusive except at R=0) in Fig. <ref>.
As shown by the following results, Theorem <ref> has implications to the weak perception constraint regime in general.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|ϕ_KL)>D(R,R_c,P|ϕ_KL)
when R∈(0,∞), R_c∈[0,∞), and P is sufficiently large; moreover,
D(R,R_c,P|ϕ_KL)<D(R,R_c,P|ϕ_KL),
when R_c∈[0,∞), R∈(0,χ(R_c)), and P is sufficiently large.
See Appendix <ref>.
For p_X=𝒩(μ_X,σ^2_X),
D(R,R_c,P|W^2_2)>D'(R,R_c,P|W^2_2)
when R∈(0,∞), R_c∈[0,∞), and P∈(γ'(R,R_c),∞], where γ'(R,R_c) is a positive threshold that depends on (R,R_c) and γ'(R,R_c)<P'(R,R_c):=min{P∈[0,∞]:D'(R,R_c,P|W^2_2)=D'(R,R_c,∞|W^2_2)}[It can be verified that P'(R,R_c)=σ^2_X(2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c)))) for p_X=𝒩(μ_X,σ^2_X).]; moreover,
D(R,R_c,P|W^2_2)<D(R,R_c,P|W^2_2)
when R_c∈[0,∞), R∈(0,χ(R_c)), and P∈(γ(R,R_c),∞], where γ(R,R_c) is a positive threshold that depends on (R,R_c) and γ(R,R_c)< P(R,R_c):=min{P∈[0,∞]:D(R,R_c,P|W^2_2)=D(R,R_c,∞|W^2_2)}[For p_X=𝒩(μ_X,σ^2_X), we have P(R,R_c)>0 when R∈[0,∞) and R_c∈[0,∞]
since D(R,R_c,0|W^2_2)=D(R,R_c,0|W^2_2)>D(R,R_c,∞|W^2_2)≥ D(R,R_c,∞|W^2_2).].
See Appendix <ref>.
According to <cit.>, the upper bounds D(R,R_c,P|ϕ_KL) and D(R,R_c,P|W^2_2) are tight when the reconstruction distribution p_X̂ is restricted to be Gaussian. In light of Corollaries <ref> and <ref>, this restriction incurs a penalty in the weak perception constraint regime. In fact, the connection with entropy-constrained scalar quantization suggests that discrete reconstruction distributions might be more preferable in this regime. This is somewhat surprising since D(R,R_c,P|ϕ) admits a single-letter characterization, which is typically associated with a Gaussian extremal inequality <cit.>, especially considering the fact that both ϕ_KL(p_X̂p_X) and W^2_2(p_X,p_X̂) favor Gaussian p_X̂ when p_X is a Gaussian distribution (see Lemma <ref>).
§ CONCLUSION
We have investigated and improved the existing bounds on the quadratic Gaussian distortion-rate-perception function with limited common randomness for the case where the perception measure is given by the Kullback-Leibler divergence or the squared Wasserstein-2 distance. Along the way, a refined version of Talagrand's transportation inequality is established and the connection between rate-distortion-perception coding and entropy-constrained scalar quantization is revealed.
Note that the fundamental rate-distortion-perception tradeoff depends critically on how the perception constraint is formulated. Our work focuses on a particular formulation where the reconstruction sequence is required to be i.i.d. Therefore, great caution should be exercised when utilizing and interpreting the results in the present paper. It is of considerable interest to conduct a comprehensive comparison of different formulations regarding their impacts on the information-theoretic performance limit of rate-distortion-perception coding.
§ PROOF OF THEOREM <REF>
We need the following result <cit.> concerning the Gaussian extremal property of the Kullback-Leibler divergence and the squared Wasserstein-2 distance.
For p_X=𝒩(μ_X,σ^2_X) and p_X̂ with 𝔼[X̂^2]<∞,
ϕ_KL(p_X̂p_X)
≥ϕ_KL(p_X̂^Gp_X)
=logσ_X/σ_X̂+(μ_X-μ_X̂)^2+σ^2_X̂-σ^2_X/2σ^2_X
and
W^2_2(p_X,p_X̂) ≥ W^2_2(p_X,p_X̂^G)
=(μ_X-μ_X̂)^2+(σ_X-σ_X̂)^2,
where p_X̂^G:=𝒩(μ_X̂,σ^2_X̂).
Lemma <ref> indicates that when the reference distribution is Gaussian, replacing the other distribution with its Gaussian counterpart leads to reductions in both the Kullback-Leibler divergence and the squared Wasserstein-2 distance.
These reductions turn out to be quantitatively related as shown by the next result.
For p_X=𝒩(μ_X,σ^2_X) and p_X̂ with 𝔼[X̂^2]<∞,
W^2_2(p_X,p_X̂)-W^2_2(p_X,p_X̂^G)≤2σ_Xσ_X̂(1-e^-(ϕ_KL(p_X̂p_X)-ϕ_KL(p_X̂^Gp_X))).
Note that
W^2_2(p_X,p_X̂) =(μ_X-μ_X̂)^2+W^2_2(p_X-μ_X,p_X̂-μ_X̂)
=(μ_X-μ_X̂)^2+σ^2_XW^2_2(p_σ^-1_X(X-μ_X), p_σ^-1_X(X̂-μ_X̂))
(a)≤(μ_X-μ_X̂)^2+σ^2_X+σ^2_X̂-2σ^2_X√(1/2π ee^2h(σ^-1_XX̂))
(b)=W^2_2(p_X,p_X̂^G)+2σ_Xσ_X̂-2σ^2_X√(1/2π ee^2h(σ^-1_XX̂)),
where (a) is due to <cit.> and (b) is due to Lemma <ref>. Moreover,
h(σ^-1_XX̂) =h(X̂)-logσ_X
=1/2log2π eσ^2_X̂/σ^2_X-ϕ_KL(p_X̂p_X)+ϕ_KL(p_X̂^Gp_X).
Substituting (<ref>) into (<ref>) proves Lemma <ref>.
Now we proceed to prove Theorem <ref>. In view of Lemmas <ref> and <ref>,
W^2_2(p_X,p_X̂) ≤max_μ,ση(μ,σ)
μ=μ_X,
σ≤σ_X,
(μ_X-μ)^2/2σ^2_X+ψ(σ)≤ϕ_KL(p_X̂p_X),
where
η(μ,σ) :=-2σ^2_Xe^(μ_X-μ)^2+σ^2-σ^2_X/2σ^2_Xe^-ϕ_KL(p_X̂p_X)+(μ_X-μ)^2+σ^2_X+σ^2
and ψ(·) is defined in (<ref>).
Since ψ(σ) decreases monotonically from ∞ to 0 as σ varies from 0 to σ_X and increases monotonically from 0 to ∞ as σ varies from σ_X to ∞, there must exist σ̲≤σ_X and σ̄≥σ_X satisfying
ψ(σ̲)=ψ(σ̄)=ϕ_KL(p_X̂p_X).
Note that (<ref>)–(<ref>) can be written compactly as
W^2_2(p_X,p_X̂)≤max_σ∈[σ̲,σ_X]η(μ_X,σ).
For σ∈[σ̲,σ_X],
∂/∂ση(μ_X,σ) =-2σ e^σ^2-σ^2_X/2σ^2_Xe^-ϕ_KL(p_X̂p_X)+2σ
≥ 0,
which implies the maximum in (<ref>) is attained at σ=σ_X. So we have
W^2_2(p_X,p_X̂) ≤η(μ_X,σ_X)
=2σ^2_X(1-e^-ϕ_KL(p_X̂p_X)).
This proves Theorem <ref>.
Interestingly, Talagrand's transportation inequality (<ref>)
corresponds to the relaxed version without the constraints (<ref>) and (<ref>), i.e.,
W^2_2(p_X,p_X̂) ≤max_μ,ση(μ,σ)
(μ_X-μ)^2/2σ^2_X+ψ(σ)≤ϕ_KL(p_X̂p_X).
We now prove this. It can be verified that
∂/∂(μ_X-μ)^2η(μ,σ)=-e^(μ_X-μ)^2+σ^2-σ^2_X/2σ^2_Xe^-ϕ_KL(p_X̂p_X)+1.
Given σ<σ̲, there is no μ satisfying (<ref>).
Given σ∈[σ̲,σ_X], for μ satisfying (<ref>), we have
∂/∂(μ_X-μ)^2η(μ,σ)≥ 0,
which implies that the maximum value of η(μ,σ) over μ satisfying (<ref>) is attained when
logσ_X/σ+(μ_X-μ)^2+σ^2-σ^2_X/2σ^2_X=ϕ_KL(p_X̂p_X).
Therefore, for σ∈[σ̲,σ_X],
max_μ:(<ref>)η(μ,σ)=κ(σ),
where
κ(σ):=2σ^2_X(ϕ_KL(p_X̂p_X)-logσ_X/σ+1)-2σ_Xσ.
Since the maximum value of κ(σ) over σ∈[σ̲,σ_X] is attained at σ=σ_X, it follows that
max_σ∈[σ̲,σ_X]max_μ:(<ref>)η(μ,σ)=2σ^2_Xϕ_KL(p_X̂p_X).
Given σ∈(σ_X,√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X)), for μ satisfying (<ref>), we have
∂/∂(μ_X-μ)^2η(μ,σ)≥ 0 (μ_X-μ)^2+σ^2-σ^2_X/2σ^2_X≤ϕ_KL(p_X̂p_X),
<0 (μ_X-μ)^2+σ^2-σ^2_X/2σ^2_X>ϕ_KL(p_X̂p_X),
which implies that the maximum value of η(μ,σ) over μ satisfying (<ref>) is attained when
(μ_X-μ)^2+σ^2-σ^2_X/2σ^2_X=ϕ_KL(p_X̂p_X).
Therefore, for σ∈(σ_X,√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X)),
max_μ:(<ref>)η(μ,σ)=2σ^2_Xϕ_KL(p_X̂p_X).
As a consequence,
max_σ∈(σ_X,√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X))max_μ:(<ref>)η(μ,σ)=2σ^2_Xϕ_KL(p_X̂p_X).
Given σ∈[√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X),σ̄], for μ satisfying (<ref>), we have
∂/∂(μ_X-μ)^2η(μ,σ)≤ 0,
which implies that the maximum value of η(μ,σ) over μ satisfying (<ref>) is attained when
(μ_X-μ)^2=0,μ=μ_X.
Therefore, for σ∈[√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X),σ̄],
max_μ:(<ref>)η(μ,σ)=κ'(σ),
where
κ'(σ):=-2σ^2_Xe^σ^2-σ^2_X/2σ^2_Xe^-ϕ_KL(p_X̂p_X)+σ^2_X+σ^2.
Since the maximum value of κ'(σ) over σ∈[√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X),σ̄] is attained at σ=√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X), it follows that
max_σ∈[√(2σ^2_Xϕ_KL(p_X̂p_X)+σ^2_X),σ̄]max_μ:(<ref>)η(μ,σ)=2σ^2_Xϕ_KL(p_X̂p_X).
Given σ>σ̄, there is no μ satisfying (<ref>).
Combining (<ref>), (<ref>), and (<ref>) proves (<ref>).
§ PROOF OF THEOREM <REF>
In view of the definition of D(R,R_c,P|ϕ_KL) and D(R,R_c,2σ^2_X(1-e^-P)|W^2_2), for the purpose of proving (<ref>), it suffices to show
[σ(P),σ_X]⊆[(σ_X-√(2σ^2_X(1-e^-P)))_+,σ_X]
and
σ^2_X̂-(σ_Xe^-(R+R_c)-√(2σ^2_X(1-e^-P)))_+^2≥σ^2_X̂-σ^2_X̂e^-2(R+R_c+P-ψ(σ_X̂))
for σ_X̂∈[σ(P),σ_X].
Invoking (<ref>) with p_X̂=𝒩(μ_X,σ(P)) (see also Lemma <ref> for the expressions of the Kullback-Leibler divergence and the squared Wasserstein-2 distance between two Gaussian distributions) gives
(σ_X-σ(P))^2≤ 2σ^2_X(1-e^-P),
from which (<ref>) follows immediately.
Note that (<ref>) is trivially true when
e^-(R+R_c)≤√(2(1-e^-P)). When e^-(R+R_c)>√(2(1-e^-P)), it can be written equivalently as
√(2(1-e^-P))≥ e^-(R+R_c)(1-e^-(P+σ^2_X-σ^2_X̂/2σ^2_X)).
Since e^-(R+R_c)≤ 1 and
1-e^-(P+σ^2_X-σ^2_X̂/2σ^2_X)≤1-e^-(P+σ^2_X-σ^2(P)/2σ^2_X)
for σ_X̂∈[σ(P),σ_X], it suffices to show
√(2(1-e^-P))≥1-e^-(P+σ^2_X-σ^2(P)/2σ^2_X).
According to the definition of σ(P),
P=logσ_X/σ(P)+σ^2(P)-σ^2_X/2σ^2_X.
Substituting (<ref>) into (<ref>) gives
√(2(1-e^logσ(P)/σ_X-σ^2(P)/2σ^2_X+1/2))≥ 1-σ(P)/σ_X.
We can rewrite (<ref>) as
τ(β)≥ 0,
where
τ(β):=1-2β e^-β^2/2+1/2+2β-β^2
with β:=σ(P)/σ_X.
Note that β∈[0,1]. We have
dτ(β)/dβ =-2e^-β^2/2+1/2+2β^2e^-β^2/2+1/2+2-2β
≤ -2(1-β^2)+2-2β
=-2(1-β)β
≤ 0.
Since τ(1)=0, it follows that τ(β)≥ 0 for β∈[0,1], which verifies (<ref>) and consequently proves (<ref>).
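The claim τ(β)≥ 0 on [0,1] can also be confirmed numerically; a short check:

```python
import numpy as np

beta = np.linspace(1e-9, 1.0, 100_000)
tau = 1 - 2 * beta * np.exp(-beta ** 2 / 2 + 0.5) + 2 * beta - beta ** 2
print(tau.min() >= -1e-12)   # True: tau(beta) >= 0 on [0, 1], with tau(1) = 0
```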
Now we proceed to prove (<ref>), which is equivalent to
D(R,R_c,2σ^2_X(1-e^-P)|W^2_2)≤D(R,R_c,P|ϕ_KL).
Since D(R,R_c,P|ϕ_KL)=D(R,R_c,(σ_X-σ(P))^2|W^2_2), it suffices to show
(σ_X-σ(P))^2≤ 2σ^2_X(1-e^-P),
i.e.,
P≥log2σ^2_X/σ^2_X-σ^2(P)+2σ_Xσ(P).
Substituting (<ref>) into (<ref>) and rearranging the inequality yields
logσ^2_X-σ^2(P)+2σ_Xσ(P)/2σ_Xσ(P)≥σ^2_X-σ^2(P)/2σ^2_X,
which is indeed true since
logσ^2_X-σ^2(P)+2σ_Xσ(P)/2σ_Xσ(P) (a)≥ 1-2σ_Xσ(P)/σ^2_X-σ^2(P)+2σ_Xσ(P)
=σ^2_X-σ^2(P)/σ^2_X-σ^2(P)+2σ_Xσ(P)
≥σ^2_X-σ^2(P)/2σ^2_X,
where (a) is due to
log z≥ 1-1/z for z>0.
This completes the proof of (<ref>).
§ PROOF OF THEOREM <REF>
It is known <cit.> that
D(R,R_c,P|W^2_2)≥ inf_p_X̂σ^2_X+σ^2_X̂-2σ_X√((1-e^-2R)(σ^2_X̂-D(R+R_c|p_X̂)))
μ_X̂=μ_X,
σ_X̂≤σ_X,
W^2_2(p_X,p_X̂)≤ P,
where
D(R+R_c|p_X̂):=inf_p_Ŷ|X̂:I(X̂;Ŷ)≤ R+R_c𝔼[(X̂-Ŷ)^2].
In light of Lemma <ref>, the constraints (<ref>)–(<ref>) imply σ_X̂∈[(σ_X-√(P))_+,σ_X]. The following result provides a lower bound on D(R+R_c|p_X̂) and proves
D(R,R_c,P|W^2_2)≥inf_σ_X̂∈[(σ_X-√(P))_+,σ_X]sup_α>0σ^2_X+σ^2_X̂-2σ_X√((1-e^-2R)(σ^2_X̂-δ^2_+(σ_X̂,α))).
For p_X=𝒩(μ_X,σ^2_X) and p_X̂ with W^2_2(p_X,p_X̂)≤ P,
D(R+R_c|p_X̂)≥sup_α>0(σ_Xe^-(R+R_c)-G(α))^2_+/α^2,
where
G(α):=√(σ^2_X-α((μ_X-μ_X̂)^2+σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂).
First let p_X and p_X̂ be coupled according to the joint distribution attaining W^2_2(p_X,p_X̂).
Then add Ŷ into the probability space such that X↔X̂↔Ŷ form a Markov chain and I(X̂;Ŷ)≤ R+R_c. For any α>0,
𝔼[((X-μ_X)-α(Ŷ-μ_Ŷ))^2]
=𝔼[((X-μ_X)-α(X̂-μ_X̂))^2]+α^2𝔼[((X̂-μ_X̂)-(Ŷ-μ_Ŷ))^2]+2α𝔼[((X-μ_X)-(X̂-μ_X̂))((X̂-μ_X̂)-(Ŷ-μ_Ŷ))]
≤(√(𝔼[((X-μ_X)-α(X̂-μ_X̂))^2])+α√(𝔼[((X̂-μ_X̂)-(Ŷ-μ_Ŷ))^2]))^2
≤(√(𝔼[((X-μ_X)-α(X̂-μ_X̂))^2])+α√(𝔼[(X̂-Ŷ)^2]))^2
=(√(σ^2_X-2αρσ_Xσ_X̂+α^2σ^2_X̂)+α√(𝔼[(X̂-Ŷ)^2]))^2,
where ρ denotes the correlation coefficient of X and X̂. On the other hand,
𝔼[((X-μ_X)-α(Ŷ-μ_Ŷ))^2] (a)≥σ^2_Xe^-2I(X-μ_X;α(Ŷ-μ_Ŷ))
=σ^2_Xe^-2I(X;Ŷ)
(b)≥σ^2_Xe^-2I(X̂;Ŷ)
≥σ^2_Xe^-2(R+R_c),
where (a) and (b) are due to the Shannon lower bound <cit.> and the data processing inequality <cit.>, respectively.
Combining (<ref>) and (<ref>) yields
𝔼[(X̂-Ŷ)^2]≥(σ_Xe^-(R+R_c)-√(σ^2_X-2αρσ_Xσ_X̂+α^2σ^2_X̂))^2_+/α^2.
It can be verified that
P ≥ W^2_2(p_X,p_X̂)
=𝔼[(X-X̂)^2]
=(μ_X-μ_X̂)^2+𝔼[((X-μ_X)-(X̂-μ_X̂))^2]
=(μ_X-μ_X̂)^2+σ^2_X-2ρσ_Xσ_X̂+σ^2_X̂,
which implies
2ρσ_Xσ_X̂≥(μ_X-μ_X̂)^2+σ^2_X+σ^2_X̂-P.
Substituting (<ref>) into (<ref>) proves Lemma <ref>.
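For Gaussian p_X̂ with μ_X̂=μ_X, the distortion-rate function D(R+R_c|p_X̂)=σ_X̂^2 e^{-2(R+R_c)} is available in closed form, so the bound of the lemma can be checked directly. A small sketch (the grid search over α is an implementation choice); in this Gaussian example the supremum actually equals σ_X̂^2 e^{-2(R+R_c)}:

```python
import numpy as np

def lemma_lower_bound(r, sx, sxh, P, mu_diff=0.0, n_alpha=200_000):
    # sup over alpha > 0 of (sigma_X e^{-(R+R_c)} - G(alpha))_+^2 / alpha^2, with
    # G(alpha) = sqrt(sigma_X^2 - alpha ((mu_X - mu_Xhat)^2 + sigma_X^2 + sigma_Xhat^2 - P)
    #                 + alpha^2 sigma_Xhat^2).
    alpha = np.linspace(1e-3, 20.0, n_alpha)
    c = mu_diff ** 2 + sx ** 2 + sxh ** 2 - P
    G = np.sqrt(np.clip(sx ** 2 - alpha * c + alpha ** 2 * sxh ** 2, 0.0, None))
    return (np.maximum(sx * np.exp(-r) - G, 0.0) ** 2 / alpha ** 2).max()

sx, sxh, r = 1.0, 0.6, 0.8                      # r stands for R + R_c
P = (sx - sxh) ** 2                             # W_2^2 between the two Gaussians
exact = sxh ** 2 * np.exp(-2 * r)               # Gaussian distortion-rate function of p_Xhat
print(lemma_lower_bound(r, sx, sxh, P), exact)  # lower bound <= exact; equal here
```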
To establish the first inequality in (<ref>), we shall demonstrate that “inf" in (<ref>) can be replaced by “min". It suffices to consider the case R∈(0,∞) and R_c∈[0,∞) since otherwise the infimum is clearly attainable.
The problem boils down to showing that the map σ_X̂↦sup_α>0δ_+(σ_X̂,α) is continuous for σ_X̂∈[(σ_X-√(P))_+,σ_X].
Obviously, sup_α>0δ_+(σ_X̂,α)=0 if and only if sup_α>0δ(σ_X̂,α)≤ 0, where
δ(σ_X̂,α):=σ_Xe^-(R+R_c)-√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂)/α.
Note that sup_α>0δ(σ_X̂,α)≤ 0 is equivalent to
P≥sup_α>0σ^2_X+σ^2_X̂-σ^2_X(1-e^-2(R+R_c))/α-ασ^2_X̂.
Since
sup_α>0σ^2_X+σ^2_X̂-σ^2_X(1-e^-2(R+R_c))/α-ασ^2_X̂=σ^2_X+σ^2_X̂-2σ_Xσ_X̂√(1-e^-2(R+R_c)),
one can rewrite (<ref>) as
P≥σ^2_X+σ^2_X̂-2σ_Xσ_X̂√(1-e^-2(R+R_c)).
On the other hand, we have
sup_α>0δ_+(σ_X̂,α)=sup_α>0δ(σ_X̂,α)>0 when
P<σ^2_X+σ^2_X̂-2σ_Xσ_X̂√(1-e^-2(R+R_c)).
If σ_X̂=σ_X-√(P), then
δ(σ_X̂,α)=σ_Xe^-(R+R_c)-|σ_X-ασ_X̂|/α
and sup_α>0δ(σ_X̂,α) is attained at[It follows by σ_X̂=σ_X-√(P) and (<ref>) that σ_X̂>0.]
α=σ_X/σ_X̂.
If σ_X̂>σ_X-√(P), then
σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂>0
for α>0.
As shown below, ∂/∂αδ(σ_X̂,α)=0 has a unique solution, denoted as α̂, for α>0.
It can be verified that
∂/∂αδ(σ_X̂,α)=α(σ^2_X+σ^2_X̂-P-2ασ^2_X̂)/2α^2√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂)-σ_Xe^-(R+R_c)-√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂)/α^2.
Setting ∂/∂αδ(σ_X̂,α)=0 gives
2σ^2_X-α(σ^2_X+σ^2_X̂-P)=2σ_Xe^-(R+R_c)√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂).
Note that (<ref>) has a solution in (0,2σ^2_X/σ^2_X+σ^2_X̂-P) since its left-hand side is greater than its right-hand side when α=0 and is less than its right-hand side when α=2σ^2_X/σ^2_X+σ^2_X̂-P. By taking the square of both sides of (<ref>) and simplifying the expression, we get
α^2((σ^2_X+σ^2_X̂-P)^2-4σ^2_Xσ^2_X̂e^-2(R+R_c))-4ασ^2_X(σ^2_X+σ^2_X̂-P)(1-e^-2(R+R_c))+4σ^4_X(1-e^-2(R+R_c))=0.
If σ^2_X+σ^2_X̂-P=2σ_Xσ_X̂e^-(R+R_c), then (<ref>) has only one solution, given by
α̂:=σ^2_X/σ^2_X+σ^2_X̂-P,
which is also the unique solution to ∂/∂αδ(σ_X̂,α)=0.
If σ^2_X+σ^2_X̂-P<2σ_Xσ_X̂e^-(R+R_c), then (<ref>) has two solutions with different signs and only the positive one, given by
α̂ :=2σ^2_X(σ^2_X+σ^2_X̂-P)(1-e^-2(R+R_c))-2σ^2_Xe^-(R+R_c)√((4σ^2_Xσ^2_X̂-(σ^2_X+σ^2_X̂-P)^2)(1-e^-2(R+R_c)))/(σ^2_X+σ^2_X̂-P)^2-4σ^2_Xσ^2_X̂e^-2(R+R_c),
is the solution to ∂/∂αδ(σ_X̂,α)=0 for α>0. If σ^2_X+σ^2_X̂-P>2σ_Xσ_X̂e^-(R+R_c), then (<ref>) has two positive solutions and only the small one, also given by (<ref>), is the solution to ∂/∂αδ(σ_X̂,α)=0.
Indeed, the large one is a solution to the following equation obtained by negating the left-hand side of (<ref>):
α(σ^2_X+σ^2_X̂-P)-2σ^2_X=2σ_Xe^-(R+R_c)√(σ^2_X-α(σ^2_X+σ^2_X̂-P)+α^2σ^2_X̂).
This can be verified by noticing that (<ref>) has a solution in (2σ^2_X/σ^2_X+σ^2_X̂-P,∞) since its left-hand side is less than its right-hand side when α=2σ^2_X/σ^2_X+σ^2_X̂-P and is greater than its right-hand side when α is sufficiently large. For σ_X̂ satisfying σ^2_X+σ^2_X̂-P=2σ_Xσ_X̂e^-(R+R_c), we have
lim_ϵ→02σ^2_X(σ^2_X+σ^2_X̂(ϵ)-P)(1-e^-2(R+R_c))-2σ^2_Xe^-(R+R_c)√((4σ^2_Xσ^2_X̂(ϵ)-(σ^2_X+σ^2_X̂(ϵ)-P)^2)(1-e^-2(R+R_c)))/(σ^2_X+σ^2_X̂(ϵ)-P)^2-4σ^2_Xσ^2_X̂(ϵ)e^-2(R+R_c)
=σ^2_X/σ^2_X+σ^2_X̂-P,
where σ_X̂(ϵ)=σ_X̂+ϵ.
Moreover, setting σ_X̂=σ_X-√(P) in (<ref>) gives α̂=σ_X/σ_X̂.
Therefore, (<ref>) and (<ref>) can be viewed as the degenerate versions of (<ref>).
Since δ(σ_X̂,α)<0 when α is either close to zero from the positive side or sufficiently large, α̂ must be the unique maximizer of both δ(σ_X̂,α) and δ_+(σ_X̂,α)
for α>0.
This implies the continuity of σ_X̂↦sup_α>0δ_+(σ_X̂,α) for σ_X̂ over the region defined by (<ref>).
It remains to show that δ_+(σ_X̂,α̂)→ 0 as σ_X̂, confined to the region defined by (<ref>), converges to some σ satisfying P=σ^2_X+σ^2-2σ_Xσ√(1-e^-2(R+R_c)). First consider the scenario where σ=0, which implies P=σ^2_X. We have α̂→∞ as σ_X̂→ 0, and consequently
lim_σ_X̂→0δ_+(σ_X̂,α̂)
=lim_σ_X̂→0(σ_Xe^-(R+R_c)/α-√(σ^2_X/α^2-σ^2_X+σ^2_X̂-P/α+σ^2_X̂))_+
=0.
Next consider the scenario where σ>0. We have α̂→σ^2_X+σ^2-P/2σ^2 as σ_X̂→σ, and consequently
lim_σ_X̂→σδ_+(σ_X̂,α̂) =δ_+(σ,σ^2_X+σ^2-P/2σ^2)
=0.
This completes the proof of the first inequality in (<ref>).
The second inequality in (<ref>) follows from the fact that
D(R,R_c,P|W^2_2)=min_σ_X̂∈[(σ_X-√(P))_+,σ_X]σ^2_X+σ^2_X̂-2σ_X√((1-e^-2R)(σ^2_X̂-δ^2_+(σ_X̂,1))).
Now we proceed to identify the sufficient and necessary condition under which this inequality is strict. It suffices to consider the case R∈(0,∞) and R_c∈[0,∞) since otherwise D'(R,R_c,P|W^2_2) clearly coincides with D(R,R_c,P|W^2_2).
Note that the minimum in (<ref>) is attained at and only at <cit.>
σ_X̂ =σ̂:=σ_X√(1-e^-2R) √(P)/σ_X≥(1-√(1-e^-2R))∨ e^-(R+R_c),
σ_X-√(P) √(P)/σ_X∈[e^-(R+R_c),1-√(1-e^-2R)),
√(σ^2_X(1-e^-2R)+(σ_Xe^-(R+R_c)-√(P))^2) √(P)/σ_X∈[ν(R,R_c),e^-(R+R_c)),
σ_X-√(P) √(P)/σ_X<ν(R,R_c)∧ e^-(R+R_c),
where
ν(R,R_c):=e^-2R-e^-2(R+R_c)/2-2e^-(R+R_c).
We have the following observation:
D'(R,R_c,P|W^2_2)>D(R,R_c,P|W^2_2) if and only if
sup_α>0σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,α)))>σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,1))).
“If" part: Assume the minimum in (<ref>) is attained at σ_X̂=σ̃. If σ̃=σ̂, we have
D'(R,R_c,P|W^2_2)
=sup_α>0σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,α)))
>σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,1)))
=D(R,R_c,P|W^2_2).
If σ̃≠σ̂, we have
D'(R,R_c,P|W^2_2)
=sup_α>0σ^2_X+σ̃^2-2σ_X√((1-e^-2R)(σ̃^2-δ^2_+(σ̃,α)))
≥σ^2_X+σ̃^2-2σ_X√((1-e^-2R)(σ̃^2-δ^2_+(σ̃,1)))
(a)>σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,1)))
=D(R,R_c,P|W^2_2),
where (a) is due to the fact that σ̂ is the unique minimizer of (<ref>). Thus, D'(R,R_c,P|W^2_2)>D(R,R_c,P|W^2_2) holds either way.
“Only if" part: This is because
sup_α>0σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,α))) ≥D'(R,R_c,P|W^2_2)
>D(R,R_c,P|W^2_2)
=σ^2_X+σ̂^2-2σ_X√((1-e^-2R)(σ̂^2-δ^2_+(σ̂,1))).
Equipped with the above observation, we shall treat the following two cases separately.
1) P≥σ^2_Xe^-2(R+R_c): In this case, δ_+(σ_X̂,1)=0. Therefore, (<ref>) holds if and only if sup_α>0δ_+(σ̂,α)>0, which, in light of (<ref>), is equivalent to
P<σ^2_X+σ̂^2-2σ_Xσ̂√(1-e^-2(R+R_c)).
For the subcase √(P)/σ_X≥(1-√(1-e^-2R))∨ e^-(R+R_c), we have σ̂=σ_X√(1-e^-2R), and consequently (<ref>) becomes
P<σ^2_X(2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c)))).
For the subcase √(P)/σ_X∈[e^-(R+R_c),1-√(1-e^-2R)), we have σ̂=σ_X-√(P), and consequently (<ref>) becomes
0<(2σ^2_X-2σ_X√(P))(1-√(1-e^-2(R+R_c))),
which holds trivially. Combining the analyses for these two subcases shows that P/σ^2_X must fall into the following interval:
[e^-2(R+R_c),2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c)))).
Note that
e^-2(R+R_c) =.2-e^-2R-1-e^-2(R+R_c)/α-α(1-e^-2R)|_α=1
≤sup_α>02-e^-2R-1-e^-2(R+R_c)/α-α(1-e^-2R)
=2-e^-2R-2√((1-e^-2R)(1-e^-2(R+R_c))),
where the supremum is attained at and only at α=√(1-e^-2(R+R_c)/1-e^-2R).
Thus, the interval in (<ref>) is nonempty unless R_c=0.
2) P<σ^2_Xe^-2(R+R_c): In this case, δ_+(σ_X̂,1)>0.
Clearly, δ_+(σ_X̂,α)=δ(σ_X̂,α) whenever δ_+(σ_X̂,α)>0. Since δ_+(σ_X̂,1)>0, we must have δ_+(σ_X̂,α)=δ(σ_X̂,α) in a neighbourhood of α=1.
Setting .∂/∂αδ(σ_X̂,α)|_α=1=0 gives
σ_X̂=σ^*_X̂:=√(σ^2_X-2σ_Xe^-(R+R_c)√(P)+P).
For the subcase √(P)/σ_X∈[ν(R,R_c),e^-(R+R_c)), we have
σ̂=√(σ^2_X(1-e^-2R+e^-2(R+R_c))-2σ_Xe^-(R+R_c)√(P)+P), which, in view of (<ref>), implies .∂/∂αδ(σ̂,α)|_α=1≠ 0 unless R_c=0. Note that in this subcase, P=0⇒ν(R,R_c)=0⇒ R_c=0.
For the subcase √(P)/σ_X<ν(R,R_c)∧ e^-(R+R_c), we have
σ̂=√(σ^2_X-2σ_X√(P)+P), which, in view of (<ref>), implies .∂/∂αδ(σ̂,α)|_α=1≠ 0 unless P=0. Note that this subcase is void when R_c=0 since ν(R,0)=0. Combining the analyses for these two subcases shows that if R_c>0 and P>0, then .∂/∂αδ(σ̂,α)|_α=1≠ 0, which further implies (<ref>).
It remains to show .α̂|_σ_X̂=σ̂=1 when R_c=0 or P=0. This can be accomplished via a direct verification. The proof of Theorem <ref> is thus complete.
§ PROOF OF (<REF>)
Clearly, (<ref>) holds for all R≥ 0 when P≥σ^2_X. It remains to consider the case P∈(0,σ^2_X). We can write (<ref>) equivalently as
2-z-P/σ^2_X≤ 2√((1-z)(1-ze^-2R_c)),
where z:=e^-2R.
Note that
2-z-P/σ^2_X= 2√((1-z)(1-ze^-2R_c))
has a solution in (0,1) since its left-hand side is less than its right-hand side when z=0 and is greater than its right-hand side when z=1. By taking the square of both sides of (<ref>) and simplifying the expression, we get
ζ_1z^2-ζ_2z+ζ_3=0.
It is easy to see that ζ_2>0 and ζ_3>0.
If ζ_1=0 (i.e., R_c=log 2), then (<ref>) has only one solution, given by
ẑ:=ζ_3/ζ_2,
which is also the unique solution to (<ref>). If ζ_1<0 (i.e., R_c>log 2), then (<ref>) has two solutions with different signs and only the positive one, given by
ẑ:=ζ_2-√(ζ^2_2-4ζ_1ζ_3)/2ζ_1,
is the solution to (<ref>) for z∈(0,1). If ζ_1>0 (i.e, R_c<log 2), then (<ref>) has two positive solutions and only the small one, also given by (<ref>), is the solution to (<ref>). Indeed, the large one is a solution to the following equation obtained by negating the left-hand side of (<ref>):
-2+z+P/σ^2_X= 2√((1-z)(1-ze^-2R_c)).
This can be verified by noticing that (<ref>) has a solution in (1,∞) since its left-hand side is less than its right-hand side when z=1 and is greater than its right-hand side when z is sufficiently large. Therefore, ẑ is the unique solution to (<ref>) for z∈(0,1), and consequently
(<ref>) holds if and only if z∈[0,ẑ], i.e., R≥-1/2logẑ.
§ PROOF OF THEOREM <REF>
For the optimization problem in (<ref>), there is no loss of generality in assuming U=𝔼[X|U] almost surely and 𝔼[X̂^2]≤(1+√(2))^2𝔼[X^2].
Note that
D(R,R_c,P|ϕ)≤ 2𝔼[X^2]
since we can trivially let X, U, and X̂ be mutually independent and p_X̂=p_X. Therefore, it suffices to consider p_UX̂|X with 𝔼[X̂^2]≤ (1+√(2))^2𝔼[X^2] because otherwise
𝔼[(X-X̂)^2] =𝔼[X^2]+𝔼[X̂^2]-2𝔼[XX̂]
≥𝔼[X^2]+𝔼[X̂^2]-2√(𝔼[X^2]𝔼[X̂^2])
>𝔼[X^2]+(1+√(2))^2𝔼[X^2]-2(1+√(2))𝔼[X^2]
=2𝔼[X^2].
For any p_UX̂|X satisfying (<ref>)–(<ref>), let Û:=𝔼[X|U]. Construct p_U'X̂'|X such that
X↔ U'↔X̂' form a Markov chain, p_U'|X=p_Û|X, and p_X̂'|U'=p_X̂|Û. Clearly,
I(X;U')=I(X;Û)(a)≤ I(X;U)≤ R,
I(X̂';U')=I(X̂;Û)(b)≤ I(X̂;U)≤ R+R_c,
where (a) and (b) are due to the data processing inequality <cit.>. Moreover, we have p_X̂'=p_X̂ and consequently
ϕ(p_X,p_X̂')=ϕ(p_X,p_X̂)≤ P.
It can also be verified that
𝔼[(X-X̂')^2] (c)=𝔼[(X-U')^2]+𝔼[(X̂'-U')^2]
=𝔼[(X-Û)^2]+𝔼[(X̂-Û)^2]
(d)=𝔼[(X-X̂)^2],
where (c) and (d) follow respectively from the facts that U'=𝔼[X|U',X̂'] and Û=𝔼[X|Û,X̂] almost surely.
Therefore, there is no loss of optimality in replacing p_UX̂|X with p_U'X̂'|X.
Now we proceed to prove Theorem <ref>. For any positive integer k, in light of Lemma <ref>, there exists p_U^(k)X̂^(k)|X satisfying
I(X;U^(k))≤ R,
I(X̂^(k);U^(k))≤ R+R_c,
ϕ(p_X,p_X̂^(k))≤ P,
U^(k)=𝔼[X|U^(k)],
𝔼[(X̂^(k))^2]≤ (1+√(2))^2𝔼[X^2]
as well as the Markov chain constraint X↔ U^(k)↔X̂^(k)
such that
𝔼[(X-X̂^(k))^2]≤ D(R,R_c,P|ϕ)+1/k.
The sequence {p_XU^(k)X̂^(k)}_k=1^∞ is tight <cit.> since given any ϵ>0,
ℙ{X^2≤3/ϵ𝔼[X^2], (U^(k))^2≤3/ϵ𝔼[X^2], (X̂^(k))^2≤3(1+√(2))^2/ϵ𝔼[X^2]}
≥ 1-ℙ{X^2>3/ϵ𝔼[X^2]}-ℙ{(U^(k))^2>3/ϵ𝔼[X^2]}-ℙ{(X̂^(k))^2>3(1+√(2))^2/ϵ𝔼[X^2]}
≥ 1-ϵ/3-𝔼[(U^(k))^2]ϵ/3𝔼[X^2]-𝔼[(X̂^(k))^2]ϵ/3(1+√(2))^2𝔼[X^2]
≥1-ϵ
for all k. By Prokhorov's theorem <cit.>, there exists a subsequence {p_XU^(k_m)X̂^(k_m)}_m=1^∞ converging weakly to some distriution p_XU^*X̂^*.
Since p_X has bounded support, it follows by <cit.> that
𝔼[(X-𝔼[X|U^*,X̂^*])^2]≥lim sup_m→∞𝔼[(X-𝔼[X|U^(k_m),X̂^(k_m)])^2]=lim sup_m→∞𝔼[(X-U^(k_m))^2].
On the other hand, as the map (x,u)↦(x-u)^2 is continuous and bounded from below, we have
𝔼[(X-U^*)^2]≤lim inf_m→∞𝔼[(X-U^(k_m))^2],
which, together with (<ref>), implies
U^*=𝔼[X|U^*,X̂^*].
Moreover, by the lower semicontinuity of mutual information and p_X̂↦ϕ(p_X, p_X̂) in the topology of weak convergence,
I(X;U^*)≤lim inf_m→∞I(X;U^(k_m)),
I(X̂^*;U^*)≤lim inf_m→∞I(X̂^(k_m);U^(k_m)),
ϕ(p_X,p_X̂^*)≤lim inf_m→∞ϕ(p_X,p_X̂^(k_m)).
Construct p_U'X̂'|X such that
X↔ U'↔X̂' form a Markov chain, p_U'|X=p_U^*|X, and p_X̂'|U'=p_X̂^*|U^*. In view of (<ref>)–(<ref>) and (<ref>)–(<ref>),
I(X;U')=I(X;U^*)≤ R,
I(X̂';U')=I(X̂^*;U^*)≤ R+R_c,
ϕ(p_X,p_X̂')=ϕ(p_X,p_X̂^*)≤ P.
Similarly to (<ref>), we have
𝔼[(X-X̂')^2] =𝔼[(X-U')^2]+𝔼[(X̂'-U')^2]
=𝔼[(X-U^*)^2]+𝔼[(X̂^*-U^*)^2]
=𝔼[(X-X̂^*)^2].
Since the map (x,x̂)↦(x-x̂)^2 is continuous and bounded from below, it follows that
𝔼[(X-X̂^*)^2]≤lim inf_m→∞𝔼[(X-X̂^(k_m))^2].
Combining (<ref>), (<ref>), and (<ref>) shows
𝔼[(X-X̂')^2]≤ D(R,R_c,P|ϕ).
Therefore, the infimum in (<ref>) is attained
at p_U'X̂'|X.
The above argument can be easily leveraged to prove the lower semicontinuity of (R,R_c,P)↦ D(R,R_c,P|ϕ), which implies the desired right-continuity property since the map (R,R_c,P)↦ D(R,R_c,P|ϕ) is monotonically decreasing in each of its variables.
The following subtlety in this proof is noteworthy. It is tempting to claim that the weak convergence limit p_XU^*X̂^* automatically satisfies the Markov chain constraint X↔ U^*↔X̂^*. We are unable to confirm this claim. In fact, this claim is false if (<ref>) does not hold. For example, let U^(k):=1/kU and
X=X̂^(k):=
1 U≥ 0,
-1 U<0,
where U is a standard Gaussian random variable. It is clear that X↔ U^(k)↔X̂^(k) form a Markov chain for any positive integer k. However, the Markov chain constraint is violated by the weak convergence limit p_XU^*X̂^* since X and X̂^* are two identical symmetric Bernoulli random variables whereas U^* is a constant zero. Our key observation is that it suffices to have (<ref>), with which the Markov chain structure can be restored without affecting the end-to-end distortion (see the construction of p_U'X̂'|X). Nevertheless, we only manage to establish (<ref>) when p_X has bounded support. Note that, according to the example above, the minimum mean square error is not necessarily preserved under weak convergence if (<ref>) does not hold. Indeed, while 𝔼[(X-𝔼[X|U^(k)])^2]=0 for any positive integer k, we have 𝔼[(X-𝔼[X|U^*])^2]=1.
§ PROOF OF THEOREM <REF>
We need the following well-known result regarding the Ornstein-Uhlenbeck flow (see, e.g., <cit.>).
For p_X=𝒩(μ_X,σ^2_X) and p_X̂ with μ_X̂=μ_X and 𝔼[X̂^2]<∞,
let X̂(λ):=μ_X+√(1-λ)(X̂-μ_X)+√(λ)(X̅-μ_X), where X̅ is independent of X̂ and has the same distribution as X.
The map λ↦ϕ_KL(p_X̂(λ)p_X) is continuous[ϕ_KL(p_X̂(λ)p_X) varies continuously from ϕ_KL(p_X̂p_X) to 0 as λ increases from 0 to 1. Note that ϕ_KL(p_X̂(λ)p_X)<∞ for λ∈(0,1] and
lim_λ→ 0ϕ_KL(p_X̂(λ)p_X)=ϕ_KL(p_X̂p_X) even if ϕ_KL(p_X̂p_X)=∞ (in this sense, the map λ↦ϕ_KL(p_X̂(λ)p_X) is continuous at λ=0).], decreasing, and convex for λ∈[0,1].
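For intuition, in the special case where p_X̂ is itself Gaussian with the same mean, the law of X̂(λ) and the divergence ϕ_KL(p_X̂(λ)‖p_X) are available in closed form, so the monotonicity and convexity claims can be inspected directly. A minimal sketch with illustrative values (the Gaussian choice of p_X̂ is an assumption made only for this illustration):

```python
import numpy as np

# Gaussian special case of the Ornstein-Uhlenbeck interpolation: if p_X = N(mu, sX^2) and
# p_Xhat = N(mu, s^2), then Xhat(lam) ~ N(mu, (1-lam)*s^2 + lam*sX^2), and the KL divergence
# to p_X has a closed form, so monotonicity and convexity in lam can be checked numerically.
mu, sX, s = 0.0, 1.0, 0.3   # illustrative values

def kl_to_pX(lam):
    v = (1 - lam) * s**2 + lam * sX**2          # variance of Xhat(lam)
    return 0.5 * (v / sX**2 - 1 - np.log(v / sX**2))

lams = np.linspace(0.0, 1.0, 1001)
kl = kl_to_pX(lams)
print(np.all(np.diff(kl) <= 1e-12))             # True: decreasing in lam
print(np.all(np.diff(kl, 2) >= -1e-12))         # True: nonnegative second differences (convex)
```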
Now we proceed to prove Theorem <ref>. Given ϵ>0, there exists p_UX̂|X satisfying (<ref>)–(<ref>) with ϕ=ϕ_KL and 𝔼[(X-X̂)^2]≤ D(R,R_c,P|ϕ_KL)+ϵ. Without loss of generality, we assume μ_X̂=μ_X and σ_X̂≤σ_X <cit.>. For λ∈[0,1], let X̂(λ):=μ_X+√(1-λ)(X̂-μ_X)+√(λ)(X̅-μ_X), where X̅ is assumed to be independent of (X,U,X̂) and have the same distribution as X. Note that X↔ U↔X̂↔X̂(λ) form a Markov chain.
By the data processing inequality <cit.> and (<ref>),
I(X̂(λ);U)≤ I(X̂;U)≤ R+R_c.
First consider the case P∈(0,∞). In light of Lemma <ref>, given P̃∈(0,P], there exists λ̃∈[0,1] such that ϕ_KL(p_X̂(λ̃)p_X)=ϕ_KL(p_X̂p_X)∧P̃; moreover, we have[When ϕ_KL(p_X̂p_X)=0, we can set λ̃=0 and consequently λ̃≤P-P̃/P̃ still holds.]
λ̃≤ 1-ϕ_KL(p_X̂p_X)∧P̃/ϕ_KL(p_X̂p_X)≤P-P̃/P̃.
It can be verified that
D(R,R_c,P̃|ϕ_KL)-D(R,R_c,P|ϕ_KL)
≤ D(R,R_c,P̃|ϕ_KL)-𝔼[(X-X̂)^2]+ϵ
≤𝔼[(X-X̂(λ̃))^2]-𝔼[(X-X̂)^2]+ϵ
=2𝔼[(X-X̂)(X̂-X̂(λ̃))]+𝔼[(X̂-X̂(λ̃))^2]+ϵ
≤2√(𝔼[(X-X̂)^2]𝔼[(X̂-X̂(λ̃))^2])+𝔼[(X̂-X̂(λ̃))^2]+ϵ
=2√(𝔼[(X-X̂)^2]((1-√(1-λ̃))^2σ^2_X̂+λ̃σ^2_X))+(1-√(1-λ̃))^2σ^2_X̂+λ̃σ^2_X+ϵ
≤(4√(2-2√(1-λ̃))+2-2√(1-λ̃))σ^2_X+ϵ
≤(4√(2-2√((2P̃-P)_+/P̃))+2-2√((2P̃-P)_+/P̃))σ^2_X+ϵ.
This proves
D(R,R_c,P̃|ϕ_KL)-D(R,R_c,P|ϕ_KL) ≤(4√(2-2√((2P̃-P)_+/P̃))+2-2√((2P̃-P)_+/P̃))σ^2_X.
Next consider the case P=∞. In light of Lemma <ref>, given P̃∈[0,∞), there exists λ̃∈(0,1] such that ϕ_KL(p_X̂(λ̃)p_X)≤P̃; moreover, we can require λ̃→ 0 as P̃→∞. It can be verified that
lim_P̃→∞D(R,R_c,P̃|ϕ_KL) ≤lim_λ̃→ 0𝔼[(X-X̂(λ̃))^2]
=𝔼[(X-X̂)^2]
≤ D(R,R_c,∞|ϕ_KL)+ϵ.
This proves
lim_P̃→∞D(R,R_c,P̃|ϕ_KL)≤ D(R,R_c,∞|ϕ_KL).
Finally, consider the case P=0. Given P̃∈[0,∞] and ϵ>0, there exists p_UX̂|X
satisfying (<ref>)–(<ref>), ϕ_KL(p_X̂p_X)≤P̃, and 𝔼[(X-X̂)^2]≤ D(R,R_c,P̃|ϕ_KL)+ϵ. Without loss of generality, we assume μ_X̂=μ_X and σ_X̂≤σ_X <cit.>. Let X̅' be jointly distributed with (X,U,X̂)
such that X↔ U↔X̂↔X̅' form a Markov chain, X̅'∼ p_X, and 𝔼[(X̅'-X̂)^2]=W^2_2(p_X,p_X̂). By the data processing inequality <cit.> and (<ref>),
I(X̅';U)≤ I(X̂;U)≤ R+R_c.
It can be verified that
D(R,R_c,0|ϕ_KL)-D(R,R_c,P̃|ϕ_KL)
≤ D(R,R_c,0|ϕ_KL)-𝔼[(X-X̂)^2]+ϵ
≤𝔼[(X-X̅')^2]-𝔼[(X-X̂)^2]+ϵ
=2𝔼[(X-X̂)(X̂-X̅')]+𝔼[(X̂-X̅')^2]+ϵ
≤ 2√(𝔼[(X-X̂)^2]𝔼[(X̂-X̅')^2])+𝔼[(X̂-X̅')^2]+ϵ
=2√(𝔼[(X-X̂)^2]W^2_2(p_X,p_X̂))+W^2_2(p_X,p_X̂)+ϵ
(a)≤2√(2σ^2_X𝔼[(X-X̂)^2]ϕ_KL(p_X̂p_X))+2σ^2_Xϕ_KL(p_X̂p_X)+ϵ
≤2√(2σ^2_X𝔼[(X-X̂)^2]P̃)+2σ^2_XP̃+ϵ
≤(4√(2P̃)+2P̃)σ^2_X+ϵ,
where (a) is due to Talagrand's transportation inequality <cit.>.
This proves
D(R,R_c,0|ϕ_KL)-D(R,R_c,P̃|ϕ_KL)≤ (4√(2P̃)+2P̃)σ^2_X.
In view of (<ref>), (<ref>), and (<ref>), the desired continuity property follows by the fact that the map P↦ D(R,R_c,P|W^2_2) is monotonically decreasing.
§ PROOF OF THEOREM <REF>
We need the following result <cit.> regarding the distortion-perception tradeoff in the quadratic
Wasserstein space.
For λ∈[0,1] and p_XX̂ with 𝔼[X^2]<∞ and 𝔼[X̂^2]<∞, let
X̂(λ):=(1-λ)X̃+λX̅, where X̃:=𝔼[X|X̂], and X̅ is
jointly distributed with (X,X̂) such that X↔X̂↔X̅ form a Markov chain, X̅∼ p_X, and 𝔼[(X̅-X̃)^2]=W^2_2(p_X,p_X̃). We have
𝔼[(X-X̂(λ))^2]=𝔼[(X-X̃)^2]+(W_2(p_X,p_X̃)-W_2(p_X,p_X̂(λ)))^2
and
W^2_2(p_X,p_X̂(λ))=(1-λ)^2W^2_2(p_X,p_X̃).
Moreover, for any X̂' jointly distributed with (X,X̂) such that X↔X̂↔X̂' form a Markov chain and 𝔼[(X̂')^2]<∞,
𝔼[(X-X̂')^2]≥𝔼[(X-X̃)^2]+(W_2(p_X,p_X̃)-W_2(p_X,p_X̂'))^2_+.
Now we proceed to prove Theorem <ref>. Given ϵ>0, there exists p_UX̂|X satisfying (<ref>)–(<ref>) with ϕ=W^2_2 and 𝔼[(X-X̂)^2]≤ D(R,R_c,P|W^2_2)+ϵ. Let X̅ be jointly distributed with (X,U,X̂) such that (X,U)↔X̂↔X̅ form a Markov chain, X̅∼ p_X, and 𝔼[(X̅-X̃)^2]=W^2_2(p_X,p_X̃), where
X̃:=𝔼[X|X̂]. Moreover, let
X̂(λ):=(1-λ)X̃+λX̅ for λ∈[0,1]. Note that X↔ U↔X̂↔X̂(λ) form a Markov chain. By the data processing inequality <cit.> and (<ref>),
I(X̂(λ);U)≤ I(X̂;U)≤ R+R_c.
Given P̃∈[0,P], in light of (<ref>) in Lemma <ref>, there exists λ̃∈[0,1] such that W^2_2(p_X,p_X̂(λ̃))=W^2_2(p_X,p_X̃)∧P̃. We have
D(R,R_c,P̃|W^2_2)-D(R,R_c,P|W^2_2)
≤ D(R,R_c,P̃|W^2_2)-𝔼[(X-X̂)^2]+ϵ
≤𝔼[(X-X̂(λ̃))^2]-𝔼[(X-X̂)^2]+ϵ
(a)≤(W_2(p_X,p_X̃)-W_2(p_X,p_X̂(λ̃)))^2-(W_2(p_X,p_X̃)-W_2(p_X,p_X̂))^2_++ϵ
≤(W_2(p_X,p_X̃)-W_2(p_X,p_X̂(λ̃)))^2-(W_2(p_X,p_X̃)-(W_2(p_X,p_X̃)∧ P))^2+ϵ
=(2W_2(p_X,p_X̃)-(W_2(p_X,p_X̃)∧ P)-W_2(p_X,p_X̂(λ̃)))((W_2(p_X,p_X̃)∧ P)-W_2(p_X,p_X̂(λ̃)))+ϵ
≤ 2W_2(p_X,p_X̃)((W_2(p_X,p_X̃)∧ P)-W_2(p_X,p_X̂(λ̃)))+ϵ
≤ 2σ_X((σ_X∧√(P))-(σ_X∧√(P̃)))+ϵ,
where (a) is due to (<ref>) and (<ref>) in Lemma <ref>. This proves
D(R,R_c,P̃|W^2_2)-D(R,R_c,P|W^2_2)≤ 2σ_X((σ_X∧√(P))-(σ_X∧√(P̃))),
which, together with the fact that the map P↦ D(R,R_c,P|W^2_2) is monotonically decreasing, implies the desired continuity property.
§ PROOF OF THEOREM <REF>
Note that for any p_X̂|X such that I(X;X̂)≤ R and H(X̂)≤ R+R_c, the induced p_UX̂|X with U:=X̂ satisfies (<ref>)–(<ref>). This implies D_e(R,R_c)≥ D(R,R_c,∞). On the other hand, for any p_UX̂|X satisfying
(<ref>)–(<ref>), it follows by the data processing inequality <cit.> that I(X;X̂)≤ R.
Therefore, we must have D(R,R_c,∞)≥ D_e(R,∞). This completes the proof of (<ref>).
For the purpose of establishing the equivalence relationship (<ref>), it suffices to show that D_e(R,R_c)>D_e(R,∞) implies D(R,R_c,∞)>D_e(R,∞) since the converse is implied by (<ref>).
To this end, we shall prove the contrapositive statement, namely, D(R,R_c,∞)≤ D_e(R,∞) implies D_e(R,R_c)≤ D_e(R,∞).
Assume that the infimum in (<ref>) is attained by some p_U^*X̂^*|X. Let Û^*:=𝔼[X|U^*].
We have
D(R,R_c,∞) =𝔼[(X-X̂^*)^2]
(a)=𝔼[(X-Û^*)^2]+𝔼[(X̂^*-Û^*)^2],
where (a) holds because Û^*=𝔼[X|U^*,X̂^*] almost surely. Since
I(X;Û^*)≤ I(X;U^*)≤ R,
it follows that 𝔼[(X-Û^*)^2]≥ D_e(R,∞). Therefore, D(R,R_c,∞)≤ D_e(R,∞) implies 𝔼[(X-Û^*)^2]=D_e(R,∞) and 𝔼[(Û^*-X̂^*)^2]=0 (i.e., Û^*=X̂^* almost surely).
Note that
I(X;Û^*)≤ I(X;U^*) ≤ R
and
H(Û^*)=I(X̂^*; Û^*)≤ I(X̂^*; U^*)≤ R+R_c.
As a consequence, we have 𝔼[(X-Û^*)^2]≥ D_e(R,R_c). This proves D_e(R,R_c)≤ D_e(R,∞).
§ PROOF OF THEOREM <REF>
For p_X=𝒩(μ_X,σ^2_X),
D_e(R,R_c)>D(R,R_c,∞)
when R∈(0,∞) and R_c∈[0,∞), and
D_e(R,R_c)<D(R,R_c,∞)
when R_c∈[0,∞) and R∈(0,χ(R_c)), where χ(R_c) is a positive threshold that depends on R_c.
Let p_X̂^*|X be some conditional distribution that attains the minimum in (<ref>). Clearly, we must have μ_X̂^*=μ_X. Note that
R ≥ I(X;X̂^*)
=h(X)-h(X|X̂^*)
=h(X)-h(X-X̂^*|X̂^*)
(a)≥ h(X)-h(X-X̂^*)
(b)≥1/2logσ^2_X/D_e(R,R_c).
The inequalities (a) and (b) become equalities if and only if X-X̂^* is independent of X^* and is distributed as 𝒩(0,D_e(R,R_c)), which, together with the fact p_X=𝒩(μ_X,σ^2_X),
implies p_X̂^*=𝒩(μ_X,σ^2_X-D_e(R,R_c)).
This is impossible since H(X̂^*)≤ R+R_c<∞ whereas the entropy of a Gaussian distribution with positive variance[Since R>0, it follows that D_e(R,R_c)<σ^2_X.] is infinite. Therefore, at least one of the inequalities (a) and (b) is strict, yielding
R>1/2logσ^2_X/D_e(R,R_c).
Now one can readily prove (<ref>) by invoking (<ref>).
According to <cit.>,
D_e(R,0)=σ^2_X(1-2R)+o(R),
where o(R) stands for a term that approaches zero more rapidly than R as R→ 0.
On the other hand, it can be deduced from (<ref>) that
D(R,R_c,∞)=σ^2_X(1-2R+2Re^-2R_c)+o(R).
Combining (<ref>) and (<ref>) then invoking the fact D_e(R,R_c)≤ D_e(R,0) proves (<ref>).
Clearly, (<ref>) is a direct consequence of (<ref>) and (<ref>). Since
D(R,R_c,∞)=D_e(R,∞)
for p_X=𝒩(μ_X,σ^2_X), it is tempting to deduce (<ref>) from (<ref>) and (<ref>). Unfortunately, (<ref>) relies on the assumption that the infimum in (<ref>) can be attained, which has not been verified for Gaussian p_X. Nevertheless, we show below that the key idea underlying the proof of (<ref>) and (<ref>), namely, violating (<ref>) necessarily forces Û^* to be Gaussian and coincide with X̂^*, can be salvaged without resorting to the aforementioned assumption
by treating (Û^*,X̂^*) as a certain limit under weak convergence.
Assume that (<ref>) does not hold, i.e.,
D(R,R_c,∞)=σ^2_Xe^-2R.
For any positive integer k, there exists p_U^(k)X̂^(k)|X satisfying
I(X;U^(k))≤ R,
I(X̂^(k);U^(k))≤ R+R_c
as well as the Markov chain constraint X↔ U^(k)↔X̂^(k)
such that
𝔼[(X-X̂^(k))^2]≤σ^2_Xe^-2R+1/k.
Let Û^(k):=𝔼[X|U^(k)] and V^(k):=X-Û^(k). Since X↔ U^(k)↔X̂^(k) form a Markov chain, it follows that
𝔼[(X-X̂^(k))^2]=σ^2_V^(k)+𝔼[(Û^(k)-X̂^(k))^2],
which, together with (<ref>), implies
σ^2_V^(k)≤σ^2_Xe^-2R+1/k.
Moreover, we have
h(V^(k)|Û^(k)) ≥ h(V^(k)|U^(k))
=h(X|U^(k))
=h(X)-I(X;U^(k))
≥ h(X)-R
=1/2log(2π eσ^2_Xe^-2R),
and consequently
h(V^(k))≥1/2log(2π eσ^2_Xe^-2R).
Combining (<ref>) and (<ref>) gives
ϕ_KL(p_V^(k)𝒩(0,σ^2_Xe^-2R))
=-h(V^(k))+1/2log(2πσ^2_Xe^-2R)+σ^2_V^(k)/2σ^2_Xe^-2R
≤1/2kσ^2_Xe^2R.
Therefore, p_V^(k) converges to 𝒩(0,σ^2_Xe^-2R) in Kullback-Leibler divergence as k→∞. It can be shown that the sequence {p_XÛ^(k)V^(k)X̂^(k)}_k=1^∞ is tight (cf. the proof of Theorem <ref>). By Prokhorov's theorem <cit.>, there exists a subsequence {p_XÛ^(k_m)V^(k_m)X̂^(k_m)}_m=1^∞ converging weakly to some distribution p_XÛ^*V^*X̂^*. Clearly, we have p_V^*=𝒩(0,σ^2_Xe^-2R). Note that (<ref>) implies
h(V^(k))≤1/2log(2π e(σ^2_Xe^-2R+1/k)),
which, together with (<ref>), yields
I(Û^(k);V^(k))≤1/2log(1+1/kσ^2_Xe^2R).
By the lower semicontinuity of mutual information in the topology of weak convergence,
I(Û^*;V^*)≤lim inf_m→∞I(Û^(k_m);V^(k_m))=0.
Thus Û^* and V^* must be independent. Since p_Û^*+V^*=p_X=𝒩(μ_X,σ^2_X) and p_V^*=𝒩(0,σ^2_Xe^-2R), it follows that p_Û^*=𝒩(μ_X,σ^2_X(1-e^-2R)).
It remains to show that Û^*=X̂^* almost surely. In view of (<ref>),
the Shannon lower bound gives
σ^2_V^(k)≥σ^2_Xe^-2R,
which, together with (<ref>) and (<ref>), further implies
𝔼[(Û^(k)-X̂^(k))^2]≤1/k.
As the map (û,x̂)↦ (û-x̂)^2 is continuous and bounded from below,
𝔼[(Û^*-X̂^*)^2]≤lim inf_m→∞𝔼[(Û^(k_m)-X̂^(k_m))^2]=0.
This leads to a contradiction with (<ref>) since
lim inf_m→∞I(X̂^(k_m);U^(k_m)) (a)≥lim inf_m→∞I(X̂^(k_m);Û^(k_m))
(b)≥ I(X̂^*;Û^*)
=∞,
where (a) is due to the data processing inequality <cit.>, and (b) is due to the lower semicontinuity of mutual information in the topology of weak convergence.
The above proof can be simplified by circumventing the steps regarding the convergence of p_V^(k) to 𝒩(0,σ^2_Xe^-2R) in Kullback-Leibler divergence. Indeed, by
Cramér's decomposition theorem, both Û^* and V^* must be Gaussian if they are independent and their sum is Gaussian. Moreover, one can invoke the weak convergence argument to show that σ^2_Û^*≤σ^2_X(1-e^-2R) and
σ^2_V^*≤σ^2_Xe^-2R. Since σ^2_Û^*+σ^2_V^*=σ^2_X, we must have p_Û^*=𝒩(μ_X,σ^2(1-e^-2R)) and p_V^*=𝒩(0,σ^2_Xe^-2R). However, the original proof provides more information as convergence in Kullback-Leibler divergence is stronger than weak convergence.
§ PROOF OF COROLLARY <REF>
In light of Theorem <ref>,
D(R,R_c,∞|ϕ_KL)>D(R,R_c,∞|ϕ_KL)
when R∈(0,∞) and R_c∈[0,∞). This implies that
(<ref>) holds for sufficiently large P
since P↦ D(R,R_c,P|ϕ_KL) is monotonically decreasing while P↦D(R,R_c,P|ϕ_KL) is continuous at P=∞.
In light of Theorem <ref>,
D(R,R_c,∞|ϕ_KL)<D(R,R_c,∞|ϕ_KL)
when R_c∈[0,∞) and R∈(0,χ(R_c)).
This implies that (<ref>) holds for sufficiently large P since
P↦ D(R,R_c,P|ϕ_KL) is continuous at P=∞ by Theorem <ref> and P↦D(R,R_c,P|ϕ_KL) is monotonically decreasing.
§ PROOF OF COROLLARY <REF>
In light of Theorem <ref>,
D(R,R_c,∞|W^2_2)>D'(R,R_c,∞|W^2_2)
when R∈(0,∞) and R_c∈[0,∞). This implies
that (<ref>) holds
for P above a positive threshold γ'(R,R_c) strictly less than P'(R,R_c) since P↦ D(R,R_c,P|W^2_2) is monotonically decreasing while
P↦D'(R,R_c,P|W^2_2) is continuous and remains constant over the interval [P'(R,R_c),∞].
In light of Theorem <ref>,
D(R,R_c,∞|W^2_2)<D(R,R_c,∞|W^2_2)
when R_c∈[0,∞) and R∈(0,χ(R_c)).
This implies that (<ref>) holds for P above a positive threshold γ(R,R_c) strictly less than P(R,R_c) since
P↦ D(R,R_c,P|W^2_2) is continuous by Theorem <ref> and remains constant over the interval [P(R,R_c),∞]
while P↦D(R,R_c,P|W^2_2) is monotonically decreasing.
1
Matsumoto18
R. Matsumoto, “Introducing the perception-distortion tradeoff into the
rate-distortion theory of general information sources," IEICE Comm.
Express, vol. 7, no. 11, pp. 427–431, 2018.
Matsumoto19
R. Matsumoto, “Rate-distortion-perception tradeoff of variable-length source coding for general information sources," IEICE Comm. Express, vol. 8,
no. 2, pp. 38–42, 2019.
BM19 Y. Blau and T. Michaeli, “Rethinking lossy compression:
The rate-distortion-perception tradeoff," Proc. Int. Conf. Mach. Learn., vol. 97, pp. 675–685, Jun. 2019.
TW21 L. Theis and A. B. Wagner, “A coding theorem for
the rate-distortion-perception function," Proc. ICLR, pp. 1–5, 2021.
YWYML21 Z. Yan, F. Wen, R. Ying, C. Ma, and P. Liu, “On
perceptual lossy compression: The cost of perceptual reconstruction
and an optimal training framework," Proc. Int. Conf. Mach. Learn., vol. 139, pp. 11682–11692, 2021.
ZQCK21 G. Zhang, J. Qian, J. Chen, and A. Khisti, "Universal
rate-distortion-perception representations for lossy compression,"
Proc. Adv. Neural Inf. Process. Syst., vol. 34, pp. 11517–11529, 2021.
QZCK22
J. Qian, G. Zhang, J. Chen, and A. Khisti, “A rate-distortion-perception
theory for binary sources," Proc. International Zurich Seminar on Information and Communication, pp. 34–38, 2022.
LZCK22 H. Liu, G. Zhang, J. Chen, A. Khisti, “Lossy compression
with distribution shift as entropy constrained optimal transport,"
in International Conference on Learning Representations,
2022.
LZCK22_2
H. Liu, G. Zhang, J. Chen, and A. Khisti,
“Cross-domain lossy compression as entropy constrained optimal transport," IEEE J. Sel. Areas Inf. Theory, vol. 3, pp. 513–527, Sep. 2022.
CYWSGT22
J. Chen, L. Yu, J. Wang, W. Shi, Y. Ge, and W. Tong, “On the rate-distortion-perception function," IEEE J. Sel. Areas Inf. Theory, vol. 3, no. 4, pp. 664–673, Dec. 2022.
SPCYK23
S. Salehkalaibar, T. B. Phan, J. Chen, W. Yu, and A. Khisti,
“On the choice of perception loss function for learned video compression," Proc. Adv. Neural Inf. Process. Syst., vol. 36, 2023.
QSCKYSGT24
J. Qian, S. Salehkalaibar, J. Chen, A. Khisti, W. Yu, W. Shi, Y. Ge, and W. Tong, “Rate-distortion-perception tradeoff for Gaussian vector sources," IEEE J. Sel. Areas Inf. Theory, under revision.
SCKY24
S. Salehkalaibar, J. Chen, A. Khisti, and W. Yu, “Rate-distortion-perception tradeoff based on the
conditional-distribution perception measure," 2024, arXiv:2401.12207. [Online] Available: https://arxiv.org/abs/2401.12207
BM18
Y. Blau and T. Michaeli, “The perception-distortion tradeoff," in Proc.
IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), 2018, pp. 6288–6237.
FMM21
D. Freirich, T. Michaeli, and R. Meir, “A theory of the distortion-perception tradeoff in Wasserstein space," Proc. Adv. Neural Inf. Process. Syst., vol. 34, pp. 25661–25672, 2021.
FWM24
D. Freirich, N. Weinberger, and R. Meir,
“Characterization of the distortion-perception tradeoff for finite channels with arbitrary metrics,"
2024, arXiv:2402.02265. [Online] Available: https://arxiv.org/abs/2402.02265
TA21 L. Theis and E. Agustsson, “On the advantages of
stochastic encoders," Proc. ICLR, pp. 1–8, 2021.
Wagner22
A. B. Wagner, “The rate-distortion-perception tradeoff:
The role of common randomness," 2022, arXiv:2202.04147. [Online] Available: https://arxiv.org/abs/2202.04147
HWG24
Y. Hamdi, A. B. Wagner, and D. Gündüz, “
The rate-distortion-perception trade-off:
The role of private randomness," 2024, arXiv:2404.01111. [Online] Available: https://arxiv.org/pdf/2404.01111
XLCZ24
L. Xie, L. Li, J. Chen, and Z. Zhang, “Output-constrained lossy source coding with application to rate-distortion-perception theory," 2024, arXiv:2403.14849. [Online] Available: https://arxiv.org/abs/2403.14849
LKK10 M. Li, J. Klejsa, and W. B. Kleijn, “Distribution
preserving quantization with dithering and transformation," IEEE Signal Process. Lett., vol. 17, no. 12, pp. 1014–1017, Dec.
2010.
LKK11 M. Li, J. Klejsa, and W. B. Kleijn. (2011). “On
distribution preserving quantization. [Online]. Available: http://arxiv.org/abs/1108.3728
KZLK13 J. Klejsa, G. Zhang, M. Li, and W. B. Kleijn, “Multiple
description distribution preserving quantization," IEEE Trans.
Signal Process., vol. 61, no. 24, pp. 6410–6422, Dec. 2013.
SLY15J1 N. Saldi, T. Linder, and S. Yüksel, “Randomized
quantization and source coding with constrained output distribution,"
IEEE Trans. Inf. Theory, vol. 61, no. 1, pp. 91–106,
Jan. 2015.
SLY15J2 N. Saldi, T. Linder, and S. Yüksel, “Output constrained
lossy source coding with limited common randomness," IEEE
Trans. Inf. Theory, vol. 61, no. 9, pp. 4984–4998, Sep. 2015.
Talagrand96
M. Talagrand, “Transportation cost for Gaussian and other product measures,"
Geometric Funct. Anal., vol. 6, no. 3, pp. 587–600, May 1996.
GN14
Y. Geng and C. Nair, “The capacity region of the two-receiver Gaussian vector broadcast channel with private and common messages," IEEE Trans. Inf. Theory, vol. 60, no. 4, pp. 2087–2104, Apr. 2014.
YWL22
Z. Yan, F. Wen, and P. Liu, “Optimally controllable perceptual lossy compression," Proc. Int. Conf. Mach. Learn., vol. 162, pp. 24911–24928, 2022.
QCYX24
X. Qu, J. Chen, L. Yu, and X. Xu, “Rate-distortion-perception theory for the quadratic Wasserstein space," IEEE Trans. Inf. Theory, to be submitted.
PW24
Y. Polyanskiy and Y. Wu, Information Theory: From Coding to Learning. Cambridge, U.K.: Cambridge Univ. Press, 2024.
Villani08
C. Villani, Optimal Transport: Old and New. Berlin, Germany: Springer, 2008.
PZ20
V. M. Panaretos and Y. Zemel, An invitation to Statistics in Wasserstein Space. Berlin, Germany: Springer, 2020.
GL02
A. György and T. Linder, “On the structure of optimal entropy-constrained scalar quantizers," IEEE Trans. Inf. Theory, vol. 48, no. 2, pp. 416–427, Feb. 2002.
BWO23
Y. Bai, X. Wu, and A. Özgür, “Information constrained optimal transport: From Talagrand, to Marton, to Cover," IEEE Trans. Inf. Theory, vol. 69, no. 4, pp. 2059–2073, Apr. 2023.
CT91 T. M. Cover and J. A. Thomas, Elements of Information
Theory. New York, NY, USA: Wiley, 1991.
WV12
Y. Wu and S. Verdú, “Functional properties of minimum mean-square error and mutual information," IEEE Trans. Inf. Theory, vol. 58, no. 3, pp. 1289–1301, Mar. 2012.
WJ18
A. Wibisono and V. Jog, “Convexity of mutual information along the Ornstein-Uhlenbeck flow," 2018 International Symposium on Information Theory and Its Applications (ISITA), Singapore, 2018, pp. 55–59.
MN06
D. Marco and D. L. Neuhoff, “Low-resolution scalar quantization for Gaussian sources and squared error," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1689–1697, Apr. 2006.
|
http://arxiv.org/abs/2409.02430v2 | 20240904041757 | Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receviers | ["Kunze Wu", "Weiheng Jiang", "Dusit Niyato", "Yinghuan Li", "Chuang Luo"] | eess.SP | ["eess.SP", "cs.CR", "cs.LG"] |
Transfer-based Adversarial Poisoning Attacks
for Online (MIMO-)Deep Receivers
Kunze Wu, Weiheng Jiang, Dusit Niyato, Yinghuan Li, and Chuang Luo
September 9, 2024
§ ABSTRACT
Recently, the design of wireless receivers using deep neural networks (DNNs),
known as deep receivers, has attracted extensive attention for ensuring reliable communication in complex channel environments.
To adapt quickly to dynamic channels, online learning has been adopted to update the weights of deep receivers with over-the-air data (e.g., pilots).
However, the fragility of neural models and the openness of wireless channels expose these systems to malicious attacks.
To this end, understanding these attack methods is essential for robust receiver design.
In this paper, we propose a transfer-based adversarial poisoning attack method for online receivers.
Without knowledge of the attack target, adversarial perturbations are injected into the pilots, poisoning the online deep receiver and impairing its ability to adapt to dynamic channels and nonlinear effects.
In particular, our attack method targets Deep Soft Interference Cancellation (DeepSIC)<cit.> using online meta-learning.
As a classical model-driven deep receiver, DeepSIC incorporates wireless domain knowledge into its architecture.
This integration allows it to adapt efficiently to time-varying channels with only a small number of pilots, achieving optimal performance in a multi-input and multi-output (MIMO) scenario.
The deep receiver in this scenario has a number of applications in the field of wireless communication, which motivates our study of the attack methods targeting it.
Specifically, we demonstrate the effectiveness of our attack in simulations on synthetic linear, synthetic nonlinear, static, and COST 2100 channels.
Simulation results indicate that the proposed poisoning attack significantly reduces the performance of online receivers in rapidly changing scenarios.
Wireless security, poisoning attacks, adversarial attacks, model-based deep
learning, deep receivers, online learning, meta-learning.
§ INTRODUCTION
In recent years, the application of deep learning (DL) in designing wireless communication systems has garnered significant interest.
Researchers have concentrated on employing DL in wireless receivers to enhance communication performance in complex channels
and to bolster adaptability in dynamic environments
<cit.>.
However, DL-based wireless applications face vulnerabilities to evasion and data poisoning attacks
owing to the inherent openness of wireless channels and the fragility of neural models <cit.>.
Investigating attack methodologies on deep receivers serves to elucidate their response under such threats,
thereby facilitating the development of secure wireless DL systems, which forms the primary focus of this paper.
§.§ DL-based Wireless Receivers and Related Applications
Till now, numerous studies have explored DL-based designs for wireless communication systems.
Most of them utilize an independent DNN to map the input-output relationships of functional modules in communication links.
Jointly optimizes modules at both the transmitter and receiver by multiple DNNs.
Typically, the applications include DL-based adaptive modulation<cit.>, channel estimation<cit.>,
channel coding and decoding<cit.>, and modulation recognition<cit.>.
In addition to replacing functional modules in the physical layer,
various constraints can be integrated into the training process to optimize additional system metrics,
such as the adjacent channel leakage ratio (ACLR) and peak-to-average power ratio (PAPR) <cit.>,<cit.>.
For receiver design, much work has been devoted to enhancing adaptability
to dynamic channels,
<cit.>.
However, DL-based wireless designs mentioned above are data-driven methods
that heavily depend on a substantial amount of training data to improve their generalization capacity.
Given that deep receivers typically have access to only limited pilots for adaptation, this characteristic poses a significant challenge.
In addition, data-driven designs may suffer from performance degradation
when faced with data distribution drift caused by dynamic channels<cit.>.
To handle these issues, online learning based approaches have been proposed, involving the dataset, the training algorithm, and the deep receiver architecture.
In particular, data augmentation<cit.> and self-supervision methods
<cit.>,<cit.> were proposed to expand the training data for online adaptation.
In <cit.> and <cit.>, meta learning was employed to improve the generalization capability.
Furthermore, model-based deep learning provides a solution for receiver architecture design
that satisfies both adaptability and data efficiency<cit.>.
Specifically, the deep receivers were explicitly modelled by incorporating wireless domain knowledge, thereby reducing the dependence on data,
such as DNN-aided inference<cit.>,<cit.> and deep unfolding
<cit.>.
In these studies, <cit.> proposed a classic model-based deep receiver, i.e., the DeepSIC, in MIMO scenarios,
derived from the iterative soft interference cancellation (SIC)<cit.> MIMO detection algorithm.
It employs a DNN in place of each round of interference cancellation and soft detection,
requiring only a few iterations to achieve extremely low data dependence and optimal performance.
<cit.> utilized meta-learning to improve the training performance of online DeepSIC,
and the evaluation results indicated that its performance was improved compared with traditional data-driven receiver,
and it exhibited commendable adaptability to dynamic channels.
§.§ Security of DL in Wireless Communications
As mentioned earlier, while DL-based transceiver designs can enhance performance, they remain vulnerable to attacks by malicious users.
In particular, attacks on DL-based transceivers are divided into two main categories, i.e., the evasion attacks and data poisoning attacks.
Evasion attacks, also known as adversarial attacks, manipulate test data to mislead the model<cit.>,<cit.>.
On the other hand, data poisoning attacks corrupt the training data, affecting the model's performance during testing<cit.>,
<cit.>.
According to extensive literature review, numerous studies on DL-based wireless communication primarily concentrate on evasion attacks.
For instance, <cit.> proposed an adversarial attack method against adaptive modulation.
<cit.> proposed generative adversarial network (GAN)-based method to generate adversarial perturbations for channel received data,
which can unnoticeably mislead wireless end-to-end autoencoders,
modulation pattern recognition, and the DL-based symbol detection in orthogonal frequency division multiplexing (OFDM) systems.
<cit.> reported adversarial perturbations can interfere with gradient-based iterative optimization algorithms in the physical layer.
<cit.> proposed semantic attacks against semantic communication.
Furthermore, adversarial perturbations can also play a role in interpretable (e.g., deep unfolding-based) architectures.
To illustrate, <cit.> employed transfer-based methods to attack deep sparse coding networks
and demonstrated that these attacks exert deleterious effects on the various components of deep unfolded-based sparse coding.
Regarding data poisoning attacks, current research primarily focuses on cognitive radio spectrum-aware poisoning <cit.>,<cit.>
and disrupting distributed wireless federated learning<cit.>.
§.§ Contribution of This Paper
Unlike previous studies, this paper addresses security threats to online deep receivers.
Furthermore, we propose a transfer-based adversarial poisoning attack method,
which can significantly corrupt various online deep receivers even without prior knowledge of the target system.
Specifically,
we focus on online receivers based on model-based deep learning, such as DeepSIC<cit.> and Meta-DeepSIC<cit.>,
as well as general DNN detectors, including the black-box DNN detector<cit.>,<cit.>
and the ResNet detector designed based on DeepRX<cit.>.
As previously stated, DeepSIC is a classical model-based deep receiver that can be combined with meta learning for efficient online adaptation.
This design effectively tackles the challenge of limited pilot data in wireless communication scenarios,
thereby improving the generalization of deep receivers under dynamic channel conditions.
Moreover, studies on the attack methods for DeepSIC can provide comprehensive insights into deep receiver characteristics and contribute to robust designs.
Ultimately, this research aids in creating secure and efficient DL-enabled wireless communication systems.
Specifically, the mainly contributions of this paper are summarised as below.
∙
We highlight a communication system susceptible to malicious user poisoning attacks.
We then analyze the vulnerability of the online-learning-based deep receiver in the legitimate system.
From the perspective of malicious users, we further develop an attack utility model and an optimal attack utility decision problem.
∙
We effectively design a poisoning attack framework and attack perturbation generation method for online learning deep receivers.
The fundamental concept is to introduce a poisoned sample into the online training and updating phase of the deep receiver,
thereby compromising its performance over time. The poisoning attack framework has two stages.
Firstly, malicious users employ joint learning to create a surrogate model,
which can be selected from a generic DNN architecture, e.g., feedforward DNNs.
Secondly, they generate poisoning perturbation samples based on the surrogate model.
The transferability of the poisoning attack makes it work on different types of deep receivers.
∙
We numerically evaluate the effect of the proposed poisoning attack method on four channel models:
Linear synthetic channel, nonlinear synthetic channel, static channel, and COST 2100 channel.
Simulation results demonstrate that the proposed poisoning attack method impairs the deep receiver's ability to adapt to rapid changes
in dynamic channels and to learn from nonlinear effects.
Furthermore, deep receivers adapted using meta-learning are more severely damaged after poisoning.
The rest of the paper is organized as follows.
Section II introduces the system and scenario models
and the attack models of the malicious user.
Section III presents the basic theory of adversarial machine learning, focusing on evasion attacks, data poisoning attacks,
and the conceptual approaches to attack transferability.
Section IV details the proposed poisoning attack framework and the method for generating poisoning attack samples for online deep receivers.
Section V evaluates and analyzes the effectiveness of the proposed poisoning attack method.
Section VI concludes the paper.
§ SYSTEM AND SCENARIO MODELLING
In this section, we first present the communication system and scenario model under the presence of a malicious poisoning attack user in Section II-A.
Subsequently, we introduce the operational model of the legitimate receiver based on deep learning in Section II-B.
Finally, we discuss the detail of malicious user poisoning attack, focusing on pilot poisoning attacks in Section II-C.
§.§ Communication System Scenario Model
In this paper, we investigate a poisoning attack scenario model for a communication system, as illustrated in Fig. 1.
This system consists of a pair of legitimate transceivers and a malicious poisoning attack user.
Both the legitimate transmitter and receiver are equipped with multiple antennas,
We focus on a single-antenna malicious user, as this represents a cost-effective and straightforward approach to conducting attacks.
The data transmission from the transmitter to the receiver is block-based, as illustrated in Fig. 2.
The length of one block is L, including L_pilot pilot symbols
and L_info information symbols, where L_info≫ L_pilot.
As shown in Fig. 3, the legitimate receiver utilizes a DL-based architecture for signal receiving and processing.
It is trained using pilot data and employs the trained deep receiver to decode information data.
The malicious user, based on previously collected
pilot data, launches an attack by poisoning or disturbing the transmission of the pilots used by the legitimate user.
Its objective is to corrupt the online training and updating of the deep receiver,
thereby disrupting its information data reception and decoding.
Based on the above illustrated scenario,
define the transmit symbols of the legitimate transmitter as
𝐬∈ℝ^N_tx,
and the corresponding modulated symbols as
𝐱∈ℂ^N_tx.
These modulation symbols are then upconverted, amplified, transmitted through multiple antennas,
and finally arrive at the receiver.
Let 𝐇∈ℝ^N_rx× N_tx
denote the baseband equivalent channel matrix, and
𝐰∼𝒞 𝒩(0, σ^2 𝐈)
represent the additive white Gaussian noise experienced by the legitimate receiver.
The equivalent baseband signal received by the legitimate receiver,
in the absence of a malicious user poisoning attack, is
𝐲∈ℂ^N_rx,
which can be expressed as
𝐲=𝐇 𝐱+𝐰.
Following that, define a block of data symbols received by the authorised receiver as 𝐘.
The received data symbols can then be divided into two parts, denoted as
𝐘_pilot={𝐲_i}_i=1^L_pilot
and 𝐘_info={𝐲_i}_i=L_pilot+1^L.
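For concreteness, a minimal simulation sketch of this block model is given below. The antenna dimensions, QPSK modulation, block length, and noise level are illustrative assumptions, not values prescribed in this paper.

```python
import numpy as np

# Minimal sketch of the block-fading signal model y = Hx + w and the pilot/information split of Fig. 2.
rng = np.random.default_rng(0)
N_tx, N_rx, L, L_pilot, sigma2 = 4, 4, 2000, 200, 0.1

H = rng.standard_normal((N_rx, N_tx))                       # real baseband-equivalent channel matrix
S = rng.integers(0, 4, size=(L, N_tx))                      # transmit symbol indices s_i per antenna
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK constellation
X = qpsk[S]                                                 # modulated symbols x_i, shape (L, N_tx)

W = np.sqrt(sigma2 / 2) * (rng.standard_normal((L, N_rx)) + 1j * rng.standard_normal((L, N_rx)))
Y = X @ H.T + W                                             # received block, y_i = H x_i + w_i

# Pilots are used for online training, information symbols for evaluation.
Y_pilot, S_pilot = Y[:L_pilot], S[:L_pilot]
Y_info,  S_info  = Y[L_pilot:], S[L_pilot:]
```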
§.§ Deep Receiver with Online Training
In this paper,
the online learning based deep receiver is adopted
by the legitimate receiver, as illustrated in Fig. 3.
Here, define the deep receiver as a classifier f_θ
with model parameter θ. f_θ is trained using a supervised learning approach.
The data used for training is the pilot dataset, which is defined as
𝒯={𝐘_pilot, 𝐒_pilot}={𝐲_i, 𝐬_i}_i=1^L_pilot.
Model testing is done with information dataset, which is represented as
𝒱={𝐘_info, 𝐒_info}={𝐲_i, 𝐬_i}_i=L_pilot+1^L.
The supervised training loss function is the cross-entropy loss, which is represented as
ℒ(𝒯 ; θ).
P̂_θ(·|·)
denotes the likelihood probability of symbol estimation for deep receivers.
The deep receiver training objective can be described by
min_θ{ℒ(𝒯 ; θ)=-∑_(𝐲_i, 𝐬_i) ∈𝒯logP̂_θ(𝐬_i |𝐲_i)}.
The deep receiver, trained using 𝒯, is used to decode the symbols in 𝒱.
For the i-th received symbol 𝐲_i, the decoded result is expressed as 𝐬̂_i(𝐲_i ;θ).
The performance metric of the deep receiver is the symbol error rate (SER), which is defined as
SER(θ)=1/L_info∑_i=L_pilot+1^L Pr(𝐬̂_i(𝐲_i ; θ) ≠𝐬_i).
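Continuing the simulation sketch above, the following minimal example trains a small fully connected detector on the pilot part of a block via the cross-entropy objective and evaluates the empirical SER on the information part. The architecture and hyperparameters are illustrative assumptions; the receivers studied in this paper (e.g., DeepSIC) are model-based rather than this generic DNN.

```python
import numpy as np
import torch
import torch.nn as nn

def to_tensor(Y):  # stack real/imaginary parts as network input features
    return torch.tensor(np.concatenate([Y.real, Y.imag], axis=1), dtype=torch.float32)

n_const = 4        # QPSK
net = nn.Sequential(nn.Linear(2 * N_rx, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, N_tx * n_const))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x_tr, s_tr = to_tensor(Y_pilot), torch.tensor(S_pilot, dtype=torch.long)
for _ in range(300):                               # online adaptation on the pilots only
    opt.zero_grad()
    logits = net(x_tr).view(-1, N_tx, n_const)     # one softmax per transmit antenna
    loss = loss_fn(logits.reshape(-1, n_const), s_tr.reshape(-1))
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = net(to_tensor(Y_info)).view(-1, N_tx, n_const).argmax(-1).numpy()
print(np.mean(pred != S_info))                     # empirical SER on the information symbols
```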
§.§ Modus Operandi of the Malicious User
For the considered system, as mentioned earlier,
there is a malicious user which aims to corrupt the information
reception and decoding of the legitimate receiver
by poisoning the pilot transmission from the transmitter to the receiver.
This is similar to <cit.>.
In particular, the attack process of the malicious user is shown in Fig. 4 and
summarised as below; a high-level code sketch of this pipeline follows the list.
* Since the pilot pattern is fixed during transmission, the malicious user can collect pilot data during the communication process between the legitimate transmitter and receiver.
* The accumulated pilot data is employed to train a surrogate model
that is analogous to the attack target, specifically the authorised deep receiver.
* The malicious user generates the optimal perturbation based on the surrogate model and the transferability of the attack.
* The malicious user injects the channel perturbation.
The deep receiver will gradually be poisoned until the model fails when it receives the perturbed pilot data used to train the model.
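The sketch below outlines these four steps; every helper function is a hypothetical placeholder standing in for the concrete procedures developed in the following sections.

```python
# High-level sketch of the four-step attack pipeline of Fig. 4.
# eavesdrop, train_surrogate, craft_perturbation, and inject are hypothetical placeholders.
def poisoning_attack_pipeline(eavesdrop, train_surrogate, craft_perturbation, inject, n_blocks, eps):
    collected = []
    for t in range(n_blocks):
        Y_pilot, S_pilot = eavesdrop(t)               # step 1: overhear the (fixed-pattern) pilots
        collected.append((Y_pilot, S_pilot))
        surrogate = train_surrogate(collected)        # step 2: fit a surrogate deep receiver
        delta = craft_perturbation(surrogate,         # step 3: optimize the poisoning perturbation
                                   Y_pilot, S_pilot, eps)
        inject(delta)                                 # step 4: transmit the perturbation over the air
```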
In principle, a poisoning attack perpetrated
by a malicious user can be conceptualized as a perturbation injection process.
In particular, the perturbation is defined as a vector in the complex space of receiver inputs, denoted as δ∈ℂ^N_rx,
with a poisoning process represented by 𝒫(·).
For the i-th received symbol 𝐲_i, the corresponding poisoning perturbation is given by δ_i,
and the corresponding poisoned received symbol is given by 𝒫(𝐲_i)=𝐲_i+δ_i.
Therefore, within the context of poisoning attacks on deep receivers,
the primary challenge for malicious users is to design an optimal perturbation signal structure that maximizes the deep receiver's loss
on subsequent information symbols or validation sets,
which will be addressed in the following sections.
§ ADVERSARIAL MACHINE LEARNING THEORY
Before introducing the poisoning attack method for online deep receivers proposed in this paper,
we briefly discuss the theory of adversarial machine learning.
In particular, in Section III-A we provide a brief overview of the threat model,
including the concepts of the attacker's goal, the attacker's knowledge, and the attacker's capability.
The aforementioned concepts and definitions facilitate a more comprehensive understanding of the attack method for deep receivers proposed in this paper.
Subsequently, in Section III-B and Section III-C, the fundamental optimization issues associated with two distinct attack paradigms,
namely evasion attacks and data poisoning attacks, are briefly elucidated.
Furthermore, we explain the differences and connections between the two attack paradigms.
Finally, we analyze the transferability of attack samples in Section III-D,
and discuss the methods to enhance the transferability of attacks.
§.§ Threat Model: Attacker's Goal, Knowledge and Capabilities
§.§.§ Attacker's Goal
As discussed in <cit.>,<cit.>,<cit.>, the objective of the attacker in adversarial machine learning scenarios
can be categorised according to the form of security threats, including integrity attacks and availability attacks.
∙ Integrity attacks: The attacker's goal is to tamper with the integrity of the target.
Specifically, this implies that the attack samples generated by the attacker are only effective in certain parts of the target system,
while the remainder of the target system retains its original functionality.
For example, an attacker applies a specific adversarial perturbation to an image of a traffic sign
in order to circumvent the detection of the visual classifier in an automated vehicle.
However, at that moment, the visual classifier remains accurately operational for other objects.
∙ Availability attacks: In contrast, availability attacks aim to disrupt the normal functioning of the entire system,
rendering it unavailable to legitimate users.
The difference between integrity attacks and availability attacks primarily arises from
the different focuses of the optimization objectives inherent to the attack models constructed.
In this paper, the proposed attack method is an availability attack,
namely the destruction of the usability of all functions of a deep receiver.
§.§.§ Attacker's Knowledge
The knowledge possessed by the attacker is indicative of the extent to which they are aware of the attack target.
This knowledge encompasses several key dimensions, as outlined in <cit.>,<cit.>.
These dimensions include:
(i) The data utilized for training purposes.
(ii) The architectural design of the target model, the learning algorithms employed during training,
along with their associated parameters and training hyperparameters.
(iii) The data comprising the test set.
Based on the combination of these dimensions, two main attack scenarios can be further defined as below:
∙ White-box attacks:
We consider that the attacker has complete knowledge about the attack target.
In this context, the attacker will adapt the nature of the attack
to align with the specific characteristics of the target to achieve the most effective and impactful outcome.
∙ Black-box attacks:
Black box attacks can be further categorized into two main types:
Transfer-based attacks and query attacks. In transfer-based attacks,
the attacker lacks or only has partial knowledge of the internal workings of the target model.
The knowledge about the target encompasses the aforementioned dimensions (i), (ii), and (iii).
In this setting, the attacker is limited to relying on the data they have collected to construct a surrogate model that approximates the target model.
This attack is then transferred to the target model by launching a white-box attack on the surrogate model.
In a black-box query attack, the attacker can query the target's output or confidence level to optimize the attack.
Currently, the majority of black-box attacks exploit the transferability for attack purposes
<cit.>,<cit.>.
The discussion regarding the attacker's knowledge aims to define the scenarios in which attacks are deployed,
particularly in more practical black-box attacks, which are the focus of this paper.
Moreover, within the framework of black-box attacks, the transferability of attack samples holds particular significance.
This will be addressed in greater detail in Section III-D of this paper.
§.§.§ Attacker's Capability
The attacker's capabilities determine the methods used to influence the attack target and the specific constraints for data manipulation.
To avoid potential defense filtering mechanisms, the attacker must impose constraints on the manipulated data.
It is standard practice to impose an upper bound ϵ on the perturbation δ under the p-norm.
From the perspective of attack methods<cit.>,
if the attacker can manipulate data from both the training and testing phases simultaneously,
which considered as causal attacks and is called data poisoning attacks.
If the attacker can only manipulate the data during the testing phase, this attack is considered exploratory and is called evasion attacks.
Their difference lies in solving optimization objectives and implementation methods.
This paper focuses on adversarial poisoning attacks in black-box scenarios,
which can be seen as a synthesis of evasion attacks and data poisoning attacks.
The specific optimization goals and implementation forms of evasion attacks and data poisoning attacks are described in the following
Section III-B and Section III-C, respectively.
§.§ Evasion Attacks
The evasion attacks are implemented by constructing adversarial samples whose objective is to induce a catastrophic misclassification in the testing phase of the DNN.
This is achieved by identifying the blind spots of the DNN and introducing carefully crafted tiny perturbations to the input, as illustrated in Fig. 5(a).
The construction of adversarial samples arises from the observation that the deeper features and outputs of a classifier can undergo notable alterations
when the inputs undergo slight directional changes. Gradient-based optimizers <cit.>,<cit.> are capable of readily identifying these directions,
which exert an influence on the DNN.
Accordingly, the methodology for the construction of adversarial samples can be expressed as follows:
In the case of a given target model and inputs,
the gradient of the objective function is employed to direct the application of minor perturbations to the input data to maximise the loss incurred by the input.
This process can therefore be conceptualised as a single-layer optimization problem.
Specifically, 𝐲 denotes the model input, 𝐬 denotes the labels corresponding
to 𝐲, and 𝐲^' denotes the adversarial samples after the adversarial perturbation is added.
The adversarial perturbation δ=𝐲^'-𝐲 is bounded in p-norm by ϵ>0,
and the optimal adversarial sample obtained after optimization is 𝐲^*.
θ denotes the parameters of the classifier f_θ. For the classification task,
the cross-entropy loss function ℒ(𝐲^', 𝐬 ; f_θ) is defined.
Thus, finding the optimal adversarial samples constitutes the following optimization problem
𝐲^* ∈arg max_𝐲^'ℒ(𝐲^', 𝐬 ; θ), such that ‖δ‖_p=‖𝐲^'-𝐲‖_p ≤ϵ.
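As an illustration, a projected-gradient sketch of this optimization for a generic differentiable classifier is given below; the ℓ_∞ budget, step size, and iteration count are illustrative choices, not prescribed by the formulation above.

```python
import torch

# A minimal projected-gradient sketch of the evasion-attack problem under an l_inf budget eps.
# net is any differentiable classifier, loss_fn its (cross-entropy) loss, (y, s) an input/label pair.
def evasion_attack(net, loss_fn, y, s, eps=0.1, alpha=0.02, steps=20):
    y_adv = y.clone().detach()
    for _ in range(steps):
        y_adv.requires_grad_(True)
        loss = loss_fn(net(y_adv), s)                 # loss L(y', s; theta) to be maximized
        grad, = torch.autograd.grad(loss, y_adv)
        with torch.no_grad():
            y_adv = y_adv + alpha * grad.sign()       # ascend the loss
            y_adv = y + (y_adv - y).clamp(-eps, eps)  # project back onto the l_inf ball around y
    return y_adv.detach()
```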
§.§ Data Poisoning Attacks
The optimization goal of the data poisoning attack is to poison the target with poisoned training data to degrade its test performance,
which is shown in Fig. 5(b).
Similar to the production of adversarial samples, in the production of poisoning samples,
the perturbation δ applied to each sample in the training set is bounded in p-norm by ϵ>0.
Specifically, the test set 𝒱 and the training set 𝒯 are defined,
as well as the poisoned training set 𝒫(𝒯) after applying the perturbation to the data set.
Furthermore, θ^* denotes the optimal poisoning parameter of the target classifier with respect to the parameter θ.
Thus, the poisoning attack can be modelled as a bilevel optimization problem as follows:
max_𝒫ℒ(𝒱 ; θ^*), such that θ^* ∈arg min_θℒ(𝒫(𝒯) ; θ).
Herein, firstly, the inner layer optimization involves the standard model training process.
In this process, the attacker uses the poisoned data to train the target model by minimizing the empirical loss ℒ(𝒫(𝒯); θ)
to obtain the optimal poisoned parameter θ^*. Secondly, based on the obtained θ^*,
the attacker maximizes the loss ℒ(𝒱; θ^*) on the test set.
Note that solving this optimization problem directly is often very difficult,
and it is more common practice to approximate this maximization process through gradient optimization of δ.
§.§.§ Adversarial Samples as Poisoning Attacks
As previously stated in Section III-B and Section III-C,
although both adversarial sample construction for evasion attacks and data poisoning attack sample construction
can be attributed to the gradient-based optimization framework,
the goals achieved by constructing perturbations for these two types of attacks are different.
Recently, however, researchers have discovered that adversarial samples are also highly effective for poisoning DNNs,
a phenomenon known as Adversarial Poisoning <cit.>,<cit.>.
In this case, the poisoning attack optimization problem (<ref>) can also be uniformly expressed
in the form of the adversarial sample attack optimization problem (<ref>) in Section III-B.
<cit.> provides a method for creating adversarial poisoning samples to obtain optimal poisoning results.
Additionally, <cit.> shows that adversarial poisoning attacks can also cause serious harm to the meta-learner in a white-box attack setting.
Compared with the bilevel optimization process of the poisoning attack method in (<ref>),
using adversarial samples as poisoning attack samples is more convenient and practically feasible.
This is also the basis for the poisoning attack method proposed in this paper.
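In the deep-receiver setting considered here, this idea amounts to crafting the perturbations on a surrogate detector and adding them to the pilots that the legitimate receiver will later train on. A minimal sketch with a single ℓ_2-normalized gradient step is given below; the surrogate model, loss, and budget are assumed inputs, and the step rule is one simple choice rather than the method developed later in this paper.

```python
import torch

# Crafting adversarial poisoning samples for the pilot set on a surrogate model f_phi:
# each received pilot y_i is shifted by eps * grad / ||grad||_2, so that *training* on the
# poisoned pilots (rather than testing on them) degrades the online deep receiver.
def poison_pilots(surrogate, loss_fn, Y_pilot, S_pilot, eps):
    Y = Y_pilot.clone().detach().requires_grad_(True)
    loss = loss_fn(surrogate(Y), S_pilot)
    grad, = torch.autograd.grad(loss, Y)
    norms = grad.flatten(1).norm(dim=1).view(-1, *[1] * (grad.dim() - 1)) + 1e-12
    delta = eps * grad / norms                        # per-pilot l2-normalized ascent direction
    return (Y_pilot + delta).detach()                 # poisoned pilots P(Y_pilot) = Y_pilot + delta
```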
§.§ Transferability of Attacks
§.§.§ Why Can Attacks Be Transferred?
The concept of attack transferability means that an attack generated for one model can be used against another in a black-box attack scenario.
This phenomenon has been observed and demonstrated in various studies, as referenced in <cit.>.
Given that the approach outlined in this paper can be classified as a black-box attack,
it is essential that the attack samples demonstrate the capacity for transferability.
<cit.> presented a theoretical upper bound on the loss increase that arises when an attack is transferred in the black-box setting.
Define f_φ as the surrogate model with parameter φ,
f_θ as the target model with parameter θ, and ℒ(𝐲, 𝐬, φ) as the loss of the input 𝐲
in f_φ against the label 𝐬.
In consideration of the transferability of evasion attacks (poisoning attacks also take the same form),
the optimal adversarial sample, denoted as 𝐲^* is obtained by solving (<ref>eq:4) on f_φ,
with the corresponding optimal perturbation, denoted as δ^*.
To illustrate, consider the ℓ_2 ball (p=2) of radius ϵ.
The optimal adversarial perturbation obtained on f_φ can be expressed in (<ref>) as follows:
δ^*=ϵ∇_𝐲ℒ(𝐲, 𝐬, φ)/‖∇_𝐲ℒ(𝐲, 𝐬, φ)‖_2.
Let ℒ(𝐲, 𝐬, θ) denote the loss of 𝐲 on the target model.
Define Δℒ as the increase in loss of the input 𝐲^* compared to the input 𝐲 on the target model.
The upper bound of Δℒ on the target model can be described by
Δℒ=ϵ∇_𝐲ℒ(𝐲, 𝐬, φ)^⊤∇_𝐲ℒ(𝐲, 𝐬, θ)/‖∇_𝐲ℒ(𝐲, 𝐬, φ)‖_2 ≤ϵ‖∇_𝐲ℒ(𝐲, 𝐬, θ)‖_2.
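This bound follows from a first-order sketch (assuming the loss is differentiable and the perturbation is small):
ℒ(𝐲+δ^*, 𝐬, θ) ≈ ℒ(𝐲, 𝐬, θ) + δ^*⊤∇_𝐲ℒ(𝐲, 𝐬, θ),
so that Δℒ≈δ^*⊤∇_𝐲ℒ(𝐲, 𝐬, θ); substituting the expression for δ^* given above and applying the Cauchy–Schwarz inequality yields the right-hand side of the inequality.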
The left-hand side of the inequality in (<ref>) represents the loss in the black-box attack scenario,
while the right-hand side represents the loss in the white-box attack scenario.
In the white-box attack scenario, i.e., f_φ = f_θ, inequalities in (<ref>) becomes an equality.
The attack achieves its upper bound and has optimal attack effect.
Therefore, the effectiveness of the attack when the attack sample is transferred from the surrogate model to the target model
is influenced by two factors: The intrinsic adversarial vulnerability of the target model (right-hand side of the inequality in (<ref>))
and the complexity of the surrogate model used to optimize the attack (left side of the inequality in (<ref>)).
The right-hand side of the inequality in (<ref>) shows that a more vulnerable target model has a larger upper bound on the loss,
represented by ϵ||∇_𝐲ℒ(𝐲, 𝐬, θ)||_2.
The intrinsic complexity of the model measures the learning algorithm's ability to fit the training data.
More complex models, like those without regularization or those prone to overfitting,
have more complex parameter spaces and rugged loss landscapes, making them sensitive to input perturbations and susceptible to attacks.
For robust models with smaller upper loss bounds, a successful attack requires a higher perturbation limit,
reducing the likelihood of bypassing the system's monitoring.
This demonstrates the impact of the intrinsic adversarial vulnerability of the target model on transferability of attacks.
The complexity of the surrogate model used to optimize the attack depends on two main factors:
The gradient alignment between the surrogate and the target,
and the variance magnitude of the surrogate model's loss function.
These factors are particularly relevant for the left-hand side of the inequality in (<ref>).
When the surrogate has better gradient alignment with the target,
such as the gradient cosine similarity used in <cit.>,
attack samples from the surrogate model exhibit better transferability.
This is reflected in ∇_𝐲ℒ(𝐲, 𝐬, φ)^⊤∇_𝐲ℒ(𝐲, 𝐬, θ),
as shown on the left-hand side of the inequality in (<ref>).
Additionally, a surrogate model with low variance leads to a more stable optimization process,
producing attack samples effective across different target models.
In contrast, a large variance leads to an unstable optimization process, producing attack samples that may not match the target model and therefore fail.
Intuitively, on the left-hand side of the inequality in (<ref>), a high variance of the loss function increases the corresponding denominator term
||∇_𝐲ℒ(𝐲, 𝐬, φ)||_2,
which reduces the loss increase that can actually be transferred.
§.§.§ Related Work of Transfer-based Attacks
In summary, when designing transfer-based methods in black-box attacks scenario, one can approach from two perspectives:
The inherent adversarial vulnerability of the target model and the complexity of optimizing the surrogate model.
∙ For the former, certain assumptions about the target model are typically necessary to model the attack objective.
For example, one might assume the existence of unstable common vulnerabilities in the integrated model as discussed in <cit.>.
In this scenario, more robust constraints can be applied to the optimization target,
or an optimization algorithm with strong generalization ability <cit.>,<cit.> can be used to optimize the attack samples,
making the transfer attack more effective.
However, the intrinsic adversarial vulnerability of an unknown target is often difficult to identify directly.
∙
For the latter, typical approaches include using integration-based surrogate model <cit.>,
self-integration method based on diverse regularizations <cit.>, and data augmentation <cit.>.
<cit.> mentioned that alternating different training paradigms (e.g., unsupervised and self-supervised models) as surrogate models
when generating poisoning attack samples can improve the transferability.
These methods aim to develop more generalized and robust surrogate models,
which are then optimized to obtain more effective and transferable attack samples.
§ A POISONING ATTACK FRAMEWORK FOR ONLINE DEEP RECEIVERS
As mentioned earlier, in the scenarios discussed in this paper, legitimate deep receivers adapt to fast-varying wireless channels by utilizing pilot and updating model parameters using online learning methods.
However, this online update mechanism, aimed at adapting to local channel variations, faces the threat of sample input poisoning.
In the attack framework of the malicious user, the legitimate online deep receiver becomes the target model,
which is updated online to adapt to local variations in the channel for better local performance.
However, this process is inherently not robust and the local adaptation can result in overfitting <cit.>.
From this point onwards, the attack target of this paper is similar to <cit.>,
which causes catastrophic forgetting of the target by constructing poisoned samples.
According to the analysis of attack transferability in Section III-D,
this overfitting is the source of the target model's intrinsic adversarial vulnerability.
This implies that a malicious user can effectively attack the target model by optimizing a surrogate model and generating corresponding adversarial perturbations.
Therefore, this section proposes a poisoning attack framework targeting online deep receivers.
The core idea is to poison the model training and updating phases, resulting in a poisoned model after a certain period, leading to performance degradation.
Specifically, based on the behavioral patterns of malicious users described in Section II-C,
the poisoning attack framework and the method for generating the poisoning attack samples can be refined into the following three steps, as illustrated in Fig. 6.
* The malicious user collects communication data (e.g., pilot) from the wireless channel and produces a joint learning dataset.
This dataset is used to train a surrogate model for attacking the target (the legitimate deep receiver).
* The malicious user solves the optimization problem (<ref>) to generate an adversarial perturbation based on the surrogate model.
* The malicious user injects the adversarial perturbation onto the channel, causing the deep receiver to receive the poisoned pilot.
Consequently, the deep receiver undergoes online learning with the perturbed pilot, resulting in the model being poisoned and deactivated.
The specific details of Steps 1 and 2 are detailed in the following Sections IV-A to IV-C.
§.§ Surrogate Model Selection
As discussed earlier in Section III-D regarding attack transferability,
black-box attacks require the malicious user to consider the degree of gradient alignment between the surrogate model and the target model to ensure the attack's generalization.
To achieve this, it is important to avoid using surrogate models that are overly specialized for specific problems.
This strategy ensures consistent compatibility with various types of deep receivers,
thereby enhancing the attack's overall effectiveness.
Consequently, this paper adopts a generic DNN architecture, e.g., feedforward neural networks, as the surrogate model for the attack, as illustrated in Step 1 of Fig. 6.
§.§ Joint Learning for Training of Surrogate Models
From the perspective of a malicious user,
it is essential to select a suitable surrogate model architecture similar to the target model
and to effectively train and optimize the surrogate model.
In the proposed attack framework, joint learning is used to train the surrogate model.
This approach utilizes data collected under various channel conditions to train the DNN,
enabling it to adapt to dynamic channels <cit.>.
It is important to note that,
unlike legitimate communicating parties that have to use online learning methods to adapt to time-varying wireless channels,
the malicious user has preparation time to eavesdrop communications and collect large samples to train a robust model.
Furthermore, the use of joint learning, instead of channel state information as input or online learning <cit.>,
addresses the issue of attack generality at the data level, avoiding over-specialized surrogate models in black-box attack transfers,
as previously discussed in Section IV-A.
Finally, in terms of training effectiveness, joint learning meets the data volume requirements of deep learning,
making the surrogate model more robust compared to the target model using online learning.
This results in attack patterns with more stable and effective attack results.
Conversely, the robust learning process of the target model mitigates the impact of poisoning.
To demonstrate this, we will adjust the pilot size in Section V-E.
Specifically, as illustrated in Step 2 of Fig. 6, the malicious user deploys joint learning to train a DNN (i.e., the surrogate model)
utilizing a substantial quantity of communication data (e.g., pilot) amassed from authorised transceivers.
The objective is for this DNN to learn a mapping that is applicable to the majority of channel states,
reflecting the input-to-output mapping relationship of the authorised deep receivers (i.e., the target model).
The data that is jointly learned comprises two categories:
Communication data under disparate channel distribution conditions and communication data under varying signal-to-noise ratio (SNR) conditions within the same channel.
Finally, to produce effective adversarial poisoning samples,
the malicious user needs to consider suitable attack generation methods based on the surrogate model, as detailed in Section IV-C.
§.§ Adversarial Poisoning Attack Samples Generation
Once the optimized surrogate model has been obtained, the malicious user generates the necessary poisoning attack samples
in order to execute the attack.
As illustrated in Step 2 of Fig. 6, we employ the projected gradient descent (PGD) algorithm <cit.>,
to generate the adversarial poisoning attack perturbations discussed in Section III-C.
The whole algorithmic flow of generating perturbations is shown in Algorithm 1,
and summarised as below.
* Obtain the pilot dataset 𝒯_pilot≡{𝐲, 𝐬}.
Then, within an interval [-ϵ, +ϵ] defined by the upper bound of the p-norm constraints,
use uniformly distributed sampling to generate a randomly initialized perturbation vector δ.
* Superimpose the δ on the pilot data.
The perturbed data should then be fed into the surrogate model f_φ to calculate the attack loss.
* δ is updated along the gradient direction of the loss with step size γ,
while 𝐲 + γδ is kept within the specified upper bound I_max and lower bound I_min.
* Steps 2 and 3 are repeated iteratively for Q rounds to obtain the poisoned pilot data with the optimal attack perturbation.
Once the iteration is complete, the optimal perturbation will be applied to the current block's pilot to generate the poisoned pilot dataset, denoted as 𝒫(𝒯).
This dataset will then be received by the deep receiver and poisoned during the training and updating of the model, as illustrated in Step 3 of Fig. 6.
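For concreteness, the following is a minimal PyTorch-style sketch of Algorithm 1 under the stated settings (ℓ_2 budget ϵ, step size γ, Q iterations). The names pgd_poison, surrogate, and loss_fn are illustrative placeholders rather than the authors' implementation, and the received pilot 𝐲 is assumed to be represented as a real-valued tensor (real and imaginary parts stacked).

```python
import torch

def pgd_poison(surrogate, loss_fn, y, s, eps=0.3, gamma=0.01, Q=250):
    """Sketch of Algorithm 1: PGD generation of the poisoning perturbation."""
    i_max = float(y.abs().max())          # upper bound I_max on symbol magnitude
    i_min = -i_max                        # lower bound I_min = -I_max
    # Step 1: random initialization of delta within [-eps, +eps].
    delta = torch.empty_like(y).uniform_(-eps, eps)

    for _ in range(Q):                    # Steps 2-3 repeated for Q rounds
        delta.requires_grad_(True)
        loss = loss_fn(surrogate(y + delta), s)        # attack loss on the surrogate
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + gamma * grad               # ascend the surrogate loss
            norm = delta.norm(p=2)
            if norm > eps:                             # project onto the L2 eps-ball
                delta = delta * (eps / norm)
            delta = (y + delta).clamp(i_min, i_max) - y  # keep y + delta in bounds
    return (y + delta).detach()                        # poisoned pilot P(T)
```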
§ NUMERICAL EVALUATIONS
In this section, we conduct a numerical evaluation of the proposed poisoning attack method aimed at disrupting the online adaptation process of the deep receiver.
We first provide an overview of the parameter settings employed in
the simulation experiments in Sections V-A to V-D.
These parameters include the channel models, the deep receivers, the online training methods and the poisoning attack method, which are all evaluated in this paper.
Subsequently, the simulation results for our evaluated deep receivers are presented under the following conditions:
A linear time-varying synthetic channel, a nonlinear time-varying synthetic channel, a linear static synthetic channel, and the time-varying COST 2100 channel (Section V-E).
Finally, the experimental results of the proposed attack method under the four channel settings are summarised and discussed in Section V-F.
§.§ Evaluated Channel Models
The deep receiver operates on a discrete memoryless MIMO channel.
The number of transmitting and receiving antennas is set to N_t x=N_r x=4.
The experimental evaluation channel model comprises synthetic channels <cit.>
and COST 2100 channels <cit.>.
Fig. 7 illustrates the four channel tapping coefficients for a randomly selected user in a multiuser system over 100 blocks of data transmission.
In the context of linear channels, the input-output relationship is expressed by the (<ref>) given in Section II-A.
In the context of a nonlinear channel model <cit.>, the input-output relationship is represented by
𝐲=tanh (k(𝐇 𝐱+𝐰)),
where the tanh (·) function is used to simulate the non-linear variations in the transceiver process
due to non-ideal hardware, with the parameter k=0.5.
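A simple sketch of how such synthetic channel outputs can be generated is given below; it assumes the linear model 𝐲 = 𝐇𝐱 + 𝐰 defined in Section II-A, and the convention of setting the noise variance from the target SNR is a choice of this sketch, not of the paper.

```python
import numpy as np

def apply_channel(H, x, snr_db, k=0.5, nonlinear=False, rng=None):
    """Synthetic MIMO channel: y = Hx + w (linear) or y = tanh(k(Hx + w))."""
    rng = np.random.default_rng() if rng is None else rng
    signal = H @ x                                           # noiseless N_rx output
    noise_power = np.mean(np.abs(signal) ** 2) / 10 ** (snr_db / 10)
    w = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                    + 1j * rng.standard_normal(signal.shape))
    y = signal + w
    return np.tanh(k * y) if nonlinear else y

# Example: 4x4 MIMO with QPSK symbols from C = {(+-1/sqrt(2), +-1/sqrt(2))}^4
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x = (rng.choice([-1, 1], 4) + 1j * rng.choice([-1, 1], 4)) / np.sqrt(2)
y = apply_channel(H, x, snr_db=14, nonlinear=True, rng=rng)
```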
§.§ Evaluated Deep Receivers
We consider three deep receiver architectures in the evaluated network architecture: Namely,
the model-based deep receiver DeepSIC <cit.>, the black-box DNN detector <cit.>,<cit.>,
and the ResNet detector designed based on DeepRX <cit.>. The relevant details are as follows:
∙ DeepSIC:
SIC estimates and cancels the interfering signals at each iteration to revise previous estimates,
while DeepSIC unfolds the SIC iterative process and replaces it with a sub-neural network to improve the performance of each iteration.
The function of each sub-neural network is to enhance the reliability of the present estimation
by utilizing the received symbols and the preceding iteration's output confidence.
This design enables DeepSIC to attain high reliability even with a limited amount of training data <cit.>.
Furthermore, DeepSIC presented in this paper incorporates 3 iterations, resulting in a total of 3 × N_t x sub-networks.
In this configuration, each sub-network is a two-layer fully connected layer network.
The first layer has a dimension of (N_r x + N_t x - 1) × 64,
and the second layer has a dimension of 64 × |S|, where |S| is the size of the set of symbols to be transmitted.
For example, |S| = 4 when using QPSK transmission.
The activation function employed in the initial layer of each subnetwork is ReLU,
while the second layer utilizes softmax classification.
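As an illustration only (not the authors' code), a single DeepSIC sub-network with the dimensions quoted above could be sketched as follows; the way the received symbols and the other users' previous soft estimates are concatenated is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class DeepSICSubNet(nn.Module):
    """One of the 3 x N_tx DeepSIC sub-networks: two fully connected layers,
    (N_rx + N_tx - 1) x 64 with ReLU, followed by 64 x |S| with softmax."""
    def __init__(self, n_rx=4, n_tx=4, n_symbols=4, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(n_rx + n_tx - 1, hidden)
        self.fc2 = nn.Linear(hidden, n_symbols)

    def forward(self, y, other_users_probs):
        # Concatenate the received symbols with the other users' confidences
        z = torch.cat([y, other_users_probs], dim=-1)
        return torch.softmax(self.fc2(torch.relu(self.fc1(z))), dim=-1)
```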
∙ Black-box DNN Detector:
The black-box DNN architecture comprises four fully-connected layers and a softmax classification header.
The input and output dimensions of the successive layers are N_rx × 60, 60 × 60, 60 × 60, and 60 × |S|^N_tx, respectively.
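A sketch of this architecture (also used, with the same layer dimensions, as the surrogate model of Section IV-A) is given below; the choice of ReLU activations between the hidden layers is an assumption, as the text only specifies the layer dimensions and the softmax head.

```python
import torch.nn as nn

class BlackBoxDNNDetector(nn.Module):
    """Four fully connected layers (N_rx x 60, 60 x 60, 60 x 60, 60 x |S|^N_tx)
    followed by a softmax classification head over the joint symbol hypotheses."""
    def __init__(self, n_rx=4, n_tx=4, n_symbols=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_rx, 60), nn.ReLU(),
            nn.Linear(60, 60), nn.ReLU(),
            nn.Linear(60, 60), nn.ReLU(),
            nn.Linear(60, n_symbols ** n_tx),
            nn.Softmax(dim=-1),
        )

    def forward(self, y):
        return self.net(y)
```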
∙ ResNet Detector:
The ResNet detector employed in this paper consists of 10 layers of residual blocks,
with each block comprising two convolutional layers that use 3×3 kernels,
one-pixel padding on both sides, and no bias terms, with a ReLU activation function in between. Each convolutional layer is followed by 2D batch normalization.
§.§ Online Training Methods
The objective of this paper is to present an attack strategy that targets the online adaptation of deep receivers.
Consequently, the focus is on the deep receiver's online training, with the parameter configurations illustrated in Table I.
The parameter settings for online training come from <cit.>.
Upon receipt of a data block, the deep receiver is only able to utilize a subset of the data block for training,
namely the pilot symbols, specifically L_pilot = 200.
The deep receiver then predicts the subsequent L_info = 50000 symbols.
In the experiment, a total of 100 data blocks were transmitted,
with the transmitted data being QPSK modulated, i.e., the user-transmitted symbols 𝐬 were mapped to the set
C={( ±1/√(2), ±1/√(2))}^4.
Furthermore, the deep receivers are trained using the Adam optimizer<cit.>.
The training epochs are 300.
The initial learning rate η is set to 5 × 10^-3 for ResNet detector, black-box DNN detector and DeepSIC.
Additionally, online training is performed for different architectural receivers,
which were implemented in the following two cases:
∙ Online learning:
Based on the adaptation to the pilot data from the previous data block, the deep receiver trains
and updates the current model using the limited pilot symbols received in the current data block.
∙ Online meta-learning:
According to <cit.>, <cit.>, the meta-learning framework is employed to facilitate adaptation to the channel.
In particular, the pilot data from 5 data blocks is accumulated each time, after which meta-learning is
performed on the accumulated data in order to obtain the meta-learning weights of the deep receiver.
Subsequently, the aforementioned meta-learning weights are employed in an online learning process involving the pilot data of the current block.
Unless otherwise stated,
the deep receivers are trained using online learning,
including black-box DNN detector, ResNet detector, and DeepSIC.
Only online meta-learning DeepSIC receiver,
i.e., Meta-DeepSIC, is trained using online meta-learning methods.
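The per-block online learning protocol summarized above can be sketched as follows (placeholder names; in the poisoned scenario the pilot of each block is simply replaced by its perturbed version):

```python
import torch

def online_adaptation(receiver, blocks, lr=5e-3, epochs=300):
    """Per-block online learning: adapt on the L_pilot pilot symbols of the
    current block, then detect its L_info information symbols."""
    criterion = torch.nn.CrossEntropyLoss()
    detections = []
    for pilot_y, pilot_s, info_y in blocks:       # 100 blocks in the experiments
        optimizer = torch.optim.Adam(receiver.parameters(), lr=lr)
        for _ in range(epochs):                   # 300 training epochs per block
            optimizer.zero_grad()
            loss = criterion(receiver(pilot_y), pilot_s)
            loss.backward()
            optimizer.step()
        with torch.no_grad():                     # predict the payload symbols
            detections.append(receiver(info_y).argmax(dim=-1))
    return detections
```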
§.§ Attacker's Configuration
The attack samples are designed based on the gradient of the surrogate model,
and joint learning is performed on the surrogate model based on the collected pilot dataset.
In the joint learning configuration, the black-box DNN detector, presented in Section V-B, is employed as the surrogate model.
This model has three times the number of parameters compared to a single sub-network of DeepSIC.
The parameter configurations for joint learning are referenced in <cit.>.
In particular, a linear time-varying synthetic channel model is employed to generate the channel data for joint learning.
The training data is generated under the condition of an SNR of 2 dB to 16 dB with an interval of 2 dB,
and the SNR used for generation is represented as SNR_sur.
In this manner, L_sur=5000 training pilot symbols are generated at each SNR value, and the surrogate model is trained in accordance with this procedure.
Moreover, the poisoning samples are optimized iteratively using the PGD algorithm.
In this context, the adversarial poisoning attack samples are created with reference to the settings in <cit.>,
under which the generated poisoning samples are the most toxic.
The iteration step size is set to γ = 0.01,
the iterations of PGD is Q = 250, and the upper bound ϵ of the perturbation under the constrained norm p = 2 is set to 0.3.
The maximum and minimum values of the received symbol magnitude are denoted as I_max and I_min, respectively,
and I_max = max {𝐲} = -I_min.
The attack samples are presented in Fig. 8 and Fig. 9.
The two subfigures in Fig. 8 show the real and imaginary parts of the original and poisoned received symbols, respectively,
while Fig. 9 presents the original and poisoned received symbols in a constellation diagram.
§.§ Numerical Results under Four Channel Models
This section presents the results of an experimental investigation into the efficacy of the proposed poisoning attack method in the context of various channel models.
Specifically, the effectiveness of the method is evaluated under the following scenarios:
Linear time-varying synthetic channels, nonlinear time-varying synthetic channels, linear static synthetic channels, and time-varying COST 2100 channels.
For the black-box DNN detector, a white-box poisoning attack is launched, since its architecture is the same as that of the surrogate model.
For the ResNet detector, we implement transfer poisoning attacks in the black-box scenario.
As the DeepSIC comprises multiple sub-networks, it is not feasible to utilize its gradient information directly in the design of poisoning attack samples.
It is therefore anticipated that the attack perturbations designed on the surrogate model will transfer the poisoning effect to DeepSIC.
Consequently, a transfer-based poisoning attack is executed on DeepSIC.
§.§.§ Linear Time-varying Synthesis Channel Results
Firstly, Fig. 10 evaluates the effectiveness of the proposed poisoning attack method under a linear time-varying synthetic channel.
Fig. 10(a) illustrates the results of 100 data transmission blocks,
with the cumulative SER calculated on a block-wise basis over five repetitions at an SNR of 14 dB.
It can be observed that the Meta-DeepSIC, which combines the model-based method and the meta-learning,
is more effective in capturing the time-varying characteristics of the wireless channel in the absence of a poisoning attack.
Consequently, it achieves a superior SER performance in comparison to the black-box DNN detector, ResNet detector and the DeepSIC.
However, the Meta-DeepSIC is also more susceptible to poisoning attacks than the DeepSIC.
This finding aligns with the conclusions presented in <cit.>.
Furthermore, the proposed black box poisoning attack significantly impacts the ResNet detector.
Since this method targets the black-box DNN detector through a white-box attack approach, the black-box DNN detector performs the worst after poisoning.
Fig. 10(b) illustrates the results of the 100-block data transmission experiment,
with the SER averaged across the same five repetitions under varying SNR conditions.
It can be observed that the proposed poisoning attack is effective for ResNet detector, black-box DNN detector and DeepSIC at different SNRs.
In particular, the SER performance of the poisoned DeepSIC exhibits a degradation of approximately 0.67 dB compared to that of the normal DeepSIC at an SNR of 10 dB.
Similarly, the SER performance of the poisoned Meta-DeepSIC is seen to deteriorate by approximately 0.91 dB in comparison to that of the normal Meta-DeepSIC.
Additionally, it is observed that the SER of the black-box DNN detector exhibits an increase of approximately 0.5 dB,
and the SER of the ResNet detector exhibits an increase of approximately 0.68 dB.
As the SNR increases, the impact of the attack becomes increasingly pronounced.
The deterioration in the SER performance of the poisoned DeepSIC and Meta-DeepSIC reaches 1.37 dB and 2.0 dB, respectively, at SNR = 15 dB.
§.§.§ Non-linear Time-varying Synthetic Channels Results
Secondly, Fig. 11 evaluates the effectiveness of the proposed poisoning attack methodology under a nonlinear time-varying synthetic channel.
The experimental configuration is consistent with that depicted in Fig. 10.
It can be observed that the proposed poisoning attack method results in a more pronounced poisoning effect in the nonlinear time-varying synthetic channel
environment than in the linear time-varying synthetic channel.
In particular, the proposed poisoning attack method has a detrimental impact on the SER performance of the ResNet detector, the black-box DNN detector,
DeepSIC, and Meta-DeepSIC, with an SER performance deterioration of 0.93 dB, 0.53 dB, 1.35 dB, and 1.42 dB, respectively, at SNR = 10 dB.
Similarly, as the SNR increases, the attack effect becomes more pronounced,
with the SER performance deterioration of the four receiver architectures reaching 1.13 dB, 0.71 dB, 2.74 dB, and 3.09 dB, respectively, at SNR = 15 dB.
§.§.§ Linear Static Synthetic Channels Results
Fig. 12 presents an evaluation of the efficacy of
the proposed poisoning attack method in the context of a linear static synthetic channel.
Similarly, the experimental scenario and parameter configuration are consistent with those depicted in Fig. 10.
It can be observed that the performance of DeepSIC and Meta-DeepSIC is essentially indistinguishable in this case.
Furthermore, the lack of diverse data renders the black-box DNN detector unsuitable for adapting to the channel environment.
Meanwhile, compared to the black-box DNN detector,
the ResNet detector, which performed better in the first two channel environments, performs the worst in the static channel model.
Its poisoning results therefore carry little significance as a reference.
In particular, in this scenario with an SNR of 10 dB, the proposed poisoning attack method results in an SER performance
degradation of 0.08 dB, 0.71 dB, and 0.72 dB for the black-box architecture receiver, the DeepSIC receiver, and the Meta-DeepSIC receiver, respectively.
As with the preceding scenario, the impact of the attack is more pronounced as the SNR increases.
At an SNR of 15 dB, the SER performance degradation for the three receiver architectures is 0.06 dB, 1.05 dB, and 1.07 dB, respectively.
§.§.§ Time-varying COST 2100 Channels Results
The efficacy of the proposed poisoning attack method is subsequently assessed under the time-varying COST 2100 channel,
with the experimental results illustrated in Fig. 13. The experimental configuration is identical to that of the linear time-varying synthetic channel.
Concurrently, the surrogate model is trained on the joint channel data based on the time-varying linear channel model.
As illustrated in the Fig. 13,
the proposed poisoning attack method is not only effective for receivers with different architectural depths and different online training scenario settings,
but also adaptable to different channel environments.
In this channel environment and with an SNR of 10 dB, the poisoning attack results in a deterioration of SER performance for the ResNet detector,
the black-box DNN detector, DeepSIC, and Meta-DeepSIC of up to 0.12 dB, 0.25 dB, 0.61 dB, and 0.78 dB, respectively.
At an SNR of 15 dB, the deterioration of SER performance for the four receiver architectures is up to 0.04 dB, 0.23 dB, 2.11 dB, and 1.36 dB, respectively.
§.§.§ The Impact of Pilot Size on Poisoning
The online learning of deep receivers is influenced by the training data, i.e., the pilot size L_pilot.
Larger pilot sizes generally enhance performance, leading to more stable learning and reduced overfitting.
To explore the impact of overfitting on the poisoning effect,
Fig. 14 examines the influence of pilot size in a linear time-varying synthetic channel at an SNR of 14 dB.
The results show that as L_pilot increases,
the effectiveness of poisoning attacks diminishes.
Specifically, when comparing L_pilot=200 with L_pilot=1000,
the poisoning effects on the ResNet detector, black-box DNN detector, DeepSIC, and Meta-DeepSIC are reduced by 0.34 dB, 0.74 dB, 0.14 dB, and 0.34 dB, respectively.
However, larger pilot sizes cannot alleviate the poisoning effects on meta-learning methods, and the poisoned Meta-DeepSIC still performs worse than DeepSIC.
§.§ Discussion of Results
The experimental results demonstrate that the poisoning attack method devised in this paper is effective in four distinct channel environments.
Nevertheless, the precise impact of these attacks varies depending on the specific channel environment.
In light of the aforementioned experimental results, the following inferences can be drawn.
As illustrated in Fig. 11, the poisoning effect observed in the nonlinear time-varying synthetic channel is markedly higher than in the other three cases.
The performance of the deep receivers subjected to a poisoning attack is severely degraded and approaches failure in this channel environment.
This suggests that the poisoning attack is capable of impeding the deep receiver's ability to adapt to the rapid changes in the channel's effects
and to learn the nonlinear effects.
Secondly, the experimental results for the linear time-varying synthetic channel and the COST 2100 channel corroborate the preceding conclusion from different vantage points.
In comparison to the synthetic channel, the tap coefficients of the COST 2100 channel demonstrate greater long-term variance,
while exhibiting a relatively flat profile in the short term (for instance, between two blocks).
As shown in Fig. 13(a), the impact of poisoning attacks consistently reduces receiver performance over an extended period in the COST 2100 channel,
indicating a sustained but limited degradation.
In contrast, in the linear time-varying synthetic channel undergoing significant changes over time,
the poisoning attack has a more pronounced effect on the receiver's performance.
This indicates that the attack has an impact on the deep receiver's capacity to track long-term channel alterations and
a more pronounced disruptive effect on short-term rapid adaptation.
Notably, as shown in Fig. 14, a larger pilot size can mitigate the adverse effects of poisoning attacks, but this means more spectrum resources are consumed.
Finally, from the perspective of the deep receiver, Meta-DeepSIC, which incorporates a meta-learning approach,
demonstrates optimal performance when utilizing limited pilot data, particularly in scenarios where channel variations occur rapidly.
Furthermore, this learning capability demonstrates efficacy in nonlinear channel environments,
while increased sensitivity to poisoned samples results in a more pronounced deterioration
in performance than that observed for DeepSIC in fast-varying channels.
In scenarios characterised by slow-varying or static channels,
the performance of Meta-DeepSIC and DeepSIC, with or without poisoning attacks, is more consistent than in fast-varying channel conditions.
This is corroborated by the data presented in Fig. 12 and Fig. 13.
In conclusion, the attack method devised in this paper primarily impedes the receiver's ability to learn rapid channel changes and non-linear effects
in the short term, resulting in a decline in performance. The poisoning effect is particularly pronounced in the context of Meta-DeepSIC
when combined with meta-learning techniques. This conclusion highlights the security risks associated with the design of wireless receivers using
online and online meta-learning methods, particularly in environments characterised by rapid channel changes and non-linear effects, where the system is particularly susceptible
to poisoning attacks.
§ CONCLUSION
This paper proposes a transfer-based adversarial poisoning attack method for online deep receivers without the knowledge of the target.
The fundamental concept is to corrupt the online training and updating phases of deep receivers
in such a way that the model becomes compromised after a designated period of training,
resulting in a decline in performance.
The poisoning attack framework and the generation of poisoning attack samples comprise two steps.
Initially, the malicious user acquires the surrogate model through the joint learning method.
Subsequently, the poisoning attack perturbations are generated based on the surrogate model to poison the pilot.
Simulation experiments on the proposed poisoning attack method under varying channel models demonstrate that
it disrupts the adaptation to dynamic channels and the learning of nonlinear effects.
Meanwhile, the proposed attack can be effective against both model-based deep learning architectures and typical DNN-based receiver architectures.
Meta-DeepSIC demonstrates optimal performance in fast-varying channels.
However, it is particularly susceptible to poisoning attack samples, resulting in a notable decline in performance.
It is therefore recommended that future research should concentrate on the development of efficient, robust and secure deep receiver architectures
that are capable of defending against potential attacks,
such as poisoning purification before learning or reducing the impact after poisoning <cit.>,
with a view to furthering the application of deep learning in wireless transceiver design and deep receiver deployment.
arXiv:2409.03595v1 [hep-lat] (5 September 2024)
Charged critical behavior and nonperturbative continuum limit of three-dimensional lattice SU(N_c) gauge Higgs models
Claudio Bonati, Andrea Pelissetto, Ivan Soler Calero, Ettore Vicari
Categories: hep-lat, cond-mat.stat-mech, hep-th
Dipartimento di Fisica dell'Università di Pisa and
INFN Sezione di Pisa, Largo Pontecorvo 3, I-56127 Pisa, Italy
Dipartimento di Fisica dell'Università di Roma Sapienza
and INFN, Sezione di Roma I, I-00185 Roma, Italy
Dipartimento di Fisica dell'Università di Pisa and
INFN Sezione di Pisa, Largo Pontecorvo 3, I-56127 Pisa, Italy
Dipartimento di Fisica dell'Università di Pisa,
Largo Pontecorvo 3, I-56127 Pisa, Italy
§ ABSTRACT
We consider the three-dimensional (3D) lattice SU(N_c) gauge Higgs
theories with multicomponent (N_f>1) degenerate scalar fields and
U(N_f) global symmetry, focusing on systems with N_c=2, to
identify critical behaviors that can be effectively described by the
corresponding 3D SU(N_c) gauge Higgs field theory. The
field-theoretical analysis of the RG flow allows one to identify a
stable charged fixed point for large values of N_f, that would
control transitions characterized by the global symmetry-breaking
pattern U(N_f)→SU(2)⊗U(N_f-2). Continuous transitions with the same
symmetry-breaking pattern are observed in the SU(2) lattice gauge
model for N_f ≥ 30. Here we present a detailed finite-size
scaling analysis of the Monte Carlo data for several large values of
N_f. The results are in substantial agreement with the
field-theoretical predictions obtained in the large-N_f
limit. This provides evidence that the SU(N_c) gauge Higgs field
theories provide the correct effective description of the 3D
large-N_f continuous transitions between the disordered and the
Higgs phase, where the flavor symmetry breaks to
SU(2)⊗U(N_f-2). Therefore, at least for
large enough N_f, the 3D SU(N_c) gauge Higgs field theories with
multicomponent scalar fields can be nonperturbatively defined by the
continuum limit of lattice-discretized models with the same
local and global symmetries.
Charged critical behavior and nonperturbative continuum limit of three-dimensional lattice SU(N_c) gauge Higgs models
Claudio Bonati, Andrea Pelissetto, Ivan Soler Calero, and Ettore Vicari
September 9, 2024
§ INTRODUCTION
Local gauge symmetries play a fundamental role in the construction of
quantum and statistical field theories that describe phenomena in
various physical contexts: In high-energy physics they are used to
formulate the theories of fundamental
interactions <cit.>, in condensed-matter physics their application spans
from superconductors to systems with topologically ordered
phases <cit.>, in statistical mechanics they
are needed to describe classical and quantum critical phenomena with
(also emergent) gauge fields <cit.>.
The physical properties of lattice gauge models with scalar fields
crucially depend on the behavior of gauge and scalar
modes <cit.>. Their interplay can give rise
to continuous phase transitions, which are associated with nontrivial
continuum limits of the corresponding gauge theories. The
corresponding critical behavior depends both on the breaking pattern
of the global symmetry and on the local gauge symmetry, which
determines which scalar degrees of freedom can become
critical. Moreover, in the presence of gauge symmetries, scalar
systems show Higgs phases <cit.>, a fundamental
feature of many modern-physics systems.
In this paper we focus on a class of three-dimensional (3D)
non-Abelian Higgs (NAH) field theories, which are characterized by
SU(N_c) gauge invariance and by the presence of N_f degenerate
scalar fields transforming in the fundamental representation of the
gauge group. The fundamental fields are a complex scalar field
Φ^af(x), where a=1,...,N_c and f=1,…,N_f, and a
gauge field A_μ^c(x), where c=1,…,N_c^2-1. The most
general renormalizable Lagrangian consistent with the local SU(N_c)
color symmetry and the global U(N_f) flavor symmetry of
the scalar sector is
L= 1/g^2 Tr F_μν^2 + Tr [(D_μΦ)^† (D_μΦ)]
+ r Tr Φ^†Φ
+ u/4 ( Tr Φ^†Φ)^2 + v/4 Tr (Φ^†Φ)^2 ,
where F_μν = ∂_μ A_ν -∂_ν A_μ -i[A_μ,
A_ν] (with A_μ, ab=A_μ^c t_ab^c), and D_μ, ab =
∂_μδ_ab -i t_ab^c A_μ^c where t^c_ab are the
SU(N_c) Hermitian generators in the fundamental representation.
The Lagrangian (<ref>) has been written in the standard
continuum form, in which perturbative computations are usually carried
out (after gauge fixing). An important issue is whether it is possible
to give a definition of the model that goes beyond perturbation
theory. To investigate this issue, one may proceed as it is usually
done in quantum chromodynamics (QCD), where the question is studied by
considering the lattice QCD formulation <cit.>. In
this setting a nonperturbative continuum limit exists if the lattice
regularized model undergoes a continuous transition with a divergent
length scale, in which all fields become critical.
Thus, the crucial point is the identification of critical transitions
in 3D lattice NAH models. In the field-theoretical setting this is
equivalent to the existence of a stable fixed point (FP) of the
renormalization-group (RG) flow of the 3D NAH field theory
(<ref>). Its existence allows us to define a continuum limit
and therefore it would provide a nonperturbative definition of the
model, as it occurs in the case of QCD <cit.>.
This program has been carried out in 3D Abelian Higgs (AH) theories
(scalar electrodynamics). Noncompact lattice formulations of the U(1)
gauge fields <cit.>, and compact formulations with
higher-charge scalar fields <cit.> undergo continuous
transitions, where scalar and gauge modes become critical, allowing us
to define a corresponding scalar-gauge statistical field theory. Note
that the identification of the correct nonperturbative continuum limit
is not trivial, since 3D lattice AH models also undergo continuous
transitions that are not related with the gauge field theory. Indeed,
there are transitions where gauge modes play no role and that have an
effective Landau-Ginzburg-Wilson (LGW) description with no local gauge
symmetry <cit.>, and topological transitions only driven
by the gauge fields, where scalar fields play no role <cit.>.
None of these transitions, even if continuous, allows one to define
the continuum limit of the gauge Higgs field theory, which requires
both gauge and scalar modes to be critical.
For this reason, in order to correctly identify the continuous
transitions that provide the continuum limit for the corresponding
field theory, it is crucial to compare the lattice results with an
independent calculation. In the case of the lattice AH models, the
identification was supported by the comparison of the numerical
lattice results with nonperturbative field-theoretical computations in
the limit of a large number of components of the scalar
field <cit.>.
In this paper, we wish to pursue the same program for the NAH field
theory (<ref>). The RG flow in the space of the Lagrangian
couplings has been analyzed to one-loop order <cit.>, close
to four dimensions, in the ε≡ 4-d
expansion <cit.>. It has a stable infrared FP, with positive
quartic coupling v, for any N_c and sufficiently large N_f
<cit.>. We qualify this FP as charged, because the
gauge coupling assumes a nonzero positive value, thus implying
nontrivial critical correlations of the gauge field. These one-loop
ε-expansion results only indicate that a continuum limit
can be defined for large N_f but do not provide a quantitative
characterization of the behavior in three dimensions and thus, they do
not provide quantitative results that can be compared with numerical
estimates obtained in the corresponding three-dimensional lattice
model. For this purpose the nonperturbative large-N_f expansion at
fixed N_c is more useful: O(1/N_f) estimates of critical exponents
<cit.> can be used to verify the correspondence of lattice
results and field-theory estimates.
In this work we mostly focus on lattice NAH models with SU(2) gauge
symmetry. Their phase diagram was investigated in
Ref. <cit.>, identifying different transition lines. In
this paper we present an accurate numerical study of some of these
transitions, with the purpose of verifying if the observed critical
behavior is consistent with the predictions of the NAH field theory.
We perform Monte Carlo (MC) simulations for sufficiently large N_f
and perform a finite-size scaling (FSS) analysis of the MC results to
estimate the universal features of the transitions. The numerical
estimates of the N_f-dependent critical exponents are then compared
with the results obtained by using the 1/N_f field-theoretical
expansion <cit.>. The numerical results for the
length-scale exponent ν that we present here nicely agree with the
1/N_f prediction, providing robust evidence that the
lattice NAH models develop critical behaviors that can be associated
with the stable charged FP of the RG flow of the NAH field theory.
It is worth emphasizing that the existence of these new universality
classes – characterized by the presence of a non-Abelian gauge
symmetry – not only establishes the nonperturbative existence of a new
class of 3D quantum field theories, but also allows us to extend the
phenomenology of continuous transitions of 3+1 dimensional lattice
gauge theories at finite temperature, see, e.g., Refs. <cit.>.
The paper is organized as follows. In Sec. <ref> we collect the
known results on the RG flow of the NAH field theory (<ref>),
based on ε expansion, and the large-N_f nonperturbative
predictions. In Sec. <ref> we define the lattice NAH models,
essentially obtained by discretizing the NAH field theory, and discuss
some general features of their phase diagram. In Sec. <ref> we
present the FSS analyses of the numerical MC data obtained for N_c=2
and N_f =30,40,60. Finally, we draw our conclusions in
Sec. <ref>.
§ NAH FIELD THEORY
§.§ RG flow and large-N_f predictions
The RG flow of the field theory (<ref>) was determined close to
four dimensions in the framework of the ε≡ 4-d
expansion <cit.>. The RG functions were computed by using
dimensional regularization and the minimal-subtraction (MS)
renormalization scheme, see, e.g., Ref. <cit.>. The RG
flow is determined by the β functions associated with the
Lagrangian couplings u, v, and α = g^2. At one-loop order
they are given by <cit.>
β_α = - εα + (N_f-22N_c) α^2,
β_u = -ε u + (N_f N_c + 4) u^2 + 2 (N_f+N_c) u v + 3 v^2 - 18 (N_c^2 -1)/N_c u α + 27 (N_c^2 + 2)/N_c^2 α^2,
β_v = - ε v + (N_f+N_c)v^2 + 6uv - 18 (N_c^2 -1)/N_c v α + 27 (N_c^2 - 4)/N_c α^2.
Some numerical factors, which can be easily inferred from the above
expressions, have been reabsorbed in the normalizations of the
renormalized couplings to simplify the expressions.
The analysis of the common zeroes of the β
functions <cit.> shows that the RG flow close to four
dimensions has a stable charged FP with a nonvanishing α if
N_f > N_f^*, where N_f^* depends on N_c and on the space
dimension. Close to four dimensions, we have N_f^*=
375.4+O(ε) for N_c=2, and N_f^* = 638.9+O(ε)
for N_c=3. The stable charged FP lies in the region v>0 for any
N_c. The number of components N_f^* necessary to have a stable
charged FP is quite large in four dimensions. However, we expect
N_f^* to significantly decrease in three dimensions, as it happens
in the AH theories <cit.>, where it
varies from N_f^*≈ 183 in four dimensions <cit.> to a
number in the range 4<N_f^*<10 in three dimensions <cit.>
(see also Refs. <cit.>).
As we already mentioned in the introduction, the one-loop
ε expansion provides only qualitative information for
three-dimensional systems. A more quantitative approach is the 1/N_f
expansion at fixed N_c <cit.>. Assuming the existence of
a charged critical behavior for finite N_f, this approach provides
exact predictions of critical quantities for large values of
N_f. The length-scale critical exponent ν was computed to
O(N_f^-1) <cit.>, obtaining
ν = 1 - 48 N_c/(π^2 N_f) + O(N_f^-2),
for three-dimensional systems. In particular, ν≈ 1 - 9.727/N_f for
N_c=2.
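For later reference, a few lines suffice to evaluate this prediction at the values of N_f simulated below; the MC estimates quoted for comparison are those reported in Sec. <ref>.

```python
import numpy as np

def nu_large_Nf(Nf, Nc=2):
    """O(1/N_f) prediction nu = 1 - 48*N_c/(pi^2*N_f) in three dimensions."""
    return 1.0 - 48.0 * Nc / (np.pi ** 2 * Nf)

for Nf, nu_mc in [(30, 0.64), (40, 0.745), (60, 0.81)]:   # MC estimates of Sec. IV
    print(f"N_f = {Nf}: nu_1/Nf = {nu_large_Nf(Nf):.3f}, nu_MC = {nu_mc}")
```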
§.§ Relevance of the field-theoretical results
The studies of the continuous transitions and critical behaviors of
lattice Abelian and non-Abelian gauge theories with scalar matter,
see, e.g.,
Refs. <cit.>, have shown
the emergence of several qualitatively different types of transitions.
In some cases only gauge-invariant scalar-matter correlations become
critical at the transition, while the gauge variables do not display
long-range correlations. At these transitions, gauge fields prevent
non-gauge invariant scalar correlators from acquiring nonvanishing
vacuum expectation values and from developing long-range order. In
other words, the gauge symmetry hinders some scalar degrees of
freedom—those that are not gauge invariant—from becoming critical.
In this case the critical behavior or continuum limit is driven by the
condensation of a scalar order parameter. This operator plays the
role of fundamental field in the LGW theory which provides an
effective description of the critical behavior. The effective model
depends only on the scalar order-parameter field, and is only
characterized by the global symmetry of the model. Gauge invariance
is only relevant in determining the gauge-invariant scalar order
parameter. Examples of such continuous transitions are found in
lattice AH models <cit.>, and lattice NAH
models <cit.>. A more complex example is
the finite-temperature chiral transitions in QCD. Ref. <cit.>
(see also Refs. <cit.>) assumed this transition to be
only driven by the fermionic related modes, proposing an effective LGW
theory in terms of a scalar gauge-invariant composite operator
bilinear in the fermionic fields, without gauge fields.
There are also examples of phase transitions in lattice gauge models
where scalar-matter and gauge-field correlations are both critical. In
this case the critical behavior is expected to be controlled by a
charged FP in the RG flow of the corresponding continuum gauge field
theory. This occurs, for instance, in the 3D lattice AH model with
noncompact gauge fields <cit.>, and in the compact model
with scalar fields with higher charge Q≥ 2 <cit.>, for a
sufficiently large number of scalar components. Indeed, the critical
behavior along one of their transition lines is associated with the
stable FP of the AH field
theory <cit.>,
characterized by a nonvanishing gauge coupling.
As already mentioned in the introduction, at present, there is no
conclusive evidence that 3D NAH lattice models undergo continuous
transitions with both scalar and gauge critical correlations, which
can be associated with the stable charged FP of their RG flow
discussed in Sec. <ref>. A preliminary study was reported in
Ref. <cit.>. In this paper we return to this issue,
comparing more accurate, numerical analyses with the results obtained
in the field-theoretical 1/N_f expansion. In particular, we
investigate whether, along some specific transition lines, the
critical behavior is characterized by a critical exponent ν that
is consistent, for large values of N_f, with the nonperturbative
1/N_f result (<ref>).
§ LATTICE SU(N_C) GAUGE MODELS WITH MULTIFLAVOR SCALAR FIELDS
§.§ The lattice model
As in lattice QCD <cit.>, we consider lattice SU(N_c)
gauge models which are lattice discretizations of the NAH field theory
(<ref>). They are defined on a cubic lattice of linear size L
with periodic boundary conditions. The scalar fields are complex
matrices Φ^af_ x (with a=1,...,N_c and f=1,...,N_f),
satisfying the unit-length constraint Tr Φ_ x^†Φ_ x = 1, defined on the lattice sites,
while the gauge variables are SU(N_c) matrices U_
x,μ <cit.> defined on the lattice links. The
lattice Hamiltonian reads <cit.>
H = - J N_f∑_ x,μ Re Tr [Φ_ x^† U_ x,μ Φ_ x+μ̂]
+ v/4∑_ x Tr (Φ_ x^†Φ_ x)^2
- γ/N_c∑_ x,μ>ν Re Tr
[U_ x,μ U_ x+μ̂,ν U_ x+ν̂,μ^† U_ x,ν^†].
In the following we set J=1, so that energies are measured in units
of J, and write the partition function as Z = ∑_{Φ,U}exp(-β H) where β=1/T.
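For illustration, the three terms of the Hamiltonian (<ref>) can be evaluated for a given configuration as in the (deliberately unoptimized) sketch below; the data layout and conventions are choices of this sketch, not of the simulations described later.

```python
import numpy as np

def lattice_energy(phi, U, J=1.0, v=1.0, gamma=1.0):
    """Evaluate H for one configuration.
    phi : (L, L, L, Nc, Nf) complex scalars with Tr(phi^dag phi) = 1 at each site;
    U   : (L, L, L, 3, Nc, Nc) complex SU(Nc) link matrices."""
    L, Nc, Nf = phi.shape[0], phi.shape[3], phi.shape[4]
    hop = pot = plaq = 0.0
    for x in np.ndindex(L, L, L):
        A = phi[x].conj().T @ phi[x]                     # Nf x Nf matrix Phi^dag Phi
        pot += np.trace(A @ A).real
        for mu in range(3):
            xp = list(x); xp[mu] = (xp[mu] + 1) % L; xp = tuple(xp)
            hop += np.trace(phi[x].conj().T @ U[x + (mu,)] @ phi[xp]).real
            for nu in range(mu):                          # plaquettes with mu > nu
                xn = list(x); xn[nu] = (xn[nu] + 1) % L; xn = tuple(xn)
                P = (U[x + (mu,)] @ U[xp + (nu,)]
                     @ U[xn + (mu,)].conj().T @ U[x + (nu,)].conj().T)
                plaq += np.trace(P).real
    return -J * Nf * hop + 0.25 * v * pot - (gamma / Nc) * plaq
```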
The Hamiltonian H is invariant under local SU(N_c) and global
U(N_f) transformations. Note that U(N_f) is not a simple group and
thus we may separately consider SU(N_f) and U(1) transformations,
that correspond to Φ^af→∑_g V^fgΦ^ag, V∈
SU(N_f), and Φ^af→ e^iαΦ^ag, α∈
[0,2π), respectively. Since the diagonal matrix with entries
e^2π i/N_c is an SU(N_c) matrix, α can be restricted
to [0,2π/N_c) and the global symmetry group is more precisely
U(N_f)/ℤ_N_c when N_f≥ N_c (if N_f<N_c a global
U(1)/ℤ_N_c transformation can be reabsorbed by a
SU(N_c) gauge transformation, see Ref. <cit.>).
Note that the parameter v in the lattice Hamiltonian corresponds to
the Lagrangian parameter v in Eq. (<ref>). Therefore, if the
lattice model (<ref>) develops a critical behavior described
by the charged FP of the NAH field theory, then this is expected to
occur for positive values of v.
§.§ The phase diagrams for N_f>N_c
A thorough discussion of the phase diagram of the lattice NAH models
(<ref>) was reported in Ref. <cit.>. In this
section we recall the main features that are relevant for the present
study. For N_f=1 the phase diagram is trivial, as only one phase is
present <cit.>. For N_f>1, the lattice model
has different low-temperature Higgs phases, which are essentially
determined by the minima of the scalar potential v
Tr(Φ^†Φ)^2 with the unit-length constraint Tr Φ^†Φ=1. Their properties crucially depend on the
sign of the parameter v, the number N_c of colors, and the number
N_f of flavors. Substantially different behaviors are found for
N_f>N_c, N_f=N_c, and N_f< N_c. Also N_c is relevant and one
should distinguish systems with N_c=2 from those with N_c≥ 3.
Since we are interested in phase transitions that can be described by
the stable charged FP of the NAH field theory, and we want to compare
their features with the large-N_f predictions at fixed N_c, we
focus on the case N_f>N_c.
Sketches of the phase diagrams for N_c=2 and N_c≥ 3 when
N_f>N_c are shown in Figs. <ref> and
<ref>, respectively. They are qualitatively similar,
with two different Higgs phases and a single high-temperature
phase. The only difference is the shape of the line that separates the
two Higgs phases. For N_c=2, the model with v=0 is invariant
under a larger global symmetry group, the Sp(N_f)/ℤ_2 group <cit.>. In this case, the line v=0, that is a
first-order line for N_f≥ 3, separates the Higgs phases. For N_c
> 2 instead, there is no additional symmetry and the boundary of the
two Higgs phases is a generic curve that lies in the positive v
region, see Fig. <ref>.
In the following we focus on the SU(2)-gauge NAH theory
(<ref>), which should be already fully representative for the
problem we address in this paper. We recall that the analysis of the
RG flow of the NAH field theory, see Sec. <ref>, indicates that
the attraction domain of the stable charged FP must be located in the
region v>0. Therefore, we should focus on the continuous
transitions occurring in the domain v>0, where the symmetry-breaking
pattern is <cit.>
U(N_f)→SU(2)⊗U(N_f-2).
§ NUMERICAL ANALYSES OF THE MULTIFLAVOR LATTICE SU(2) NAH MODELS
The numerical results reported in Ref. <cit.> for the SU(2)
lattice gauge model provided good evidence of continuous transitions
for v=1, γ=1, and N_f=40. First-order transitions were
instead observed for N_f=20, for several values of γ and
v. Therefore, a natural hypothesis is that for v=1 and γ=1
(more generally, for generic positive v and sufficiently large
values of γ) the transitions are continuous for N_f>N_f^*,
with N_f^* in the interval 20<N_f^*<40.
To understand whether these transitions are associated with the
charged FP of the NAH field theory, we need accurate numerical results
that can be compared with predictions obtained from the 3D NAH field
theory. We will focus on the critical exponent ν, comparing the
numerical estimates with the large-N_f result, Eq. (<ref>).
For this purpose, we have performed numerical simulations for v=1,
γ=1, and N_f=30, N_f=40, and N_f=60, varying β
across the transition line. Simulations have been performed on cubic
lattices with periodic boundary conditions. Some technical details on
the MC simulations have been already reported in
Ref. <cit.>, to which we refer for more details.
§.§ Observables and finite-size scaling
To study the breaking of the global SU(N_f) symmetry, we monitor
correlation functions of the gauge-invariant bilinear operator
Q_ x^fg = ∑_a Φ̅_ x^afΦ_ x^ag
- 1/N_fδ^fg.
We define its two-point correlation function (since we use periodic
boundary conditions, translation invariance holds)
G( x- y) = ⟨ Tr Q_ x Q_ y⟩,
the corresponding susceptibility χ, and second-moment
correlation length ξ defined as
χ=∑_ x G( x), ξ^2 = [G( 0) - G( p_m)] / [4 sin^2 (π/L) G( p_m)],
where p_m = (2π/L,0,0) and G(
p)=∑_ x e^i p· x G( x) is the
Fourier transform of G( x). In our numerical study we also
consider the Binder parameter
U = ⟨μ_2^2⟩/⟨μ_2 ⟩^2, μ_2 = L^-6∑_ x, y Tr Q_ x Q_ y,
and the ratio
R_ξ=ξ/L.
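In practice these quantities can be estimated from the measured Q_x fields as in the sketch below (single-configuration estimators; ensemble averaging and error analysis, e.g. by jackknife, are omitted):

```python
import numpy as np

def observables_from_Q(Q):
    """Q : (L, L, L, Nf, Nf) array of the bilinear operator on one configuration.
    Returns the single-configuration estimators of chi, xi, and R_xi = xi / L."""
    L = Q.shape[0]
    V = L ** 3
    Qk = np.fft.fftn(Q, axes=(0, 1, 2))              # Fourier transform over the lattice
    G = np.sum(np.abs(Qk) ** 2, axis=(3, 4)) / V     # estimator of G(p)
    G0, Gp = G[0, 0, 0], G[1, 0, 0]                  # p = 0 and p_m = (2*pi/L, 0, 0)
    chi = G0
    xi = np.sqrt(max((G0 - Gp) / (4.0 * np.sin(np.pi / L) ** 2 * Gp), 0.0))
    return chi, xi, xi / L
```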
At a continuous phase transition, any RG invariant ratio R, such as
the Binder parameter U or the ratio R_ξ, scales as <cit.>
R(β,L) = R(X) + L^-ω R_ω(X) +
…,
where
X = (β-β_c)L^1/ν,
ν is the critical correlation-length exponent, ω>0 is the
leading scaling-correction exponent associated with the first
irrelevant operator, and the dots indicate further negligible
subleading contributions. The function R(X) is universal up
to a normalization of its argument, and also R_ω(X) is
universal apart from a multiplicative factor and normalization of the
argument [the same of ℛ(X)]. In particular, R^*≡ R(0) is universal, depending only on the boundary conditions
and aspect ratio of the lattice. Since R_ξ defined in
Eq. (<ref>) is an increasing function of β, we can
combine the RG predictions for U and R_ξ to obtain
U(β,L) = U(R_ξ) + O(L^-ω),
where U now depends on the universality class, boundary
conditions, and lattice shape, without any nonuniversal multiplicative
factor. Eq. (<ref>) is particularly convenient because it
allows one to test universality-class predictions without requiring a
tuning of nonuniversal parameters.
Analogously, in the FSS limit the susceptibility defined in
Eq. (<ref>) scales as
χ≈ L^(2-η_Q) C(R_ξ),
where η_Q is the critical exponent, that parametrizes the
power-law divergence of the two-point function (<ref>) at
criticality, and C is a universal function apart from a
multiplicative factor.
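A schematic version of such a fit (without the scaling-correction term and the systematic variation of L_min described below) could look as follows; the function names and the use of scipy's curve_fit are choices of this sketch, not of the analysis actually performed.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_Rxi(beta, L, Rxi, n=3, beta_c0=1.19, nu0=0.75):
    """Fit R_xi(beta, L) = P_n(X), X = (beta - beta_c) * L^(1/nu),
    returning (beta_c, nu) and the coefficients of the polynomial P_n."""
    def model(xdata, beta_c, nu, *coeffs):
        b, ell = xdata
        X = (b - beta_c) * ell ** (1.0 / nu)
        return sum(c * X ** k for k, c in enumerate(coeffs))
    # initial guesses: R*(0) ~ 0.3 and a small nonzero slope
    p0 = [beta_c0, nu0, 0.3, 0.1] + [0.0] * (n - 1)
    popt, _ = curve_fit(model, (np.asarray(beta), np.asarray(L)),
                        np.asarray(Rxi), p0=p0)
    return popt[0], popt[1], popt[2:]
```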
§.§ Numerical results
We now present the FSS analyses of the observables introduced in
Sec. <ref>, for the SU(2) gauge theory. We set v=γ=1 and
consider N_f=30, 40, 60. We report data up to L=48 for N_f=40
and N_f=60, and up to L=42 for N_f=30. As we shall see, they are
sufficient to accurately determine the critical behavior of the
lattice SU(2)-gauge NAH models (<ref>).
To begin with, we discuss the behavior for N_f=40, a case that was
already considered in Ref. <cit.>. Here we consider
significantly larger systems and obtain more accurate data. Estimates
of R_ξ are shown in Fig. <ref> for several values of
L, up to L=48. The data show a clear crossing point for R_ξ≈
0.32, which indicates a transition at β≈ 1.186.
Accurate estimates of the critical point β_c and of the critical
exponent ν are determined by fitting R_ξ to the expected FSS
behavior (<ref>). We perform several fits, parametrizing the
function R(X) with an order-n polynomial (stable results
are obtained for n≳ 3) and also including O(L^-ω)
corrections with ω in the range [0.5,1.0]. Note that ω
is generally expected to be smaller than one and to approach one in
the large-N_f limit, as in the 3D N-vector
models <cit.>. In any case, results are almost independent of
the value of ω. Moreover, to have an independent check of the
role of the scaling corrections, fits have been repeated,
systematically discarding the data for the smallest lattice sizes
(i.e. including only data for L≥ L_ min with L_
min=8,12,16,20 typically). Combining all fit results we obtain the
estimates
β_c=1.1863(1), ν=0.745(15), for N_f=40,
where the errors take into account how the results change when the fit
parameters are varied in reasonable ranges (these results are in
substantial agreement with results reported in Ref. <cit.>
using smaller lattice sizes, up to L=28). In Fig. <ref>
we plot R_ξ versus X=(β-β_c)L^1/ν using the above
estimates of β_c and ν. The resulting scaling behavior when
increasing L definitely confirms the correctness of the estimates
reported in Eq. (<ref>). Some sizeable scaling
corrections are observed only for R_ξ≲ 0.12,
corresponding to X≲ -1; however, the convergence for large
lattices, L≳ 30 say, is clear also in that region. We also
mention that consistent, but less precise, results are obtained by
analyzing the Binder parameter U.
Further evidence of FSS is achieved by the unbiased plot of the Binder
parameter U versus R_ξ, cf. Eq. (<ref>), see
Fig. <ref>. Again we observe a nice scaling behavior for
R_ξ≳ 0.2, see in particular the inset of
Fig. <ref> where data around R_ξ≈ 0.3 are shown.
We also note that sizable scaling corrections are observed around the
peak of U, corresponding to R_ξ≈ 0.12, which is also the
region where the scaling behavior of R_ξ versus X show larger
scaling corrections. These corrections are consistent with the expected L^-ω asymptotic approach
and ω≈ 1. It is also important to note that, although
significant corrections are present in the peak region, the peak
values decrease when increasing the lattice size, excluding a
discontinuous transition (if the transition were of first order, the
Binder parameter would diverge for L→∞
<cit.>).
We have also estimated the exponent η_Q characterizing the
behavior of the susceptibility χ. Using the expected FSS
behavior (<ref>), η_Q was estimated by fitting logχ to (2-η_Q) log L + C(R_ξ), using a polynomial
parametrization for the function C(x). Proceeding as in the analysis
of R_ξ, we obtain η_Q = 0.87(1). The resulting FSS plot is
shown in Fig. <ref>.
The MC data obtained for N_f=30 and N_f=60 (again for v=1 and
γ=1) have been analyzed analogously. In both cases we observe
clear evidence of a continuous transition. In particular, the Binder
parameter U approaches an asymptotic FSS curve when plotted versus
R_ξ, see, e.g., Fig. <ref>. By fitting R_ξ to the
FSS ansatz (<ref>), as we did for N_f=40, we obtain the
estimates
β_c=1.22435(10), ν=0.64(2), for N_f=30,
and
β_c=1.1416(1), ν=0.81(2), for N_f=60,
where again the errors take into account the small variations of the
results when changing the fit parameters. A FSS plot of R_ξ for
N_f=30 is shown in Fig. <ref>. We have also estimated
the exponent η_Q. Performing the same analysis of the
susceptibility as for N_f=40, we obtain the estimates
η_Q=0.79(1) for N_f=30 and η_Q=0.910(5) for N_f=60.
We now compare the above results for ν with the large-N_f
prediction, Eq. (<ref>), see Fig. <ref>. The
agreement is satisfactory. For instance, Eq. (<ref>) predicts
ν=0.757 for N_f=40 and N_c=2, to be compared with the MC
result ν=0.745(15). Concerning the exponent η_Q, the
numerical estimates are compatible with the limiting value η_Q=1
for N_f→∞, which holds for any bilinear
operator. Finite-N_f results are consistent with a 1/N_f
correction, as expected. A fit of the data gives η_Q≈ 1 -
c/N_f with c≈ 5 for N_f≳ 40.
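As a quick arithmetic check of this statement, the three estimates of η_Q can be fitted to the form 1 - c/N_f. The sketch below is a plain one-parameter least-squares fit that ignores the quoted uncertainties and uses all three points (whereas the value c ≈ 5 quoted above refers to N_f ≳ 40), so it is purely illustrative.

import numpy as np

N_f   = np.array([30.0, 40.0, 60.0])
eta_Q = np.array([0.79, 0.87, 0.910])     # estimates quoted in the text

# Fit eta_Q = 1 - c/N_f, i.e. (1 - eta_Q) = c * (1/N_f).
x, y = 1.0 / N_f, 1.0 - eta_Q
c = np.sum(x * y) / np.sum(x * x)
print(f"c = {c:.1f}")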
The nice agreement between the numerical estimates of ν and
the field-theoretical large-N_f prediction allows us to conclude
that, for γ > 0 and v>0 and large values of N_f,
transitions along the line that separates the disordered from the
Higgs phase are continuous and naturally associated
with the charged FP of the SU(2)-gauge NAH field theory
(<ref>).
We expect this result to hold also for larger values of N_c.
§ CONCLUSIONS
We consider 3D lattice SU(N_c) gauge Higgs models with U(N_f)
global invariance with the purpose of identifying continuous
transition lines with a critical behavior associated with the stable
charged FP of the RG flow of the NAH field theory defined by the
Lagrangian (<ref>). This would imply that the lattice models
admit a continuum limit that provides a nonperturbative definition of
the NAH field theory, as it occurs for lattice QCD <cit.>.
We focus on SU(2) gauge theories. We perform MC simulations for a
relatively large number of flavors, in order to be able to compare the
MC results with field-theoretical 1/N_f predictions. The RG flow of
the SU(2)-gauge NAH field theory has a stable charged FP in the region
v>0, for N_f > N^*_f. Close to four dimensions, N_f^* is very
large, N^*_f ≈ 376, see Sec. <ref>. However, our 3D
numerical results show that continuous transitions in the relevant
parameter region occur for significantly smaller numbers of
components. While for N_f=20 only first-order transitions (for
different values of v and γ) are observed <cit.>,
for N_f=30 a continuous transition is found for v=γ=1. These
results suggest that 20<N_f^*<30, or equivalently that N_f^*=25(4)
in three dimensions. More importantly, the numerical estimates of the
length-scale critical exponent ν for N_f=30,40,60 are in nice
agreement with the large-N_f field-theoretical result,
Eq. (<ref>). As far as we know, this is the first evidence
of the existence of critical behaviors in 3D lattice NAH models that
can be associated with the charged FP of the 3D SU(N_c)-gauge NAH
field theory.
As we mentioned in Sec. <ref>, not all transitions in gauge
systems require an effective description in terms of a gauge field theory.
There are many instances in which gauge fields have no role. In these cases
the effective model is a scalar LGW theory in which the fundamental field is a
(coarse-grained) gauge-invariant scalar order parameter. This approach was
employed in Refs. <cit.> to discuss the nature of the
finite-temperature transition of QCD in the chiral limit. Indeed, it was
assumed that the transition was only due to the condensation of a
gauge-invariant operator, bilinear in the fermionic fields. Such an operator was
then taken as the fundamental field in an effective 3D LGW Φ^4 theory, whose
RG flow was supposed to determine the nature of the chiral transition. The
implicit assumption was that only gauge-invariant fermionic related modes are
relevant critical modes.
It is thus worth discussing the predictions of the LGW approach in the
present case, to exclude that the transitions we have discussed above
have an effective LGW description. In the LGW approach the fundamental field is
a hermitian traceless N_f× N_f
matrix field Ψ( x), which represents a
coarse-grained version of the gauge-invariant bilinear operator
Q_ x defined in Eq. (<ref>). The corresponding most
general LGW Lagrangian with global SU(N_f) symmetry
is <cit.>
L_ LGW = Tr ∂_μΨ∂_μΨ +
r TrΨ^2
+ w Tr Ψ^3 +
u (Tr Ψ^2)^2 + v Tr Ψ^4.
For N_f=2 the cubic term vanishes and the two quartic terms are equivalent.
In this case a continuous transition is possible in the SU(2)/ℤ_2,
that is in the O(3) vector, universality class. For N_f > 2 the cubic term
is present and, on the basis of the usual mean-field arguments, one expects a
first-order transition also in three dimensions (unless a tuning of the model
parameters is performed to cancel the cubic term). Therefore, the LGW approach
does not give the correct predictions for the transitions we have investigated.
The reason for the failure is likely related to the fact that the LGW approach
assumes that gauge fields are not relevant at criticality. In LGW transitions
their only role is that of restricting the critical modes to the
gauge-invariant sector. Instead, the relation between the critical transitions
we observed and the NAH field theory implies that gauge fields are critical and
relevant for the critical behavior in the cases we studied.
We should note that the results presented here are valid for v > 0. For v <
0 continuous transitions are observed for N_f=2, in the O(3) universality
class <cit.>. The NAH field theory does not provide their correct
effective description, since there are no stable FPs in the RG flow of the NAH
field theory with negative v for any N_f. On the other hand, the LGW
theory predicts O(3) transitions for N_f=2, since the Lagrangian (<ref>)
is equivalent to the O(3) Lagrangian for this value of N_f. We conclude
that, for v < 0 and N_f=2, gauge modes do not play any role and the
transition admits a LGW description.
This discussion shows that the critical behavior of 3D models (or 4D models at
finite temperature) with non-Abelian gauge symmetry is quite complex and
possibly more interesting than expected. In particular, the knowledge of the
order parameter of the transition is not enough to characterize the critical
behavior. Information on the behavior of the gauge fields is required to
identify the correct effective description.
The authors acknowledge support from project PRIN 2022 “Emerging
gauge theories: critical properties and quantum dynamics”
(20227JZKWP). Numerical simulations have been performed on the CSN4
cluster of the Scientific Computing Center at INFN-PISA.
99
Weinberg-book1 S. Weinberg, The Quantum Theory of
Fields. Volume I. Foundations, (Cambridge University Press, 2005).
Weinberg-book2 S. Weinberg, The Quantum Theory of
Fields. Volume II. Modern Applications, (Cambridge University Press, 2005).
ZJ-book J. Zinn-Justin,
Quantum Field Theory and Critical Phenomena,
fourth edition (Clarendon Press, Oxford, 2002).
Georgi-book H. Georgi, Weak interactions and modern particle
theory, (The Benjamin/Cummings Publishing Company, Menlo Park, California,
1984).
Anderson-book P. W. Anderson, Basic Notions of
Condensed Matter Physics, (The Benjamin/Cummings Publishing
Company, Menlo Park, California, 1984).
Wen-book X.-G. Wen, Quantum field theory of many-body
systems: from the origin of sound to an origin of light and
electrons, (Oxford University Press, 2004).
Sachdev-19 S. Sachdev, Topological order, emergent gauge
fields, and Fermi surface reconstruction, Rep. Prog. Phys. 82,
014001 (2019).
Anderson-63 P. W. Anderson, Plasmons, Gauge Invariance, and
Mass, Phys. Rev. 130, 439 (1963); Superconductivity: Higgs,
Anderson and all that, Nat. Phys. 11, 93 (2015).
SSBgauge F. Englert and R. Brout,
Broken Symmetry and the Mass of Gauge Vector Mesons,
Phys. Rev. Lett. 13, 321 (1964);
P. W. Higgs,
Broken Symmetries and the Masses of Gauge Bosons,
Phys. Rev. Lett. 13, 508 (1964);
G. S. Guralnik, C. R. Hagen and T. W. B. Kibble,
Global Conservation Laws and Massless Particles,
Phys. Rev. Lett. 13, 585 (1964).
GG-72 H. Georgi and S. L. Glashow,
Unified weak and electromagnetic interactions without neutral currents,
Phys. Rev. Lett. 28, 1494 (1972).
HLM-74
B. I. Halperin, T. C. Lubensky, and S. K. Ma,
First-Order Phase Transitions in Superconductors and Smectic-A
Liquid Crystals, Phys. Rev. Lett. 32, 292 (1974).
OS-78 K. Osterwalder and E. Seiler, Gauge Field Theories on
the Lattice, Ann. Phys. (NY) 110, 440 (1978).
FS-79 E. Fradkin and S. Shenker, Phase diagrams of lattice
gauge theories with Higgs fields, Phys. Rev. D 19, 3682
(1979).
DRS-80 S. Dimopoulos, S. Raby, and L. Susskind, Light
Composite Fermions, Nucl. Phys. B 173, 208 (1980).
Hikami-80 S. Hikami, Non-Linear σ Model of Grassmann
Manifold and Non-Abelian Gauge Field with Scalar Coupling,
Prog. Theor. Phys. 64, 1425 (1980).
BN-87 C. Borgs and F. Nill, The Phase Diagram of the Abelian
Lattice Higgs Model. A Review of Rigorous Results,
J. Stat. Phys. 47, 877 (1987).
SSSNH-02 A. Sudbø, E. Smørgrav, J. Smiseth,
F. S. Nogueira, and J. Hove, Criticality in the (2+1)-Dimensional
Compact Higgs Model and Fractionalized Insulators,
Phys. Rev. Lett. 89, 226403 (2002).
MZ-03 M. Moshe and J. Zinn-Justin, Quantum field theory in
the large N limit: A review, Phys. Rep. 385, 69 (2003).
NRR-03 T. Neuhaus, A. Rajantie, and K. Rummukainen,
Numerical study of duality and universality in a frozen
superconductor, Phys. Rev. B 67, 014525 (2003).
SBSVF-04 T. Senthil, L. Balents, S. Sachdev, A. Vishwanath,
and M. P. A. Fisher, Quantum Criticality beyond the
Landau-Ginzburg-Wilson Paradigm, Phys. Rev. B 70, 144407
(2004).
DP-14
P. S. Bhupal Dev and A. Pilaftsis, Maximally Symmetric
Two Higgs Doublet Model with Natural Standard Model Alignment, JHEP
1412, 024 (2014); (Erratum) JHEP 1511, 147 (2015).
PV-19-AH3d A. Pelissetto and E. Vicari,
Multicomponent compact Abelian-Higgs lattice models,
Phys. Rev. E 100, 042134 (2019).
SSST-19 S. Sachdev, H. D. Scammell, M. S. Scheurer, and
G. Tarnopolsky, Gauge theory for the cuprates near optimal doping,
Phys. Rev. B 99, 054516 (2019).
BPV-19 C. Bonati, A. Pelissetto, and E. Vicari, Phase
Diagram, Symmetry Breaking, and Critical Behavior of
Three-Dimensional Lattice Multiflavor Scalar Chromodynamics,
Phys. Rev. Lett. 123, 232002 (2019);
Three-dimensional lattice multiflavor scalar chromodynamics:
Interplay between global and gauge symmetries, Phys. Rev. D 101, 034505 (2020).
BPV-20 C. Bonati, A. Pelissetto, and E. Vicari,
Higher-charge three-dimensional compact lattice Abelian-Higgs
models, Phys. Rev. E 102, 062151 (2020).
SPSS-20 H. D. Scammell, K. Patekar, M. S. Scheurer, and
S. Sachdev, Phases of SU(2) gauge theory with multiple adjoint Higgs
fields in 2+1 dimensions, Phys. Rev. B 101, 205124 (2020).
BPV-20-on C. Bonati, A. Pelissetto, and E. Vicari,
Three-dimensional phase transitions in multiflavor scalar SO(N_c)
gauge theories, Phys. Rev. E 101, 062105 (2020).
BPV-21 C. Bonati, A. Pelissetto, and E. Vicari, Lattice
Abelian-Higgs models with noncompact gauge field, Phys. Rev. B 103, 085104 (2021).
BFPV-21-su-ad C. Bonati, A. Franchi, A. Pelissetto, and
E. Vicari, Three-dimensional lattice SU(N_c) gauge theories with
multiflavor scalar fields in the adjoint representation, Phys. Rev B
114, 115166 (2021).
BFPV-21-su C. Bonati, A. Franchi, A. Pelissetto, and
E. Vicari, Phase diagram and Higgs phases of 3D lattice SU(N_c)
gauge theories with multiparameter scalar potentials, Phys. Rev. E
104, 064111 (2021).
BPV-22 C. Bonati, A. Pelissetto, and E. Vicari, Critical
behaviors of lattice U(1) gauge models and three-dimensional
Abelian-Higgs gauge field theory, Phys. Rev. B 105, 085112
(2022).
BPV-23 C. Bonati, A. Pelissetto, and E. Vicari,
Coulomb-Higgs phase transition of three-dimensional lattice Abelian
Higgs gauge models with noncompact gauge variables and gauge fixing,
Phys. Rev. E 108, 044125 (2023).
BPV-24 C. Bonati, A. Pelissetto, and E. Vicari, Diverse
universality classes of the topological deconfinement transitions of
three-dimensional noncompact lattice Abelian-Higgs models,
Phys. Rev. D 109, 034517 (2024).
WK-74 K. G. Wilson and J. Kogut, The renormalization group
and the ϵ expansion, Phys. Rep. 12, 75 (1974).
Wilson-74 K. G. Wilson, Confinement of quarks, Phys. Rev. D
10, 2445 (1974).
MM-book I. Montvay and G. Münster, Quantum Fields on
a Lattice, (Cambridge University Press, 1994).
PW-84 R. D. Pisarski and F. Wilczek, Remarks on the chiral
phase transition in chromodynamics, Phys. Rev. D 29, 338
(1984).
Nadkarni:1989na
S. Nadkarni,
The SU(2) Adjoint Higgs Model in Three dimensions,
Nucl. Phys. B 334, 559 (1990).
Kajantie:1993ag
K. Kajantie, K. Rummukainen and M. E. Shaposhnikov,
A Lattice Monte Carlo study of the hot electroweak phase transition,
Nucl. Phys. B 407, 356 (1993).
AY-94 P. Arnold and L. G. Yaffe, The ϵ expansion
and the electroweak phase transition, Phys. Rev. D 49, 3003
(1994).
Buchmuller:1994qy
W. Buchmüller and O. Philipsen,
Phase structure and phase transition of the SU(2) Higgs model
in three-dimensions,
Nucl. Phys. B 443, 47 (1995).
Laine-95 M. Laine, Exact relation of lattice and continuum
parameters in three-dimensional SU(2)+Higgs theories,
Nucl. Phys. B 451, 484 (1995).
Kajantie:1996mn
K. Kajantie, M. Laine, K. Rummukainen, and M. E. Shaposhnikov,
Is there a hot electroweak phase transition at m_H ≳ m_W?,
Phys. Rev. Lett. 77, 2887 (1996).
Meyer-Ortmanns:1996ioo
H. Meyer-Ortmanns,
Phase transitions in quantum chromodynamics,
Rev. Mod. Phys. 68, 473 (1996).
Hart:1996ac
A. Hart, O. Philipsen, J. D. Stack, and M. Teper,
On the phase diagram of the SU(2) adjoint Higgs model in (2+1)-dimensions,
Phys. Lett. B 396, 217 (1997).
BPV-03 A. Butti, A. Pelissetto, and E. Vicari, On the nature
of the finite-temperature transition in QCD, J. High Energy
Phys. 08, 029 (2003).
BVS-06
D. Boyanovsky, H. J. de Vega, and D. J. Schwarz,
Phase transitions in the early and the present universe
Ann. Rev. Nucl. Part. Sci. 56, 441 (2006).
PV-13 A. Pelissetto and E. Vicari, Relevance of the axial
anomaly at the finite-temperature chiral transition in QCD,
Phys. Rev. D 88, 105018 (2013).
IZMHS-19 B. Ihrig, N. Zerf, P. Marquard, I. F. Herbut, and
M. M. Scherer, Abelian Higgs model at four loops, fixed-point
collision and deconfined criticality, Phys. Rev. B 100, 134507
(2019).
DHMNP-81 P. Di Vecchia, A. Holtkamp, R. Musto, F. Nicodemi,
and R. Pettorino, Lattice CP^N-1 models and their large-N
behaviour, Nucl. Phys. B 190, 719 (1981).
FH-96
R. Folk and Y. Holovatch, On the critical fluctuations
in superconductors, J. Phys. A 29, 3409 (1996).
YKK-96 V. Yu. Irkhin, A. A. Katanin, and M. I. Katsnelson,
1/N expansion for critical exponents of magnetic phase transitions
in the CP^N-1 model for 2<d<4, Phys. Rev. B 54, 11953
(1996).
KS-08 R. K. Kaul and S. Sachdev, Quantum criticality of U(1)
gauge theories with fermionic and bosonic matter in two spatial
dimensions, Phys. Rev. B 77, 155105 (2008).
PV-02 A. Pelissetto and E. Vicari, Critical phenomena and
renormalization group theory, Phys. Rep. 368, 549 (2002).
SZJSM-23 M. Song, J. Zhao, M. Cheng, C. Xu, M. M. Scherer, L. Janssen and Z. Y. Meng,
Deconfined quantum criticality lost,
[arXiv:2307.02547 [cond-mat.str-el]].
PV-19 A. Pelissetto and E. Vicari, Three-dimensional
ferromagnetic CP^N-1 models, Phys. Rev. E 100, 022122
(2019).
PV-20-largeNCP A. Pelissetto and E. Vicari, Large-N behavior
of three-dimensional lattice CP^N-1 models, J. Stat. Mech.:
Th. Expt. 033209 (2020).
CLB-86
M. S. S. Challa, D. P. Landau, and K. Binder,
Finite-size effects at temperature-driven first-order transitions
Phys. Rev. B 34, 1841 (1986).
VRSB-93
K. Vollmayr, J. D. Reger, M. Scheucher, and K. Binder,
Finite size effects at thermally-driven first order phase transitions: A
phenomenological theory of the order parameter distribution
Z. Phys. B 91 113 (1993).
CPPV-04 P. Calabrese, P. Parruccini, A. Pelissetto,
and E. Vicari, Critical behavior of O(2)⊗O(N)-symmetric
models, Phys. Rev. B 70, 174439 (2004).
|
http://arxiv.org/abs/2409.02346v1 | 20240904002055 | Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA | [
"Shuangyi Chen",
"Yue Ju",
"Hardik Dalal",
"Zhongwen Zhu",
"Ashish Khisti"
] | cs.LG | [
"cs.LG",
"cs.DC"
] |
Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA
Shuangyi Chen^1, Yue Ju^2, Hardik Dalal^2, Zhongwen Zhu^2, Ashish Khisti^1
^1ECE Department, University of Toronto, Toronto, Canada
^2Ericsson-GAIA, Montréal, Canada
Correspondence: Shuangyi Chen <[email protected]>, Ashish Khisti <[email protected]>
§ ABSTRACT
Parameter-Efficient Fine-Tuning (PEFT) has risen as an innovative training strategy that updates only a select few model parameters, significantly lowering both computational and memory demands. PEFT also helps to decrease data transfer in federated learning settings, where communication depends on the size of updates. In this work, we examine the limitations of previous studies that integrate a well-known PEFT method named LoRA with federated fine-tuning, and then introduce RoLoRA, a robust federated fine-tuning framework that utilizes an alternating minimization approach for LoRA, providing greater robustness against decreasing fine-tuning parameters and increasing data heterogeneity. Our results indicate that RoLoRA not only preserves the communication benefits but also substantially enhances robustness and effectiveness in multiple federated fine-tuning scenarios.
§ INTRODUCTION
The recent emergence of foundation models in various applications significantly changes the field of machine learning. Characterized by their broad adaptability and massive scale, these models require access to vast and diverse datasets to effectively learn across different tasks and domains. However, this presents a significant challenge: foundation models not only require large amounts of data, but also data of high quality.
Federated learning provides a promising solution to this issue. It enables the use of data from multiple sources while protecting the privacy of the data. By combining insights from different decentralized sources, federated learning allows for collaborative model training without exposing sensitive information. This method is especially beneficial for foundation models, as it can access a broad range of data while maintaining privacy.
Recently, Parameter-Efficient Fine-Tuning (PEFT) has emerged as an innovative training strategy that updates only a small subset of model parameters, substantially reducing computational and memory demands. A notable method in this category is LoRA <cit.>, which utilizes low-rank matrices to approximate weight changes during fine-tuning. These matrices are integrated with pre-trained weights for inference, facilitating reduced data transfer in scenarios such as federated learning, where update size directly impacts communication efficiency. Many works integrate LoRA into the federated setting. For example, FedPETuning <cit.> compared various PEFT methods in a federated setting. SLoRA <cit.>, a hybrid approach that combines sparse fine-tuning with LoRA, was introduced to tackle data heterogeneity in federated settings. Furthermore, FS-LLM <cit.>, a framework for fine-tuning LLMs in federated environments, was presented. However, these studies typically apply the FedAVG algorithm directly to LoRA modules, overlooking the interference introduced by this aggregation approach. With this consideration, Sun et al. designed a federated finetuning framework named FFA-LoRA <cit.>, based on LoRA, by freezing the down-projection matrix 𝐀 for all the clients and only updating the up-projection matrix 𝐁. Furthermore, they apply DP-SGD to preserve privacy. Using a sufficient number of finetuning parameters, FFA-LoRA with a larger learning rate achieves performance comparable to FedAVG for LoRA modules while halving the communication costs. However, we observe that with fewer fine-tuning parameters, FFA-LoRA is less robust than FedAVG for LoRA modules, primarily due to its limited expressiveness stemming from the restricted number of trainable parameters.
Another common issue in federated learning is data heterogeneity among clients. To address this, we drew inspiration from the personalized federated framework FedRep <cit.>, which alternates between updating clients' representation and head. This approach highlights the importance of learning a robust low-rank representation and demonstrates superior convergence speed compared to simultaneously updating both representation and head.
Therefore, we propose a robust federated fine-tuning framework, RoLoRA, based on alternating minimization of LoRA. Empirical evidence demonstrates that RoLoRA is more robust against decreasing fine-tuning parameters and increasing data heterogeneity, while still halving communication costs, similar to FFA-LoRA.
Related Work We provide a summary of the literature on PEFT, Variants of LoRA, PEFT in Federated Setting, and FL with data heterogeneity in Appendix <ref>.
§ PRELIMINARIES
§.§ Low-Rank Adaptation: LoRA
Low-Rank Adaptation (LoRA) <cit.> fine-tunes large language models efficiently by maintaining the original model weights fixed and adding small, trainable matrices in each layer. These matrices perform low-rank decompositions of updates, reducing the number of trainable parameters. This approach is based on the finding that updates to model weights during task-specific tuning are usually of low rank, which allows for fewer parameters to be adjusted. For example, for a pre-trained weight matrix 𝐖_0 ∈ℝ^d× d, the update is a low-rank product 𝐁𝐀, where 𝐀∈ℝ^r× d and 𝐁∈ℝ^d× r, with r << d. Only 𝐀 and 𝐁 are trainable, allowing 𝐖 = 𝐖_0 + α𝐁𝐀, with α adjusting the update's impact.
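As a concrete illustration of this reparametrization, a minimal LoRA-style linear layer could look as follows. This is a sketch assuming PyTorch; the initialization, scaling, and names are illustrative rather than a reproduction of any particular implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained weight W0 plus a trainable low-rank update (alpha/r) * B A."""
    def __init__(self, d_in, d_out, r=8, alpha=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(d_out, d_in), requires_grad=False)  # W0, frozen
        nn.init.normal_(self.weight, std=0.02)                # stands in for the pretrained weights
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)    # down-projection A (r x d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))          # up-projection B, initialized to zero
        self.scale = alpha / r

    def forward(self, x):
        # Equivalent to x @ (W0 + scale * B A)^T, applied as two cheap low-rank matmuls.
        return x @ self.weight.T + self.scale * ((x @ self.A.T) @ self.B.T)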
Applying LoRA in a federated setting is a practical choice. By using LoRA adapters, clients can fine-tune foundation models efficiently with limited resources. Since only these specific matrices need to be transmitted to a central server, this approach significantly reduces communication costs. This makes LoRA an advantageous solution for enhancing model performance in collaborative scenarios compared to full-parameter finetuning in the federated setting.
§.§ FedAVG of LoRA Introduces Interference
Integrating LoRA within a federated setting presents challenges. In such a setup, each of the N clients is provided with the pretrained model weights 𝐖_0, which remain fixed during finetuning. Clients are required only to send the updated matrices 𝐁_i and 𝐀_i to a central server for aggregation. While most current studies, such as SLoRA<cit.> and FedPETuning<cit.>, commonly apply FedAVG directly to these matrices as shown in (<ref>), this approach might not be optimal. The precise update for each client’s model, Δ𝐖_i, should be calculated as the product of the low-rank matrices 𝐀_i and 𝐁_i. Consequently, aggregation on the individual matrices introduces interference.
1/N∑_i=1^N Δ𝐖_i = 1/N (𝐁_1𝐀_1+𝐁_2𝐀_2+...+ 𝐁_N𝐀_N)
≠1/N (𝐁_1+𝐁_2+...+𝐁_𝐍) 1/N (𝐀_1+𝐀_2+...+𝐀_𝐍)
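This mismatch is easy to verify numerically. In the toy check below, random matrices stand in for the clients' LoRA factors, and the average of the products is compared with the product of the averages; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
d, r, N = 64, 4, 3                       # hidden size, LoRA rank, number of clients (toy values)
A = [rng.normal(size=(r, d)) for _ in range(N)]
B = [rng.normal(size=(d, r)) for _ in range(N)]

true_update  = sum(B_i @ A_i for B_i, A_i in zip(B, A)) / N    # average of the products
naive_update = (sum(B) / N) @ (sum(A) / N)                     # product of the averages

err = np.linalg.norm(true_update - naive_update) / np.linalg.norm(true_update)
print(f"relative deviation introduced by naive FedAVG of A and B: {err:.2f}")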
§.§ FedRep: Common Representation via Alternating Minimization
A common challenge in federated learning is data heterogeneity among clients. FedRep <cit.> addresses this by finding a common representation, which is effectively achieved via alternately updating the clients' representation and head, aggregating only the representation while keeping the head diverse. The algorithm demonstrates the necessity of learning a robust low-rank representation. Additionally, the alternating optimization has shown superior convergence speed compared to approaches that simultaneously update both representation and head. We observe a structural similarity between the LoRA adapter and the representation-head structure. Specifically, we consider the down-projection matrix 𝐀 in each LoRA adapter as the low-rank representation for the features of the intermediate layers. We hypothesize that learning a robust low-rank representation (down-projection matrix) is also advantageous for the intermediate features when the clients have heterogeneous inputs. However, since the LoRA adapters are cascaded in the model, unlike the single representation-head structure in the model considered in FedRep, keeping the up-projection matrix 𝐁 diverse may not be favorable for convergence.
With these considerations, we propose RoLoRA, a robust federated fine-tuning framework based on alternating minimization of LoRA.
§ OUR FRAMEWORK
We describe the framework design of RoLoRA and discuss its practical advantages.
Alternating Minimization and Corresponding Aggregation
Motivated by the observations discussed in Sections <ref> and <ref>, we propose applying alternating minimization to the local fine-tuning of each client in a setting with N clients. Unlike the approach in FFA-LoRA, where 𝐀 is consistently frozen, we suggest an alternating update strategy: odd and even communication rounds are designated for updating and aggregating 𝐁 and 𝐀, respectively.
In the odd comm. round: 1/N∑_i=1^N Δ𝐖_i^2t+1
= 1/N (𝐁_1^t+1𝐀_1^t+𝐁_2^t+1𝐀_2^t+...+ 𝐁_N^t+1𝐀_N^t)
= 1/N (𝐁_1^t+1+𝐁_2^t+1+...+ 𝐁_N^t+1)𝐀^t
In the even comm. round: 1/N∑_i=1^N Δ𝐖_i^2t+2
= 1/N (𝐁_1^t+1𝐀_1^t+1+𝐁_2^t+1𝐀_2^t+1+...+ 𝐁_N^t+1𝐀_N^t+1)
= 1/N𝐁^t+1(𝐀_1^t+1+𝐀_2^t+1+...+ 𝐀_N^t+1)
In the odd communication round, all clients freeze 𝐀^t and update 𝐁^t. The central server then aggregates these updates to compute 𝐁^t+1 = 1/N∑_i=1^N𝐁^t+1_i and distributes 𝐁^t+1 back to the clients. In the subsequent communication round, clients freeze 𝐁^t+1 and update 𝐀^t. The server aggregates these to obtain 𝐀^t+1 = 1/N∑_i=1^N𝐀^t+1_i and returns 𝐀^t+1 to the clients.
It is important to note that in round 2t+1, the frozen 𝐀_i^t are identical across all clients, as they are synchronized with 𝐀^t from the central server at the beginning of the round. This strategy ensures that the update and aggregation method introduces no interference, as demonstrated in (<ref>) and (<ref>).
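Schematically, one communication round of RoLoRA can be organized as in the sketch below. This is only a high-level outline: local_finetune is a hypothetical stand-in for the client's local SGD steps on its own data, the factors are plain arrays, and only the unfrozen factor is communicated and averaged.

def rolora_round(server_A, server_B, clients, t):
    """One RoLoRA communication round (schematic): odd t updates B, even t updates A."""
    update_B = (t % 2 == 1)
    received = []
    for client in clients:
        # every client starts from the synchronized pair (server_A, server_B)
        A, B = server_A.copy(), server_B.copy()
        if update_B:
            received.append(client.local_finetune(A_frozen=A, B_init=B))  # only B is trainable
        else:
            received.append(client.local_finetune(B_frozen=B, A_init=A))  # only A is trainable
    aggregate = sum(received) / len(received)        # FedAVG over the unfrozen factor only
    return (server_A, aggregate) if update_B else (aggregate, server_B)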
Computation and Communication Cost
The parameter-freezing nature of RoLoRA enhances computational and communication efficiency. In each communication round, the number of trainable parameters in the model is effectively halved compared to FedAVG with LoRA. The only additional cost for RoLoRA compared to FFA-LoRA is the alternating freezing of the corresponding parameters. We remark that this additional cost is negligible because it is applied to the clients' models and can be executed concurrently with the server's aggregation.
§ EXPERIMENTS
We evaluate the performance of RoLoRA in various federated settings. We use NVIDIA GeForce RTX 4090 or NVIDIA A40 for all the experiments.
Baselines. Considering a cross-silo federated setting, where the number of clients is relatively small and all clients participate in each round, we explore the following three methods based on FedAVG.
* LoRA means LoRA adapter and its finetuning algorithm are directly applied to local finetuning of clients in the federated system. Specifically, in iteration t, the server receives 𝐀_i^t and 𝐁_i^t from client i and aggregates by 𝐀^t = 𝖠𝗏𝗀(𝐀_i^t) and 𝐁^t = 𝖠𝗏𝗀(𝐁_i^t).
* FFA-LoRA <cit.> is a baseline that enables the clients to finetune 𝐁 and keep 𝐀 frozen locally. Thus, in iteration t, the server aggregates by 𝐁^t = 𝖠𝗏𝗀(𝐁_i^t).
* RoLoRA enables clients to alternate updating 𝐀 and 𝐁 as described in Section <ref>.
Model and Datasets. We take the pre-trained RoBERTa-Large (355M) <cit.> models from the HuggingFace Transformers library and evaluate the performance of three federated finetuning methods on 5 datasets (SST-2, QNLI, MNLI, QQP, RTE) from the GLUE benchmark <cit.>. Due to the limitation of the unpublished test set in GLUE, we follow the previous studies <cit.> and use the original validation set as the new test set and split a part of the training set as the validation set.
Implementation. We implement all the methods based on FederatedScope-LLM <cit.>. To make a fair comparison, for each dataset, we obtain the best performance on the test set and report the average over five seeds. Specifically, the learning rate is chosen from the set {5e-4, 1e-3, 2e-3, 5e-3, 1e-2, 2e-2, 5e-2, 1e-1, 2e-1}. Other hyper-parameters for the experiments are specified in Table <ref> in Appendix <ref>.
Effect of Number of Finetuning Parameters
In Figure <ref>, we compare three methods across five GLUE datasets. We apply LoRA to every weight matrix of the selected layers, given different budgets of LoRA parameters. For each dataset, we experiment with three budgets, ranging from high to low. The corresponding layer sets, 𝒫_1, 𝒫_2, 𝒫_3, are detailed in Table <ref> in Appendix <ref>.
The figure indicates that with a sufficient number of finetuning parameters, the three methods can achieve comparable best accuracy; as the number of LoRA parameters is reduced, the performance of the three methods deteriorates to varying degrees. However, RoLoRA, which achieves performance comparable to LoRA, demonstrates greater robustness compared to FFA-LoRA, especially under conditions of limited fine-tuning parameters. It is important to note that with the same finetuning parameters, the communication cost of RoLoRA and FFA-LoRA is always half of that of LoRA due to their parameter-freezing nature. This implies that RoLoRA not only sustains its performance but also enhances communication efficiency. We expand the middle set of data in each panel of Figure <ref>, corresponding to 𝒫_2, and show the details of the performance of the three methods in Table <ref>.
Effect of Data Heterogeneity
In this section, we study the effect of data heterogeneity. The layer set with LoRA adapters in Table <ref> is 𝒫_2 as in Table <ref>. In Table <ref>, we increased the number of clients from 3 to 20, and then to 50, ensuring that there is no overlap in the training samples each client can access. Consequently, each client receives a smaller fraction of the total dataset, leading to a rise in data heterogeneity among the clients. We observe that as the data heterogeneity increases, while maintaining the same number of fine-tuning samples, the performance of the LoRA method significantly deteriorates for most datasets. In contrast, RoLoRA maintains its accuracy levels. The performance of FFA-LoRA also declines, attributed to the limited expressiveness of the random initialization of 𝐀 for clients' heterogeneous data. Notably, RoLoRA achieves this accuracy while incurring only half the communication costs associated with LoRA. Figure <ref> in Appendix <ref> illustrates the dynamics during fine-tuning for three methods, highlighting that the convergence speed of RoLoRA is substantially better than that of the other two methods.
Align Communication Cost for Three Methods
In Table <ref>, we compare the three methods under the constraint of identical communication costs, under the assumption that the number of clients is small. To align the communication costs across these methods, two approaches are considered. The first approach involves doubling the rank of FFA-LoRA and RoLoRA, with the outcomes detailed in Table <ref>. The second approach requires doubling the number of layers equipped with LoRA adapters. In the results presented in Table <ref>, the latter strategy is employed. Specifically, for both FFA-LoRA and RoLoRA, we adjust the communication costs by doubling the number of layers equipped with LoRA adapters, compared to the baseline LoRA method, where the layer set 𝒫_3 is attached with adapters, thereby standardizing the size of the transmitted messages. Table <ref> demonstrates that when operating within a constrained communication cost budget, the performance of RoLoRA consistently surpasses that of the other two methods.
More experimental results with different models and settings are provided in Appendix <ref>.
§ CONCLUSION
In this work, we introduce RoLoRA, a robust federated fine-tuning framework using alternating minimization for LoRA. RoLoRA improves robustness against reduced fine-tuning parameters and increased data heterogeneity. Our results show that RoLoRA enhances communication efficiency, robustness, and effectiveness in various federated fine-tuning settings.
icml2024
§ RELATED WORKS
§.§ Parameter Efficient Fine Tuning (PEFT): LoRA and Its Variants
As the size of large language models (LLMs) continues to increase, it is computationally expensive and time-consuming to finetune the full model. Parameter efficient finetuning (PEFT) allows for updates to a smaller subset of parameters, significantly reducing the computational and memory requirements. One of the most well-known methods is LoRA<cit.>.
LoRA uses low-rank matrices to approximate changes in weights during fine-tuning, allowing them to be integrated with pre-trained weights before inference. Based on LoRA, many PEFT methods are developed. For example, Zhang et al. <cit.> designs AdaLoRA by using SVD decomposition and pruning less significant singular values for more efficient updates. VeRA <cit.> is proposed to further reduce the number of trainable parameters during finetuning by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors. Zhang et al. <cit.> proposes a memory-efficient fine-tuning method named LoRA-FA which keeps the projection-down weight of 𝐀 fixed and updates the projection-up weight of 𝐁 during finetuning. Hayou et al. <cit.> enhance LoRA by assigning different learning rates to 𝐀 and 𝐁, theoretically confirming that the optimal approach requires a higher learning rate for 𝐁 than for 𝐀. Liu et al. analyze magnitude and directional updates in LoRA versus full parameter fine-tuning and introduce DoRA<cit.>, which decomposes pre-trained weights for fine-tuning and applies LoRA for directional updates. A quantized version of LoRA named QLoRA<cit.> is introduced. Building upon that, Li et al. develops LoftQ <cit.> for a better initialization for quantized training.
§.§ PEFT in Federated Setting
PEFT adjusts only a few lightweight or a small portion of the total parameters for specific tasks, keeping most foundational model parameters unchanged. This feature can help reduce data transfer in federated learning, where communication depends on the size of updates. Zhang et al. <cit.> compares multiple PEFT methods in federated setting, including Adapter<cit.>, LoRA<cit.>, Prompt tuning<cit.> and Bit-Fit<cit.>. SLoRA<cit.>, which combines sparse finetuning and LoRA, is proposed by Babakniya et al. to address the data heterogeneity in federated setting.
Sun et al. designs a federated finetuning framework named FFA-LoRA based on LoRA <cit.> by freezing matrix 𝐀 for all the clients and only updating matrix 𝐁. Furthermore, they apply DP-SGD to preserve privacy. FS-LLM <cit.>, a framework for finetuning LLM, is introduced.
§.§ FL with Data Heterogeneity
FL with a Common Representation
FL with a common representation aims to address the challenges of data heterogeneity in FL. Those FL methods learn a shared global representation while allowing each client to have its own personalized partial model. Works include FedRep<cit.>, which learns a global low-dimensional representation and personalized head for each client, FedCR <cit.>, which introduces a regularizer to encourage learning a shared representation, and FedPAC <cit.>, which performs class-wise feature alignment. Other methods like FedBABU <cit.> and FedRoD <cit.> also aim to learn a shared representation across clients. Although we focus on a common model in the federated setting in this work, we are inspired by the training algorithm introduced by FedRep <cit.> to learn a low-rank representation for the intermediate features. We discuss the similarities and differences between the LoRA adapter and the representation-head structure.
§ EXPERIMENTS
§.§ Setup
We show the hyper-parameter configurations for each dataset in Table <ref>.
In Table <ref>, we include the details about layers attached with LoRA adapters for different budget of finetuning parameters, for each dataset.
§.§ More Results
§.§.§ Communication Cost
In Table <ref>, we show the uplink communication cost for three methods for the layer set 𝒫_2 using rank=1.
§.§.§ Finetuning Dynamics of the Setup with Severe Data Heterogeneity
In Figure <ref>, we show the convergence of three methods under severe data heterogeneity with 50 clients. RoLoRA demonstrates superior convergence speed compared to the other two methods.
§.§.§ DeBERTa-XLarge Results
In Table <ref>, we show the results with DeBERTa-XLarge (900M). For both FFA-LoRA and RoLoRA, we modify the communication costs by equipping twice as many layers with LoRA adapters compared to the standard LoRA method. So the communication costs are aligned for three methods.
§.§.§ Performance when QLoRA is Applied
In Table <ref>, we quantize the frozen pre-trained weights to 8 bit and 4 bit for each client, and apply QLoRA<cit.>. The relative accuracy is computed as 𝖠𝖼𝖼_𝖥𝖯-𝖠𝗏𝗀(𝖠𝖼𝖼_8b+𝖠𝖼𝖼_4b)/𝖠𝖼𝖼_𝖥𝖯. We use sufficient finetuning parameters and the selected layer set is 𝒫_1 to study the effect of the quantized foundation model in the federated setting.
|
http://arxiv.org/abs/2409.03101v1 | 20240904220145 | Deciphering the influence of neutron transfer in Si-based fusion reactions around the Coulomb barrier | [
"Rinku Prajapat"
] | nucl-th | [
"nucl-th",
"nucl-ex"
] |
^1GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt 64291, Germany
^2Astronomy and Physics Department, Saint Mary's University, Halifax B3H C3C, Canada
§ ABSTRACT
Background: The enhancement in sub-barrier fusion cross-sections caused by different intrinsic degrees of freedom, such as inelastic excitations and deformations, has been well explored recently. However, the influence of positive Q-value neutron transfer (PQNT) channels on fusion dynamics, and its microscopic understanding, are still far from complete.
Purpose: We aim to investigate the role of a few neutron transfer channels on the dynamics of fusion reactions around the Coulomb barrier by judiciously selecting 11 different ^28,30Si-induced systems. These reactions are chosen in such a way that they possess positive and negative Q-values for neutron transfer channels to make the comparison more apparent. Furthermore, a comparative study on fusion barrier parameters using different proximity potentials and parametrizations is also a prime goal.
Method: A channel coupling approach within the framework of a semiclassical model is used to investigate the role of multi-neutron transfer with positive Q-values on fusion phenomena near and below the Coulomb barrier. The fusion barrier parameters have been extracted and analyzed within the framework of seven different potential models.
Results: The sub-barrier fusion enhancement compared to the one-dimensional barrier penetration model (uncoupled) is investigated by considering collective excitations in colliding nuclei and multi-neutron transfer channels with Q > 0 within the channel coupling model. Furthermore, GRAZING calculations are performed to predict the cross-section of target-like fragments after 2n pickup transfer.
Conclusion: All the fusion excitation functions (EFs) have been successfully explained by the coupled channel calculations using the channel coupling model. A significant effect on sub-barrier fusion was found only for up to 2n pickup transfer with Q > 0. Despite positive Q values for further transfer channels, no noticeable impact of more than 2n transfer was observed. GRAZING predictions are broadly of the same order as the quantitative contribution of the 2n transfer channels obtained from the channel coupling model calculations. All potential models successfully reproduce the experimental barrier height and radius within 5 % and 20 % differences, respectively, except for the Prox 77 potential. We also discuss the role of deformations after 2n transfer and their impact on the sub-barrier fusion cross-section.
25.60.Dz (Interaction and reaction cross sections), 25.60.Pj(Fusion reactions), 25.70.-z (Low and Intermediate energy heavy-ion reactions), 25.70.Gh (Compound nucleus), 25.40.Hs (Transfer reactions)
Deciphering the influence of neutron transfer in Si-based fusion reactions around the Coulomb barrier
Rinku Prajapat^1,2[Corresponding Author: [email protected]]
September 9, 2024
=====================================================================================================
§ INTRODUCTION
Heavy-ion fusion and multinucleon transfer (MNT) reactions are two of the essential pathways to understand the diverse phenomena of nuclear physics, such as the production mechanism of exotic nuclei and associated structural peculiarities, and stellar nucleosynthesis around the Coulomb barrier <cit.>. In particular, fusion reactions are of special interest, governed by passing over or quantum penetration through the Coulomb barrier and influenced by various structural and dynamical processes. Despite the paramount importance of such reactions, their microscopic understanding is still far from complete. For example, different degrees of freedom such as collective excitations (rotation or vibration), static deformations, and neutron transfer couplings distribute the single barrier into multiple fusion barriers, which give rise to enhancement in sub-barrier fusion cross-sections <cit.>. Such a dramatic enhancement process has been explored to some extent within the framework of collective excitations and deformations <cit.>. However, the role of neutron transfer with positive Q-value is not entirely understood, due to the complexity of considering full-fledged transfer channel coupling in coupled channel calculations.
Thus, a series of theoretical calculations <cit.> and experimental measurements <cit.> have been performed to understand the impact of PQNT channels on sub-barrier fusion cross-sections. For instance, for the first time, Beckerman et al. <cit.> demonstrated the fusion enhancement caused by PQNT in ^58,64Ni+^58,64Ni reactions around the Coulomb barrier energies. Later, a set of experiments confirmed the role of PQNT channels in sub-barrier fusion enhancement. However, no such enhancement was witnessed in several reactions, including ^30Si+^156Gd <cit.>, ^32S+^112,116Sn <cit.>, ^132Sn+^58Ni <cit.>, and ^18O+^74Ge <cit.>, despite the presence of PQNT channels, indicating the need for further experimental and theoretical studies in this domain.
At the onset of theoretical understanding, Stelson et al. <cit.>, followed by Rowley et al. <cit.>, proposed a pragmatic approach by introducing phenomenological models to incorporate the flow of neutrons between the colliding nuclei and its correlation with sub-barrier fusion enhancement. Neutron flow occurs within a short interaction time of 10^-22 s. In their study, Rowley et al. <cit.> found that neutron transfer with a negative Q-value broadens the barrier distribution (BD), which builds up a neck formation between the fusing participants. However, the same study also evidenced an antinecking configuration for the PQNT channel. Later, the quantum mechanical coupled channel (QCC) approach emerged to explain the dramatic nature of fusion enhancement. Nonetheless, it has been realized that incorporating neutron transfer in the QCC approach is a complex task, owing to the need to include it in the total coupled channel Schrödinger equations for the decomposition of the total wave function <cit.>. Therefore, a strong motivation emerged to develop an approach that can consider inelastic excitations as well as multi-neutron transfer in its calculation kernel. Because of this, an empirical channel coupling (ECC) approach <cit.> was proposed, allowing for both collective excitations and multi-neutron transfer with Q > 0 simultaneously. This approach is well established and has recently been used in several near-barrier fusion studies with neutron rearrangement <cit.>.
In addition, considerable efforts have been made to populate neutron-rich heavy nuclei via the MNT approach, since fission and fragmentation reactions have very low production cross sections <cit.>. This is also a central goal of existing and upcoming Radioactive Ion Beam (RIB) facilities <cit.>, such as FRS/SFRS at GSI/FAIR in Germany, RIBF at RIKEN, and NSCL at MSU.
In the past, semi-classical GRAZING and dinuclear system (DNS) models have been successfully utilized to describe the cross-section, mass, and charge distribution of projectile/target-like fragments (PLFs/TLFs) from MNT reactions <cit.>. Despite these models' success, several measurements differ significantly from their predictions <cit.>. Hence, having the theoretical input using GRAZING before conducting such large-scale experiments is always desirable.
Different theoretical model approaches in heavy-ion collisions play a prominent role in extracting the fusion observables, such as the barrier height and radius. Such parameters are also essential for synthesizing superheavy elements and for the RIB facilities available worldwide <cit.>. To understand such intriguing interactions, different parametrized models and proximity potentials, such as Prox 2000, Prox 2010 <cit.>, Bass 1980 <cit.>, Kumari et al. <cit.>, and Zhang et al. <cit.>, have become available in recent times and need to be benchmarked against the data.
The facts mentioned above suggest that robust theoretical calculations are needed to investigate the role of a few neutron transfer channel couplings with Q > 0 in addition to collective excitations of the colliding nuclei. For this purpose, eleven different reactions are selected with the following motivations: (i) each pair of reactions has positive (^28Si+^62,64Ni, ^30Si+^58Ni, and ^28Si+^92,94,96Zr) and negative (^28Si+^58Ni, ^30Si+^62,64Ni, and ^28Si+^90Zr) Q-value transfer channels so that the impact of PQNT channels can be disentangled using the ECC model, (ii) the projectiles are kept similar (^28,30Si) so that the structural effect of the targets and the PQNT couplings (^28Si ⟷ ^30Si after 2n transfer) can be studied, (iii) GRAZING calculations will be performed for those reactions which show evidence of neutron transfer, and (iv) fusion barrier parameters will be extracted from the available data and compared with different theoretical models to deepen the current understanding.
The paper is organized as follows: Sec. <ref> presents the theoretical formalism; results and interpretations are discussed in Secs. <ref> to <ref>. Finally, Sec. <ref> concludes the work.
§ THEORETICAL FORMALISM
The fusion of two atomic nuclei at sub-barrier energies is governed by quantum tunneling phenomena <cit.> and regulated by the real interaction potential, which consists of Coulomb, nuclear, and centrifugal terms. For low-energy reactions in the light/intermediate-mass region, the fusion cross-section (σ_fus) at a particular center-of-mass energy (E_c.m.) and angular momentum (ℓ) can be expressed as a sum over all partial waves [Eq. <ref>]:
σ_fus(E_c.m.) = π^2∑_ℓ=0^∞(2ℓ+1) T_ℓ(E_c.m.)
The absorption probability of the ℓ^th partial wave, T_ℓ^HW(E_c.m.)=[1+exp{(2π/ħω_ℓ)(V_bℓ–E_c.m.)}]^-1, has been determined using the Hill-Wheeler approach <cit.>. Here, V_bℓ and ħω_ℓ are the barrier height (in MeV) and curvature for the ℓ^th partial wave, respectively. For ℓ=0, the curvature and radius (R_bℓ) are independent of angular momentum; in such a scenario, they can be taken as the S-wave values ħω_ℓ=ħω and R_bℓ=R_b. Therefore, the fusion barrier can be extracted by fitting the measured fusion cross section with Wong's formula <cit.> (Eq. <ref>)
σ_fus(E_c.m.) = R_b^2ħω/2E_c.m.ln{1+exp[2π/ħω(E_c.m.- V_b)]}
At the energies well above the Coulomb barrier, i.e. (E_c.m.-V_b)≥ ħω/2π, Eq. <ref> can be approximated to a simplified classical expression as follows [Eq. <ref>]
σ_fus(E_c.m.) = π R_b^2(1-V_b/E_c.m.),
The linear dependence of σ_fus on 1/E_c.m. is well known at above-barrier energies.
From the linear fit of the data, the values of V_b and R_b can be calculated for the different systems to compare with those obtained from the different potential models (see Sec. <ref>).
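As an illustration of this extraction, the sketch below evaluates Wong's formula and then recovers V_b and R_b from the linear dependence of σ_fus on 1/E_c.m.; the barrier parameters and the energy grid are illustrative placeholders rather than values for any of the systems studied here.

import numpy as np

V_B, R_B, HBAR_OMEGA = 53.0, 9.5, 3.5     # MeV, fm, MeV -- illustrative values only

def wong_sigma(E_cm):
    """Wong's formula; returns sigma_fus in mb (1 fm^2 = 10 mb)."""
    sigma_fm2 = (R_B**2 * HBAR_OMEGA / (2.0 * E_cm)) * np.log1p(
        np.exp(2.0 * np.pi * (E_cm - V_B) / HBAR_OMEGA))
    return 10.0 * sigma_fm2

# "Data" at well-above-barrier energies, where the classical expression holds.
E = np.linspace(65.0, 90.0, 8)            # MeV
sigma = wong_sigma(E)                     # mb

# Linear fit of sigma versus 1/E: intercept = 10*pi*R_b^2 (mb), slope = -intercept*V_b.
slope, intercept = np.polyfit(1.0 / E, sigma, 1)
R_b_fit = np.sqrt(intercept / (10.0 * np.pi))     # fm
V_b_fit = -slope / intercept                      # MeV
print(f"V_b = {V_b_fit:.1f} MeV, R_b = {R_b_fit:.2f} fm")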
However, the interaction potential depends not only on the relative separation between the colliding partners but also on their deformation characteristics (β_p, β_t) and mutual orientations (θ_p, θ_t); here p and t refer to the projectile and target nucleus, respectively. Hence, the potential energy can be written as follows <cit.>:
V_p,t(r, β_p, β_t, θ_p, θ_t) = V_C(r, β_p, β_t, θ_p, θ_t)
+ V_N(r, β_p, β_t, θ_p, θ_t) + 1/2C_p(β_p-β_p^0)^2 + 1/2C_t(β_t-β_t^0)^2
The parameters β_p, t, β_p, t^0, and C_p,t are the dynamic and static quadrupole deformation, and stiffness parameters of projectile-target nuclei, respectively. The stiffness is calculated within the framework of the liquid drop model. Here, θ_p, t are the orientations of the symmetry axes, as shown in Fig. <ref>.
In such a scenario, there are mainly two cases for the potential energy: (i) for two spherical nuclei (β_p, t = 0) the interaction potential is close to the Bass barrier; (ii) for deformed nuclei, the potential barrier has to be calculated at different relative orientations. However, for representation purposes, only the two limiting cases of θ_p, t = π/2 (side-by-side orientation) and θ_p, t = 0 (tip-to-tip orientation) are shown in Fig. <ref>. Furthermore, the multidimensional character of the potential barrier for ^28Si+^90Zr (^28Si deformed and ^90Zr spherical) is shown in Fig. <ref>.
One can observe from the same figure (Fig. <ref>) the differences in the barrier observables (height, position, and curvature) that influence the fusion cross-sections.
To simulate such multidimensional character of potential barriers, one has to solve a multidimensional Schrödinger equation.
Within the ECC model, for spherical nuclei, which involve only a single degree of freedom, namely the coupling to their surface vibrations, the transmission probability T_ℓ(E_c.m.) is calculated by averaging over the barrier height V_b (Eq. <ref>)
T_ℓ(E_c.m.) = ∫ f(V_b) T_ℓ^HW(V_b; E_c.m.) dV_b
where f(V_b) is the barrier distribution function <cit.> and can be estimated using the normalization condition ∫ f(V_b)dV_b = 1.
Meanwhile, the sub-barrier fusion cross-section and associated dynamics of spherical and statically deformed nuclei mainly depend on the coupling to their surface vibrations and on the mutual orientation of the colliding partners. Hence, for deformed nuclei the penetration probability is averaged over the deformation-dependent barrier height, as follows [Eq. <ref>]:
T_ℓ(E_c.m.) = 1/4∫_0^π∫_0^π T_ℓ^HW(V_b(β_p,t; θ_p,t), E_c.m.)
× sinθ_1 sinθ_2 dθ_1 dθ_2
To account for multi-neutron transfer (rearrangement) with Q > 0, where the incoming flux may penetrate the multidimensional
Coulomb barrier for the different neutron transfer channels, the penetration probability is calculated by Eq. <ref> or <ref>, in which T_ℓ^HW has to be replaced by the following expression [Eq. <ref>]:
T_ℓ^HW(V_b; E_c.m.) = 1/N_tr∑_x = 0^4∫_- E_c.m.^Q_xnα_k(E_c.m., ℓ, Q)
× T_ℓ^HW(V_b; E_c.m. + Q)dQ
N_tr and Q_xn are the normalization constant and the Q value of x-neutron transfer from the ground state of one participant to the ground state of the other participating nucleus, respectively. Then, the probability of x-neutron transfer with Q > 0 can be estimated using the following expression [Eq. <ref>]:
α_k(E_c.m., ℓ, Q) = N_k e^-C(Q-Q_opt)^2 e^-2δ(D(E_c.m., ℓ) - D_0)
where D(E_c.m., ℓ) and D_0 are the distance of closest approach of the two colliding partners and d_0(A_p^1/3 + A_t^1/3), respectively, with d_0 = 1.2 fm <cit.>, and A_p, A_t are the mass numbers of the projectile and target nuclei. The parameter δ is δ = δ(ϵ_1) + δ(ϵ_2) + δ(ϵ_3) +...+ δ(ϵ_k) for the sequential transfer of k neutrons, with δ(ϵ_k) = √(2μ_nϵ_k/ħ^2), where ϵ_k is the separation energy of the k-th transferred neutron. The parameter C = R_bμ_p,t/4δħ^2 V_b, with μ_p,t the reduced mass of the projectile-target system, and Q_opt is the optimum Q value for nucleon transfer. Nucleon transfer in heavy-ion-induced reactions is regulated mainly by Q_opt. However, the charge of the colliding partners does not change in neutron transfer channels, leading to Q_opt = 0 <cit.>.
Thus, the probability for k neutron transfer at E_c.m. and ℓ, in the entrance channel to the final state with Q ≤ Q_0(k), where Q_0(k) is the Q-value for the ground state to ground state transfer reaction, can be as written as follow [Eq. <ref>]:
α_k(E_c.m., ℓ, Q) = N_k e^-CQ^2 e^-2δ(D(E_c.m., ℓ) - D_0)
where N_k is expressed as :
N_k(E_c.m.) = [ ∫_-E_c.m.^Q_0(k) exp(-CQ^2)dQ]^-1
However, N_k and second exponent of Eq. <ref> should be replaced by 1 if D(E_c.m., ℓ) < D_0.
As one can notice from Eq. <ref>, the gain in energy due to positive Q-value neutron transfer channels may lead to an enhancement of the fusion probability in the sub-barrier energy domain. However, it is to be noted that up to four neutron transfer channels are taken into account in the present calculations, as no significant effect is visible for 5n and 6n transfer.
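To make the Q-value averaging concrete, the sketch below evaluates an s-wave Hill-Wheeler transmission coefficient in which the incident energy is shifted by the transfer Q value and weighted by the exp(-CQ^2) factor of the transfer probability; the barrier, curvature, C, and Q_xn values are illustrative placeholders, the distance-of-closest-approach factor is omitted, and the overall normalization is simplified.

import numpy as np

V_B, HBAR_OMEGA = 53.0, 3.5                        # MeV: illustrative barrier height and curvature
C_Q = 0.08                                          # MeV^-2: illustrative width of the Q distribution
Q0 = {0: 0.0, 1: -1.5, 2: 2.6, 3: -0.5, 4: 1.2}     # placeholder ground-state Q_xn values (MeV)

def t_hw(E):
    """Hill-Wheeler transmission through a parabolic barrier (l = 0)."""
    return 1.0 / (1.0 + np.exp(2.0 * np.pi * (V_B - E) / HBAR_OMEGA))

def t_ecc(E):
    """Transmission averaged over the 0n-4n transfer channels with Gaussian Q weighting."""
    t_sum = 0.0
    for x in range(5):
        q = np.linspace(-E, Q0[x], 400)             # allowed Q range for channel x
        w = np.exp(-C_Q * q**2)                     # alpha_k up to normalization (D-dependence dropped)
        w /= w.sum()                                # channel-wise normalization
        t_sum += np.sum(w * t_hw(E + q))
    return t_sum / 5.0                              # crude stand-in for the overall normalization

for E in (48.0, 51.0, 54.0):
    print(f"E = {E:4.1f} MeV:  T_HW = {t_hw(E):.3e}   T_ECC = {t_ecc(E):.3e}")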
§ RESULTS AND DISCUSSION
The calculations have been performed for different judiciously selected systems using the empirical channel coupling approach <cit.>, which considers the couplings to collective states of the colliding nuclei and multi-neutron transfer channels with Q > 0. To perform such calculations, the standard Woods-Saxon ion-ion nuclear potential with the Akyüz-Winther (AW) parametrization has been used. The AW parameters, namely the well depth (V_0), radius parameter (r_0), diffuseness (a), and Bass barrier curvature (ħω_b), are listed in Table <ref>. The parameters of the vibrational excitations are taken from the NRV experimental database <cit.>. Furthermore, the other parameter necessary for the ECC calculation is the stiffness, which was taken from the liquid drop model <cit.>. The deformation parameters for the rotational excitation in ^28Si are taken from Ref. <cit.>. A detailed discussion of each selected system is as follows.
§.§ ^28Si+^58,62,64Ni
Figure <ref>(a, b, c) depicts the comparison between the theoretical predictions (uncoupled, and channel coupling with and without neutron transfer) and the experimental fusion excitation functions for ^28Si+^58,62,64Ni <cit.>, respectively. Stefanini et al. <cit.> hinted that the cause of the enhancement may be the presence of the PQNT channel and called for rigorous theoretical calculations to confirm this. Therefore, we have performed robust theoretical calculations to elucidate the underlying mechanism. For the theoretical calculations, ^28Si is considered as a pure rotor with quadrupole deformation β_2 = -0.407 and hexadecapole deformation β_4 = +0.25 <cit.>, whereas vibrational couplings with stiffness parameters C = 7.149 (^58Ni), 7.164 (^62Ni), and 7.127 (^64Ni) are considered for all the target isotopes of Ni. One can notice that the uncoupled ECC calculations (without any coupling) are in reasonable agreement with the experimental data below the Coulomb barrier V_b (arrow in Fig. <ref>(a)), whereas they slightly overpredict the experimental data at above-barrier energies. Further inclusion of empirical couplings in the projectile-target nuclei, with and without the neutron transfer channel, overestimates the experimental data at below-barrier energies, whereas it explains them quite well at above-barrier energies. The predicted cross-sections are slightly higher than the experimental data in the below-barrier region, possibly due to the rotational couplings in the projectile and mutual excitations in the projectile-target nuclei.
However, the ECC predictions with and without neutron transfer couplings are almost identical for ^28Si+^58Ni, which might be due to the absence of any PQNT channel (see Table <ref>). It is imperative to mention that ^28Si+^58Ni possesses negative Q values for all the neutron transfer channels (up to 6n), as can be observed from Table <ref>. Therefore, even after consideration of the neutron transfer channels along with the channel couplings, no effect on the fusion cross-sections has been observed, and the predicted cross-sections coincide with the curve for channel couplings without neutron transfer. As a result, it can be concluded that transfer channels do not influence the fusion cross-sections in the ^28Si+^58Ni system.
Similarly, the ECC calculations were performed for the ^28Si+^62,64Ni systems, as shown in Fig. <ref>(b) and (c), respectively. One can observe that the ECC predictions without neutron transfer are in excellent agreement with the experimental fusion data for ^28Si+^62Ni, whereas they are lower than the experimental data for ^28Si+^64Ni. This means that the fusion data do not show any significant enhancement due to Q_+2n = +0.7 MeV for the ^28Si+^62Ni system, in contrast to ^28Si+^64Ni with Q_+2n = +2.6 MeV. This could be due to the fact that the Q_+2n value is around four times larger for the ^28Si+^64Ni than for the ^28Si+^62Ni reaction. This is also demonstrated theoretically using the ECC model calculations in Fig. <ref> for the ^28Si+^64Ni system. If the Q_+2n value is halved, i.e. Q_+2n = +2.6/2 = +1.3 MeV, the barrier distribution becomes slightly narrower and the barrier height increases (see Fig. <ref>(b)). As a result, the enhancement in the fusion cross-section is reduced and becomes much closer to the ECC calculations without neutron transfer. Moreover, if the Q_+2n value is doubled, i.e. Q_+2n = 2×(+2.6) = +5.2 MeV, the barrier distribution broadens and the barrier height is suppressed, as shown in Fig. <ref>(b). This reflects a stronger effect on the predicted sub-barrier fusion cross-sections and drastically increases the fusion enhancement, as shown in Fig. <ref>(a).
Furthermore, σ_fus and E_c.m. were scaled by the geometrical cross-section (πR^2) and the Bass-barrier height (V_B), respectively, as shown in Fig. <ref>(d). By doing so, the effect of different barrier heights and positions can be removed to make the comparison more apparent and to avoid any other effect. The radius (R) was calculated as R = 1.2(A_p^1/3+A_t^1/3), where A_p and A_t are the mass numbers of projectile and target, respectively. One can observe a considerable enhancement for ^28Si+^64Ni on the reduced scale as compared to ^28Si+^58,62Ni, reflecting the significant role of the PQNT channel in the former system.
§.§ ^30Si+^58,62,64Ni
Figure <ref>(a, b, c) shows the comparison between the experimental fusion EFs <cit.> and those calculated using the empirical channel coupling model with no coupling (uncoupled), and with and without neutron transfer channels, for the ^30Si+^58,62,64Ni systems. For the theoretical calculations, both the projectile and the target isotopes are considered as vibrators. It can be observed that the ECC calculations without neutron transfer couplings successfully explain the experimental data throughout the energy domain for all three systems. Moreover, no fusion enhancement was observed when the neutron transfer channels were taken into account, which might be due to the fact that all transfer-channel Q-values are negative (see Table <ref>) for the ^30Si-induced reactions, except for ^30Si+^58Ni (Q_-2n = +1.3 MeV). Furthermore, a comparison between all three systems on the reduced scale of σ_fus and E_c.m. shows an overlapping pattern in Fig. <ref>(d), revealing no effect of the 2n stripping transfer channel in the ^30Si+^58Ni system despite the presence of a PQNT channel.
One can conclude from the facts mentioned above that the transfer channels with positive Q-values for ^28Si+^62,64Ni are readily accounted for and explained by the ECC model calculations by considering two-neutron pickup transfer (^28Si → ^30Si). However, a similar approach does not apply to ^30Si+^58Ni despite its positive Q-value for two-neutron stripping (^30Si → ^28Si). Similarly, no significant effect of PQNT has been reported for the ^30Si+^156Gd <cit.> system despite a low Q_+2n = +0.8 MeV, using the ECC model.
§.§ ^28Si+^90,92,94,96Zr
In order to disentangle the role of the PQNT channels in sub-barrier fusion dynamics, the ^28Si+^90,92,94,96Zr systems <cit.> are selected, which present an admixture of positive and negative ground-state Q-values for the neutron transfer channels, as can be seen in Table <ref>. It is worth mentioning that Refs. <cit.> pointed out that robust theoretical calculations incorporating the multi-neutron transfer channels are needed, as the coupled-channel calculations failed to explain the sub-barrier fusion data. Therefore, we present detailed calculations using the ECC model, which can consider both the multi-neutron transfer channels and the inelastic excitations in the colliding nuclei. Rotational couplings in ^28Si and vibrational couplings in the target nuclei are adopted for the theoretical calculations.
Figure <ref>(a) shows the comparison between the experimental data <cit.> and the values predicted using no coupling (uncoupled), ECC without neutron transfer, and ECC with neutron transfer for the ^28Si+^90Zr system (all transfer Q-values are negative). The apparent signature of Fig. <ref>(a) is the significant enhancement in the sub-barrier fusion cross-sections as compared to the no-coupling limit (uncoupled). Another striking feature is that the ECC calculations without neutron transfer slightly overpredict the experimental fusion cross-sections below the Coulomb barrier, whereas they explain the data nicely at above-barrier energies.
However, the scenario is different for the ^28Si+^92Zr system, which has Q_+2n = +3.3 MeV. Here, the ECC predictions with neutron transfer show a larger enhancement than the ECC predictions without transfer due to the presence of Q_+2n > 0 (see Fig. <ref>(b)). Moreover, these predictions slightly underpredict the experimental data and call for a deeper understanding. One can say that the fusion enhancement due to neutron transfer is smaller for systems with a weak coupling to collective states.
Furthermore, a comparison is made between the experimental data <cit.> and the ECC predictions for the ^28Si+^94,96Zr reactions, which have positive Q-values for up to four- and six-neutron pickup transfer channels, respectively (see Table <ref>).
In the ECC model, the neutron transfer channels with positive Q-values, along with the collective excitations in the participating nuclei, are taken into account one after the other (see Fig. <ref>(c,d)). It can be observed from the same figures that the ECC predictions with up to 2n transfer reasonably explain the experimental fusion data throughout the energy range. This supports the conclusion that the role of up to 2n transfer with Q > 0 in sub-barrier fusion is significant for these reactions, whereas the further inclusion of the 3n and 4n transfer channels does not improve the description much, indicating that multi-neutron transfer (more than 2n) plays no role in the fusion data for these systems. Therefore, the curves for 1-3n and 1-4n coincide with the 1-2n transfer channel calculations.
Similar observations were also reported recently for the ^28Si+^158Gd <cit.>, ^40Ca+^70Zn <cit.>, and ^32S+^90,96Zr <cit.> systems.
It is also worth mentioning that the transfer probability decreases with an increasing number of transferred neutrons; hence, a significant effect is visible only for one- or two-neutron transfer channels.
One can also see that the enhancement in the fusion cross-section is significantly larger for ^28Si+^96Zr as compared to ^28Si+^90Zr, signifying the crucial role of two-neutron pickup transfer with positive Q-value in the former system.
§.§ ^28Si+^144Nd
To further decipher the impact of multi-neutron transfer on sub-barrier fusion dynamics, a relatively heavy (Z_pZ_t = 840) system with the same projectile ^28Si and a spherical target ^144Nd is chosen, with positive Q-values for up to four-neutron pickup transfer (see Table <ref>). The experimental data have been taken from Ref. <cit.>. In those studies <cit.>, the coupled-channel approach in the CCNSC/CCMOD code broke down when the 2n transfer couplings and higher-order excitations in the colliding partners were considered, and the experimental data could not be explained.
Thus, the empirical channel coupling calculations are performed by considering no coupling (uncoupled), couplings without transfer, and couplings with transfer channels, as shown in Fig. <ref>. One may observe that the main contribution to the fusion enhancement comes from the 1n+2n channels, whereas the further inclusion of transfer channels (3n and 4n) does not affect the fusion enhancement probability. This may occur because coupling to a neutron transfer channel with positive Q-value significantly affects the fusion probability only if the neutron exchange happens before the Coulomb barrier is overcome.
Thus, it can be concluded that the transfer of only a few neutrons (one or two) influences the transfer probability to a great extent, whereas multi-neutron (>2) transfers do not alter it much.
The sub-barrier fusion cross-sections are quite sensitive to the β_2 of the participating nuclei. Therefore, we have checked β_2 before and after the 2n transfer for the systems with a PQNT channel; the deformation parameters are taken from Ref. <cit.>. Sargsyan et al. <cit.> proposed that reactions with +Q_2n should show an enhancement in the sub-barrier fusion cross-sections if the β_2 of the colliding partners increases and the mass asymmetry decreases after 2n transfer. They also pointed out that neutron transfer weakly influences the sub-barrier fusion cross-section if the deformations do not change or decrease slightly.
Therefore, we show the deformation (β_2) of the nuclei before and after the 2n transfer for the reactions with +Q_2n (see Table <ref>) as a function of the first 2^+ energy of the nuclei (before 2n transfer). As can be noticed from Fig. <ref>, for ^28Si+^64Ni and ^28Si+^94,96Zr the deformation increases after 2n transfer and an enhancement in the sub-barrier fusion cross-section is also observed, following the systematics proposed by Sargsyan et al. <cit.>. In contrast, only a slight increase in β_2 is observed for ^62Ni, and no effect of PQNT is observed in the ^28Si+^62Ni reaction. Moreover, ^30Si+^58Ni and ^28Si+^144Nd do not follow the systematics of Sargsyan et al. <cit.>; similar behaviour was also reported for ^30Si+^142Ce <cit.>. It is to be noted that the mass asymmetry (η) decreases for all the systems after the 2n transfer, except for ^30Si+^58Ni where +Q_2n exists for the stripping channel.
§ GRAZING CALCULATIONS
The GRAZING code <cit.> is designed to study different observables, such as the mass and charge, and the energy and angular distributions of MNT products in the grazing region of heavy-ion collisions. It is based on the semiclassical model of Aage Winther <cit.>, in which the theoretical description is an approximate solution of the CC equations governed by the exchange of particles between the nuclei in a mean-field approximation. Furthermore, the collective excitations and few-nucleon transfers between the colliding partners are incorporated using form factors.
The nucleon transfer probabilities within this model depend on the single-particle level densities of the nucleons, which are defined in terms of the free model parameters δ^n (for neutrons) and δ^p (for protons). These free parameters can be tuned to reproduce the measured data within the limits δ^n≥ 5 and δ^p≤ 10, whereas the default values are δ^p = δ^n = 8. In the present calculations, the default values of these free parameters and the parameters of the collective excitations are used.
The GRAZING calculations are performed for the ^28Si+^64Ni, ^28Si+^94,96Zr, and ^28Si+^144Nd reactions, where evidence of 2n transfer is observed. Thus, the cross-sections of the target-like fragments, e.g., ^62Ni, ^92,94Zr, and ^142Nd, which are produced after 2n pickup by the respective projectiles from the targets, are calculated and shown in Fig. <ref>(a-d), respectively.
The predicted cross-sections are broadly close to the difference between the ECC predictions with (2n) and without neutron transfer, which reflects the consistency of the calculations performed with the two different models.
Moreover, the predicted cross-sections of the TLFs from the above-mentioned reactions can also be used as input for their experimental measurement.
Furthermore, such an advanced understanding of the MNT characteristics of such stable nuclei can be applied to low-intensity radioactive secondary beams <cit.>.
§ FUSION BARRIER PARAMETERS
The fusion observables, such as the barrier height V_b and radius R_b, are substantial quantities that benchmark the reliability of theoretical models and assure the quality of measured data. These experimental barrier parameters are derived from the experimental fusion data in the literature <cit.> at above-barrier energies by comparing the derived slopes and intercepts (from the σ_fus vs. 1/E_c.m. relation) with Eq. <ref>.
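As an illustration of this extraction procedure, the short sketch below fits hypothetical above-barrier data, assuming the classical sharp cut-off relation σ_fus = πR_b^2(1-V_b/E_c.m.) commonly used for such fits (the specific expression referred to as Eq. <ref> above is not reproduced here); the data points are invented for demonstration only.

```python
import numpy as np

# Hypothetical above-barrier fusion data (E_cm in MeV, sigma in mb);
# real values should be taken from the cited experiments.
E_cm  = np.array([58.0, 62.0, 66.0, 70.0, 74.0])
sigma = np.array([120.0, 260.0, 380.0, 480.0, 570.0])

# Assume sigma = pi R_b^2 (1 - V_b / E_cm), i.e. sigma is linear in 1/E_cm
# with intercept pi R_b^2 and slope -pi R_b^2 V_b.
slope, intercept = np.polyfit(1.0 / E_cm, sigma, 1)

pi_Rb2 = intercept                      # mb
V_b = -slope / intercept                # MeV
R_b = np.sqrt(pi_Rb2 / np.pi / 10.0)    # fm, using 1 mb = 0.1 fm^2

print(f"V_b = {V_b:.2f} MeV, R_b = {R_b:.2f} fm")
```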
These potential barriers are an admixture of the long-ranged repulsive Coulomb term and a short-ranged attractive nuclear term. Thus, on the theoretical front, we calculated the nuclear part of the ion-ion potential using different proximity potentials, such as Prox 77, Prox 88, Prox 00, Prox 10, etc., and the addition of the Coulomb term gives the total interaction potential V(r) (in MeV).
Afterward, the fusion barrier parameters were extracted using the pocket formulas of the different versions of the proximity potentials (Prox 77, Prox 88, Prox 00, Prox 10 <cit.>), the Bass potential <cit.>, and two parametrized forms by Kumari et al. <cit.> and Zhang et al. <cit.>, and compared with the experimental parameters, as listed in Table <ref>.
Furthermore, to assess the quality and predictive power of the different forms of potentials, we determine the percentage difference of the fusion barrier height (Δ V_b) as a function of Z_pZ_t/(A_p^1/3+A_t^1/3), where Z_p, Z_t and A_p, A_t are the atomic and mass numbers of the projectile and target, respectively, defined as follows:
Δ V_b (in %) = (V_b^theor-V_b^expt)/V_b^expt× 100
where V_b^expt and V_b^theor are the experimentally derived and theoretically calculated fusion barrier heights. One can observe from Table <ref> that the different potentials agree with the experimentally derived barrier heights within a 5 % difference (see Fig. <ref>) for all 11 systems, except the Prox 77 potential, which reproduces the experimental barrier heights within a 10 % difference; we do not find any correlation with the asymmetry parameter of the involved reactions.
Similarly, the percentage difference for the barrier position was determined, and it was found that the theoretical models can reproduce the barrier positions within 20 % deviation. This can be attributed to the large variation involved in extracting barrier positions experimentally. Also, it has been realized that certain factors, such as the addition or removal of neutrons <cit.> and the deformation (oblate/prolate) <cit.> of the colliding nuclei, change the fusion barrier position. This comparison between experimental and theoretical data could be helpful for further refinements of the different parametrized forms of V_b and the radius R_b, and it can also be used to predict theoretical fusion cross-sections.
§ CONCLUSION
The influence of a few neutron transfer channels with Q > 0 on sub-barrier fusion cross-sections is studied for several systems within the ECC model framework. A good agreement between the model calculations and the experimental data is achieved. It is found that the experimental data are well reproduced for the ^28Si+^62Ni, ^30Si+^58,62,64Ni, and ^28Si+^90Zr systems by considering inelastic excitations, such as the rotational coupling in ^28Si and the vibrational couplings in ^30Si and in the target nuclei involved in these reactions. However, the fusion data can be explained only by incorporating the neutron transfer channels along with the inelastic excitations for the ^28Si+^64Ni, ^28Si+^92,94,96Zr, and ^28Si+^144Nd reactions, which have PQNT channels. Therefore, it can be concluded that the inclusion of up to 2n transfer is sufficient to explain the fusion data for such reactions, and neutron transfer beyond 2n is found to be insignificant despite the presence of PQNT channels. Furthermore, GRAZING calculations were performed to estimate the cross-sections of the target-like fragments after 2n transfer for the reactions in which the role of 2n transfer is found to be significant. These calculations are also foreseen as input for MNT and quasi-elastic experiments in the near future.
The fusion barrier parameters, such as the barrier height and radius, were derived using the experimental data from the literature for several reactions and compared with different potential models. It is found that all potential models agree with the experimentally derived barrier heights within a 5 % difference for all 11 systems, except the Prox 77 potential, which reproduces the experimental barrier heights within a 10 % difference.
The percentage difference for the barrier position between model predictions and experiment was within 20 %.
10
Watanabe2015 Y. X. Watanabe, Y. H. Kim, S. C. Jeong, Y. Hirayama, N. Imai, H. Ishiyama, H. S. Jung, H. Miyatake, S. Choi, J. S. Song et al., https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.172503Phys. Rev. Lett. 115, 172503 (2015).
Commara2000 M. La Commara, J. Gómez del Campo, A. D'Onofrio, A. Gadea, M. Glogowski, P. Jarillo-Herrero, N. Belcari, R. Borcea, G. de Angelis, C. Fahlander et al., https://www.sciencedirect.com/science/article/pii/S0375947499008143Nucl. Phys. A 669, 43 (2000).
Tea2022 T. Mijatović, https://www.frontiersin.org/articles/10.3389/fphy.2022.965198/fullFront. Phys. 10, 965198 (2022).
Jiang2021 C. L. Jiang, B. B. Back, K. E. Rehm, K. Hagino, G. Montagnoli, and A. M. Stefanini, https://link.springer.com/article/10.1140/epja/s10050-021-00536-2Eur. Phys. J. A 57, 235 (2021).
Back2014 B. B. Back, H. Esbensen, C. L. Jiang, and K. E. Rehm, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.317Rev. Mod. Phys. 86, 317 (2014).
Beckerman1988 M. Beckerman, https://iopscience.iop.org/article/10.1088/0034-4885/51/8/001Rep. Prog. Phys. 51, 1047 (1988).
Dasgupta1998 M. Dasgupta, D. J. Hinde, N. Rowley, and A. M. Stefanini, https://www.annualreviews.org/doi/10.1146/annurev.nucl.48.1.401Annu. Rev. Nucl. Part. Sci. 48, 401 (1998).
Chauhan2020 A. Chauhan, R. Prajapat, G. Sarkar, M. Maiti, R. Kumar, Malvika, Gonika, J. Gehlot, S. Nath, A. Parihari et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.102.064606Phys. Rev. C 102, 064606 (2020).
Prajapat2022 R. Prajapat, M. Maiti, R. Kumar, M. Sagwal, Gonika, C. Kumar, R. Biswas, J. Gehlot, S. Nath, and N. Madhavan, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.105.064612Phys. Rev. C 105, 064612 (2022).
Deepak2021 D. Kumar, M. Maiti, R. Prajapat, A. Chauhan, R. Biswas, J. Gehlot, S. Nath, R. Kumar, N. Madhavan, G. N. Jyothi et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.104.014602Phys. Rev. C 104, 014602 (2021).
Stefanini2021 A. M. Stefanini, G. Montagnoli, M. D'Andrea, M. Giacomin, C. Dehman, R. Somasundaram, V. Vijayan, L. Zago, G. Colucci, F. Galtarossa
et al., https://iopscience.iop.org/article/10.1088/1361-6471/abe8e2/metaJ. Phys. G: Nucl. Part. Phys. 48, 055101 (2021).
Tripathi2001 V. Tripathi, L. T. Baby, J. J. Das, P. Sugathan, N. Madhavan, A. K. Sinha, P. V. Madhusudhana Rao, S. K. Hui, R. Singh, and K. Hagino, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.65.014614Phys. Rev. C 65, 014614 (2001).
Esbensen2007 H. Esbensen and Ş. Mişicu, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.76.054609Phys. Rev. C 76, 054609 (2007).
Zagrebaev2007 V. I. Zagrebaev, V. V. Samarin, and W. Greiner, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.75.035809 Phys. Rev. C 75, 035809 (2007).
Zagrebaev2004 V. I. Zagrebaev and V. V. Samarin, https://link.springer.com/article/10.1134/1.1788037 Phys. Atom. Nucl. 67, 1462, (2004).
Umar2012 A. S. Umar, V. E. Oberacker, and C. J. Horowitz, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.85.055801 Phys. Rev. C 85, 055801, (2012).
Scarlassara2000 F. Scarlassara, S. Beghini, G. Montagnoli, G. F. Segato, D. Ackermann, L. Corradi, C. J. Lin, A. M. Stefanini, and L. F. Zheng, https://www.sciencedirect.com/science/article/pii/S0375947400000567Nucl. Phys. A 672, 99 (2000).
Stefanini2013 A. M. Stefanini, G. Montagnoli, F. Scarlassara, C. L. Jiang, H. Esbensen, E. Fioretto, L. Corradi, B. B. Back, C. M. Deibel, and B. Di Giovine, https://link.springer.com/article/10.1134/1.1788037 Eur. Phys. J. A 49, 63 (2013).
Jiang2014 C. L. Jiang, K. E. Rehm, B. B. Back, H. Esbensen, R. V. F. Janssens, A. M. Stefanini, and G. Montagnoli, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.89.051603 Phys. Rev. C 89, 051603(R) (2014).
Vandenbosch1997 R. Vandenbosch, A. A. Sonzogni, and J. D. Bierman, https://iopscience.iop.org/article/10.1088/0954-3899/23/10/019/meta J. Phys. G: Nucl. Part. Phys. 23, 1303 (1997).
Kolata2012 J. J. Kolata, A. Roberts, A. M. Howard, D. Shapira, J. F. Liang, C. J. Gross, R. L. Varner, Z. Kohley, A. N. Villano, H. Amro et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.85.054603 Phys. Rev. C 85, 054603 (2012).
Henning1987 W. Henning, F. L. H. Wolfs, J. P. Schiffer, and K. E. Rehm, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.58.318Phys. Rev. Lett. 58, 318 (1987).
Prajapat2023 R. Prajapat, M. Maiti, R. Kumar, M. Sagwal, Gonika, C. Kumar, R. Biswas, J. Gehlot, S. Nath, and N. Madhavan, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.107.064616 Phys. Rev. C 107, 064616 (2023).
Kaur2024 A. Kaur, A. Kumar, C. Sharma, N. Dhanda, Raghav, N. Madhavan, S. Nath, J. Gehlot, Gonika, C. Kumar et al., https://www.sciencedirect.com/science/article/abs/pii/S037594742300194X Nucl. Phys. A 1024, 122791 (2024)
Beckerman1980 M. Beckerman, M. Salomaa, A. Sperduto, H. Enge, J. Ball, A. DiRienzo, S. Gazes, Y. Chen, J. D. Molitoris, and Mao Nai-feng, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.45.1472Phys. Rev. Lett. 45, 1472 (1980).
Kohley2011 Z. Kohley, J. F. Liang, D. Shapira, R. L. Varner, C. J. Gross, J. M. Allmond, A. L. Caraley, E. A. Coello, F. Favela, K. Lagergren et al., https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.107.202701Phys. Rev. Lett. 107, 202701 (2011).
Jia2012 H. M. Jia, C. J. Lin, F. Yang, X. X. Xu, H. Q. Zhang, Z. H. Liu, L. Yang, S. T. Zhang, P. F. Bao, and L. J. Sun, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.86.044621Phys. Rev. C 86, 044621 (2012).
Stelson1988 P. H. Stelson, https://www.sciencedirect.com/science/article/abs/pii/0370269388916474Phys. Lett. B 205, 190 (1988).
Stelson1990 P. H. Stelson, H. J. Kim, M. Beckerman, D. Shapira, and R. L. Robinson, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.41.1584Phys. Rev. C 41, 1584 (1990).
Rowley1992N. Rowley, I.J. Thompson, M.A. Nagarajan, https://www.sciencedirect.com/science/article/abs/pii/037026939290638KPhys. Lett. B 282, 276 (1992).
Zagrebaev2001 V. I. Zagrebaev, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.64.034606 Phys. Rev. C 64, 034606 (2001); https://journals.aps.org/prc/abstract/10.1103/PhysRevC.67.061601 Phys. Rev. C 67, 061601(R) (2003); http://nrv.jinr.ru/nrv/webnrv/fusion/http://nrv.jinr.ru/nrv/webnrv/fusion/
Zhang2010H. Q. Zhang, C. J. Lin, F. Yang, H. M. Jia, X. X. Xu, Z. D. Wu, F. Jia, S. T. Zhang, Z. H. Liu, A. Richard et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.82.054609Phys. Rev. C 82, 054609 (2010).
Khushboo2019 Khushboo, N. Madhavan, S. Nath, A. Jhingan, J. Gehlot, B. Behera, S. Verma, S. Kalkal, and S. Mandal, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.100.064612Phys. Rev. C 100, 064612 (2019).
Adel2012 A. Adel, V.A. Rachkov, A.V. Karpov, A.S. Denikin, M. Ismail, W.M. Seif, A.Y. Ellithi, https://www.sciencedirect.com/science/article/abs/pii/S0375947412000061Nucl. Phys. A 876, 119 (2012).
Prajapat2020_LiY R. Prajapat and M. Maiti, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.101.024608Phys. Rev. C 101, 024608 (2020).
Prajapat2020_LiZr R. Prajapat and M. Maiti, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.101.064620Phys. Rev. C 101, 064620 (2020).
Prajapat2021_6Li+Y R. Prajapat and M. Maiti, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.103.034620Phys. Rev. C 103, 034620 (2021).
DKumar2017 D. Kumar, M. Maiti, and S. Lahiri, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.96.014617Phys. Rev. C 96, 014617 (2017).
Prajapat2020_PEQ R. Prajapat, M. Maiti, D. Kumar, and A. Chauhan, https://iopscience.iop.org/article/10.1088/1402-4896/ab784e/metaPhys. Scr. 95, 055306 (2020).
Zagrebaev2008V. Zagrebaev and W. Greiner, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.101.122701Phys. Rev. Lett. 101, 122701 (2008).
Son2023Y. Son, Y. H. Kim, Y. Cho, S. Choi, J. Park, S. Bae, K. I. Hahn, A. Navin, A. Lemasson, M. Rejmund et al., https://www.sciencedirect.com/science/article/pii/S0168583X23001544?casa_token=k62XWFt77V0AAAAA:Fif5O2S66RdXJSUosJsaMU_lusHwiHVGM8Hzm7i5dTtL_37OhUSbnY128EI73ApDOWAoCZiIQ9ENucl. Instrum. Methods Phys. Res. B 540, 234 (2023).
Watanable2015Y. X. Watanabe, Y. H. Kim, S. C. Jeong, Y. Hirayama, N. Imai, H. Ishiyama, H. S. Jung, H. Miyatake, S. Choi, J. S. Song et al., https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.172503Phys. Rev. Lett. 115, 172503 (2015).
Kubo2016T. Kubo, https://www.sciencedirect.com/science/article/abs/pii/S0168583X16001713Nucl. Instrum. Methods Phys. Res. B 376, 1 (2016).
Mijatovic2016T. Mijatović, S. Szilner, L. Corradi, D. Montanari, G. Pollarolo, E. Fioretto, A. Gadea, A. Goasduff, D. Jelavić Malenica et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.94.064616Phys. Rev. C 94, 064616 (2016).
Galtarossa2018 F. Galtarossa, L. Corradi, S. Szilner, E. Fioretto, G. Pollarolo, T. Mijatović, D. Montanari, D. Ackermann, D. Bourgin, S. Courtin et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.97.054606Phys. Rev. C 97, 054606 (2018).
Adamian2010 G. G. Adamian, N. V. Antonenko, V. V. Sargsyan and W. Scheid, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.81.057602Phys. Rev. C 81, 057602 (2010).
Welsh2017 T. Welsh, W. Loveland, R. Yanez, J. S. Barrett, E. A. McCutchan, A. A. Sonzogni, T. Johnson, S. Zhu, J. P. Greene, A. D. Ayangeakaa et al., https://www.sciencedirect.com/science/article/pii/S0370269317304070Phys. Lett. B 771, 119 (2017).
Ichikawa2005 T. Ichikawa, A. Iwamoto, P. Möller, and A. J. Sierk, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.71.044608Phys. Rev. C 71, 044608 (2005).
Gharaei2019 R. Gharaei and G. L. Zhang,
https://www.sciencedirect.com/science/article/pii/S0375947419301733Nucl. Phys. A 990, 294 (2019).
Ghodsi2013 O. N. Ghodsi and F. Lari,
https://www.worldscientific.com/doi/abs/10.1142/S0217732313501162 Mod. Phys. Lett. A 28, 1350116 (2013).
Dutt2010 I. Dutt and R. K. Puri, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.81.064608 Phys. Rev. C 81, 064608 (2010); https://journals.aps.org/prc/abstract/10.1103/PhysRevC.81.064609 Phys. Rev. C 81, 064609 (2010).
Bass1977 R. Bass, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.39.265 Phys. Rev. Lett. 39, 265 (1977).
Kumari2015 R. Kumari and R. K.Puri,
https://www.sciencedirect.com/science/article/pii/S0375947414005442Nucl. Phys. A 933, 135 (2015).
Zhang2016 G. L. Zhang and M. Pan, https://www.worldscientific.com/doi/abs/10.1142/S0218301316500828 Int. J. Mod. Phys. E 25, 1650082 (2016).
Balantekin1998 A. B. Balantekin and N. Takigawa, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.70.77Rev. Mod. Phys. 70, 77 (1998).
HillWheeler1953D. L. Hill and J. A. Wheeler, https://journals.aps.org/pr/abstract/10.1103/PhysRev.89.1102 Phys. Rev. 89, 1102 (1953).
Wong1973 C. Y. Wong, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.31.766 Phys. Lett. 31, 766 (1973).
Oertzcn1987W. von Oertzcn, H.G. Bohlen, B. Gebauer, R. Kiinkel, F. Piihlhofer, and D. Schiill, https://link.springer.com/article/10.1007/BF01289551 Z. Physik A - Atomic Nuclei 326, 463 (1987).
Henning1978W. Henning, Y. Eisen, H.-J. Körner, D. G. Kovar, J. P. Schiffer, S. Vigdor, and B. Zeidman, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.17.2245 Phys. Rev. C 17, 2245(R) (1978).
NRV NRV: Near and sub-barrier fusion reactions of atomic nuclei, http://nrv.jinr.ru/nrv/webnrv/fusion/http://nrv.jinr.ru/nrv/webnrv/fusion/
Bohr1998 A. Bohr and B. R. Mottelson, Nuclear Structure, Vol. 2 (World Scientific, Singapore, 1998).
Stefanini1984 A. M. Stefanini, G. Fortuna, A. Tivelli, W. Meczynski, S. Beghini, C. Signorini, S. Lunardi, and M. Morando, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.30.2088Phys. Rev. C 30, 2088 (1984).
Kalkal2010 S. Kalkal, S. Mandal, N. Madhavan, E. Prasad, S. Verma, A. Jhingan, R. Sandal, S. Nath, J. Gehlot, B. R. Behera et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.81.044610Phys. Rev. C 81, 044610 (2010).
Khushboo2017 Khushboo, S. Mandal, S. Nath, N. Madhavan, J. Gehlot, A. Jhingan, N. Kumar, T. Banerjee, G. Kaur, K. R. Devi et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.96.014614Phys. Rev. C 96, 014614 (2017).
Sinha1997 A. K. Sinha, L. T. Baby, N. Badiger, J. J. Das, S. K. Hui, D. O. Katari, R. G. Kulkarni, N. Madhavan, P. V. Madhusudhana Rao, I. Majumdar et al., https://iopscience.iop.org/article/10.1088/0954-3899/23/10/022/meta J. Phys. G: Nucl. Part. Phys. 23, 1331 (1997).
Raman2001 S. Raman, C. W. Nestor, JR., and P. Tikkanen, At. Data Nucl. Data Tables 78, 1 (2001).
Sargsyan2012 V. V. Sargsyan, G. G. Adamian, N. V. Antonenko, W. Scheid, and H. Q. Zhang, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.85.024616Phys. Rev. C 85, 024616 (2012).
Grazing_code http://personalpages.to.infn.it/ nanni/grazing/http://personalpages.to.infn.it/ nanni/grazing/
Winther1994 A. Winther, http://personalpages.to.infn.it/ nanni/grazing/Nucl. Phys. A 572, 191 (1994); A 594 (1995) 203.
Michimasa2014 S. Michimasa, Y. Yanagisawa, K. Inafuku, N. Aoi, Z. Elekes, Zs. Fülöp, Y. Ichikawa, N. Iwasa,
K. Kurita, M. Kurokawa et al., https://journals.aps.org/prc/abstract/10.1103/PhysRevC.89.054307Phys. Rev. C 89 (2014) 054307.
Nicolis2004 N. G. Nicolis, https://link.springer.com/article/10.1140/epja/i2003-10211-3Eur. Phys. J. A 21, 265 (2004).
Puri1998 R. K. Puri, M. K. Sharma and R. K. Gupta, https://link.springer.com/article/10.1007/s100500050178Eur. Phys. J. A 3, 277 (1998).
Denisov2007 V. Yu. Denisov and N. A. Pilipenko, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.76.014602Phys. Rev. C 76, 014602 (2007).
Physical Modelling of Piano Sound
Haifan Xie[Third-year master student, School of Accounting, Guangdong University of Foreign Studies, Guangzhou, China. Email: [email protected].]
Received XXX; accepted ZZZ
====================================================================================================================================================
§ ABSTRACT
This paper develops a comprehensive physical model and numerical implementation
schemes for a grand piano, building upon the prior works of Chabassier
et al. The model encompasses various subsystems, including hammer
felt, hammer shank, string, soundboard, air and room barriers, each
modelled in three dimensions to approach their realistic dynamics.
A general framework for 3D elastic solids accounting for prestress
and prestrain is introduced, particularly addressing the complexities
of prestressed piano strings. The study also examines coupling between
subsystems through the mechanisms of surface force transmission and displacement/velocity
continuity. To facilitate numerical simulations, strong PDEs are translated
into weak ODEs via a flexible space discretization approach. Modal
transformation of system ODEs is then employed to decouple and reduce
DOFs, and an explicit time discretization scheme is customized for
generating digital audio in the time domain. The study concludes with
a discussion of the piano model’s capabilities, limitations, and potential
future enhancements.
§ INTRODUCTION
This study aims to present a detailed physical model of a grand piano
and its numeric implementation schemes. The main features of our model
and numeric schemes include:
* As 3D as possible. Subsystems including hammer felt, hammer shank,
soundboard, air and room barriers are all fully 3D geometrically modelled.
This also means that, except for rigid bodies (hammer shank), their 3-directional
displacements fully vary with their 3D positions. Our geometric model
of piano strings is semi-3D, viz. a cylinder geometry described by
coordinates of one axis, but accounts for the strings' 3-directional
displacements.
* A general framework for 3D elastic solid. Rooted in the 3D elasticity
theory, this framework not only follows the classic stress-strain
relationship, but also introduces a simple yet intuitive model for
prestrain and prestress. For significantly prestressed structures
like piano strings, we offer a more “naturally linear” prestress
model compared with the elaborated geometrically exact nonlinear stiff
string model in <cit.>.
* Consideration of coupling between subsystems of the piano. Two mechanisms
are considered here: surface force transmission and displacement/velocity
continuity. Furthermore, nonlinear coupling due to collision rather
than fixation is considered in the interaction between hammer felt
and string, and discussed (but not implemented) in the interaction
between string and two bridge pins.
* A straightforward framework for weak forms and system ordinary differential
equations (ODE). For the sake of numeric simulation via the finite
element method (FEM), we transform the strong form partial differential
equations (PDE) into first or second order system ODEs for each subsystem
of the piano. This is achieved via a flexible space discretization
framework, which can easily incorporate Dirichlet boundary conditions.
* Modal transformation for solving system ODEs. By solving generalized
eigenvalue problems of the mass and stiffness matrices, we decouple
the coupled system ODEs and significantly reduce the large vector
of degrees of freedom (DOF) into a much smaller vector of modal DOFs.
* An explicit time discretization scheme customized for solving modal
ODEs exhibiting source term coupling. This scheme is relatively efficient
in that no intermediate steps are evaluated and no inversion of large non-diagonal
matrices is needed. This scheme is relatively accurate in that the right-hand-side
source terms of the next time step are used for approximate integration
whenever possible, and more than one iteration of numeric integration
at each time step may be run to improve convergence.
The rest of this paper starts from a basic 3D elastic prestressed
solid model in section 2, which will be applied in several cases later.
Subsequently sections 3 to 5 introduce models for subsystems of the
piano: soundboard, string, air and room barriers. Section 6 investigates
the mechanisms for coupling between these subsystems, where hammer
felt and shank models are introduced. Section 7 presents the modal
superposition method and explicit time stepping schemes to solve the
derived system ODEs numerically, and a comprehensive computation procedure
for running the simulation. Section 8 gives a brief summary of the
whole model and discusses current limitations and future outlooks.
Here we explain some notation patterns that will appear throughout
this paper. If not explicitly specified, symbols defined apply only
to the current section. In cases where avoiding symbol conflict is
necessary, superscripts ^(a),^(b),^(c),^(d),^(e),^(f)
refer to the string, soundboard, air, hammer shank, hammer felt parts,
room barriers of a piano physical system respectively. For the convenience
of notation, (x,y,z) and (x_1,x_2,x_3), (u,v,w) and
(u_1,u_2,u_3) are used interchangeably.
§ PRELIMINARY: 3D LINEAR ELASTIC SOLID WITH PRESTRESS
In this study, the physical models of piano strings and soundboard
are based on the full or reduced versions of the linear theory of
elasticity <cit.>. In a 3D Cartesian coordinate
system (x,y,z), consider a material defined over a space Ω⊂ℝ^3
with a boundary Γ⊂ℝ^3, with homogenous or
heterogenous density ρ(x,y,z), and dynamic displacements in
the 3 directions as
u(x,y,z,t)=[u(x,y,z,t),v(x,y,z,t),w(x,y,z,t)]^⊤.
Based on the displacement field, we shall perform a force analysis
of the material, considering two kinds of forces: surface force and
body force.
Stress is a major source of surface force for elastic material, and
can be derived from the stress-strain relation. The strain matrix
(second-order symmetric tensor) is expressed by the gradients of displacements
as
ϵ=[[ ϵ_11 ϵ_12 ϵ_13; ϵ_12 ϵ_22 ϵ_23; ϵ_13 ϵ_23 ϵ_33 ]]=[[ ∂_xu 1/2(∂_yu+∂_xv) 1/2(∂_zu+∂_xw); 1/2(∂_yu+∂_xv) ∂_yv 1/2(∂_zv+∂_yw); 1/2(∂_zu+∂_xw) 1/2(∂_zv+∂_yw) ∂_zw ]],
which is the linearized Green-Lagrange strain. It can also be written
in vector form as
ϵ=[[ ϵ_11; ϵ_22; ϵ_33; 2ϵ_12; 2ϵ_13; 2ϵ_23 ]]=[[ ∂_xu; ∂_yv; ∂_zw; ∂_yu+∂_xv; ∂_zu+∂_xw; ∂_zv+∂_yw ]]=∑_i=1^3H_i∇ u_i,
where
H_1=[[ 1 0 0; 0 0 0; 0 0 0; 0 1 0; 0 0 1; 0 0 0 ]], H_2=[[ 0 0 0; 0 1 0; 0 0 0; 1 0 0; 0 0 0; 0 0 1 ]], H_3=[[ 0 0 0; 0 0 0; 0 0 1; 0 0 0; 1 0 0; 0 1 0 ]].
From the generalized Hooke's law, the stress-strain relationship is
(here the stress is Cauchy stress)
σ=[[ σ_11; σ_22; σ_33; σ_12; σ_13; σ_23 ]]=Dϵ, D=[[ D_11 D_12 D_13 D_14 D_15 D_16; D_22 D_23 D_24 D_25 D_26; D_33 D_34 D_35 D_36; D_44 D_45 D_46; D_55 D_56; Sym. D_66 ]],
and the stress matrix (second-order symmetric tensor) is defined as
σ=[[ σ_11 σ_12 σ_13; σ_12 σ_22 σ_23; σ_13 σ_23 σ_33 ]].
As shown in figure <ref>, there are 9 stress forces
acting on the 3 positive surfaces (as well as 9 stress forces on the
3 negative surfaces not shown) of an infinitesimal volume of material.
D is called the constitutive matrix and its inverse
D^-1 is called the compliance matrix. The i th
row or column of the stress matrix can be expressed as
σ_i=∑_j=1^3A_ij∇ u_j, A_ij=H_i^⊤DH_j, A=[[ A_1; A_2; A_3 ]]=[[ A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33 ]],
where block matrix A is symmetric. As σ_i
represents surface forces, converting it to body force would result
in ∇·σ_i.
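As a consistency check of this notation (purely illustrative, with an isotropic D standing in for the general constitutive matrix), the following sketch assembles the strain vector ϵ=∑_i H_i∇ u_i and the blocks A_ij=H_i^⊤DH_j, and verifies that the 9×9 block matrix A is symmetric.

```python
import numpy as np

# H_i matrices mapping the displacement gradients to the strain vector
# [eps_11, eps_22, eps_33, 2*eps_12, 2*eps_13, 2*eps_23]^T.
H1 = np.array([[1,0,0],[0,0,0],[0,0,0],[0,1,0],[0,0,1],[0,0,0]], float)
H2 = np.array([[0,0,0],[0,1,0],[0,0,0],[1,0,0],[0,0,0],[0,0,1]], float)
H3 = np.array([[0,0,0],[0,0,0],[0,0,1],[0,0,0],[1,0,0],[0,1,0]], float)
H = [H1, H2, H3]

# Illustrative isotropic constitutive matrix (placeholder values);
# any 6x6 symmetric D can be used here instead.
E, nu = 10e9, 0.3
lam, mu = E*nu/((1+nu)*(1-2*nu)), E/(2*(1+nu))
D = np.diag([2*mu + lam]*3 + [mu]*3)
D[:3, :3] += lam * (1 - np.eye(3))

# Example displacement gradients grad(u_i) at one material point.
grad_u = [np.array([1e-4, 2e-4, 0.0]),
          np.array([0.0, -1e-4, 3e-4]),
          np.array([2e-4, 0.0, 1e-4])]

# Strain vector eps = sum_i H_i grad(u_i); stress vector sigma = D eps.
eps = sum(Hi @ gi for Hi, gi in zip(H, grad_u))
sigma = D @ eps

# Blocks A_ij = H_i^T D H_j; A is 9x9 and symmetric because D is symmetric.
A = np.block([[H[i].T @ D @ H[j] for j in range(3)] for i in range(3)])
assert np.allclose(A, A.T)
print(eps, sigma, A.shape)
```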
Prestress is considered as the initial stress of material in static
equilibrium, which is not included in the above analysis of stress.
Incorporating prestress into our analysis would require some additional
work as presented in appendix <ref>,
which we shall refer to. Define the static tension field matrix (second-order
symmetric tensor) and its vector form as
T(x,y,z)=[[ T_11 T_12 T_13; T_12 T_22 T_23; T_13 T_23 T_33 ]], T(x,y,z)=[[ T_11; T_22; T_23; T_12; T_13; T_23 ]],
which can be visualized by figure <ref> similar
to stress. Since T is the prestress in static equilibrium,
∇·T=0 should hold (ignoring
gravity). As per (<ref>), the contribution of
prestress to the total stress consists of the static prestress T,
and the dynamic prestress τ which is also a symmetric
tensor. Denote T_i and τ_i
as the i th row or column of T and τ
respectively, then we find
τ_i=∑_j=1^3B_ij∇ u_j, B_ij=H_i^⊤DΨH_j, B=[[ B_1; B_2; B_3 ]]=[[ B_11 B_12 B_13; B_21 B_22 B_23; B_31 B_32 B_33 ]],
where block matrix B is non-symmetric. Similar to
σ_i previously discussed, T_i
and τ_i contain the 3 sources of prestress in
the same i th axis. For static equilibrium, ∇·T_i=0
should hold; then in dynamic states, ∇·τ_i
is the body force of prestress in the i th axis.
Besides stress and prestress which are conservative, non-conservative
forces like damping force often exist. For elastic non-metallic solid,
viscoelastic damping is often the predominant damping <cit.>.
We hereby adopt a simple viscoelastic model: the viscous damping force
is positively proportional to the first-order time derivative of stress
and prestress[An explanation for this is that stress and prestress, along with viscous
damping force, are resistances to deformation, and so their relations
should be positive. ]. Occurring on surfaces of small volumes, the damping forces can be
visualized by figure <ref> similar to stress. The
vector of damping forces parallel to the x_i axis is defined
as
ς_i=2μ∂_t(σ_i+τ_i),
where 2μ>0 is the damping coefficient. Other damping models can
also be incorporated into our model, e.g. structural damping that adds
frequency-dependent imaginary parts to elasticity coefficients <cit.>.
As for non-conservative forces besides damping, we represent them
as a single force F=[F_1,F_2,F_3]^⊤ and
will investigate them for different physical systems later.
Having derived all the relevant forces acting on an infinitesimal
volume of solid material, we can invoke Newton's second law to derive
3 PDEs as
ρ∂_ttu_i=∇·G_i+F_i, i=1,2,3.
where vector G_i=σ_i+τ_i+ς_i
is defined as the i th row or column of symmetric tensor G
representing all surface forces except the static prestress. The above
equation can also be derived via a Lagrange formulation of the virtual
work and the variations of kinetic and potential energy. We then seek
a weak form of it via a variational formulation. Multiplying an
arbitrary test function vector ψ(x,y,z) on both
sides of (<ref>) and integrating over Ω yields
∫_Ωψρ∂_ttu_idV-∫_Ωψ∇·G_idV =∫_ΩψF_idV
∫_Ωψρ∂_ttu_idV+∫_Ω(∇ψ)G_idV =∫_ΩψF_idV
where dV=dxdydz and applying
the gradient operator ∇ to a vector results in its Jacobian
matrix. In the above formulation, integration by parts is utilized
with Neumann boundary condition imposed as
G_i·dΓ=0,
where dΓ is the outward normal vector
of the tangent plane of any point on Γ. Note that this condition
applies only when the test function is non-zero on the boundary, viz.
Neumann boundary conditions need not be satisfied at points where
Dirichlet boundary conditions are present.
In the following sections, we shall apply the above 3D elastic
solid material model to various parts of the piano physical system,
including soundboard, string, room material and hammer felt. Also,
we will use space discretization and the Galerkin method to transform
(<ref>) into weak forms of second-order ODE.
§ MODEL FOR PIANO SOUNDBOARD
The symbols in this section follow section 2 if not explicitly defined.
Definitions for scalars, vectors and matrices with subscript _0,
if not explicitly stated, are automatically inferred from their counterparts
without subscript _0.
The study treats the grand piano soundboard as a multi-layer plate
with irregular geometry, and physically models it as a 3D structure.
In <cit.>,
the soundboard was modelled as a Reissner-Mindlin plate, accounting
for the ribs and bridges by making the thickness, density, and elastic
coefficients position-dependent; however, it is unclear how the different
orthotropic angles of each layer were handled. In <cit.>,
the soundboard was modelled as a Kirchhoff-Love plate, considering
a 90 degrees orthotropic rotation of the ribs; however, as the Kirchhoff-Love
plate specifies only 1 unknown for the 3D displacement field, it may
not be accurate enough, especially regarding the ribs and bridges
which are much thicker than the board. Moreover, the role of rim and
lid in shaping the piano timbre seems under-evaluated.
To better account for the soundboard's multi-layer feature, the current
study implements a fully 3D soundboard model. Figure <ref>
shows a simplified soundboard sketch based on <cit.>.
In our physical model, the soundboard is an entity composed of 5 parts:
board, ribs, bridges, rim, lid. The board is parallel to the xOy
plane, with the ribs attached to its under side and the bridges attached
to its upper side. The horizontal fibers of the board and bridges
are approximately parallel, while the horizontal fibers of ribs are
almost orthogonal to that of the board. The rim consists of 2 parts:
the inner rim connects the board's boundary from below using bolts
and dowels, and the outer rim encases the board and the inner rim.
The lid, as large as the board and with an angle to it, is an extension
of the outer rim through a wooden stick (considered part of the lid)
and hinges. All parts of the soundboard are treated as a whole in
computation, meaning that the displacements of part intersections
are uniform.
We define the occupied space of soundboard as Ω⊂ℝ^3,
and the boundaries of soundboard as Γ_1,Γ_2⊂Γ⊂ℝ^3.
Here Γ contains all the 2D surfaces of the 3D volume of soundboard;
Γ_1 contains only the underside surfaces of the inner and
outer rims, as marked red at the right of figure 2; Γ_2
is Γ excluding Γ_1, the vibrating parts. We shall
see in the following that Γ_1 and Γ_2 are where
the Dirichlet boundary conditions apply to the soundboard and the
air respectively.
Based on (<ref>), define the displacement field of
the soundboard as u^*≈u=[u,v,w]^⊤
(here superscript ^* means the exact solution) as
u(x,y,z,t) =φ_1(x,y,z)·[S_1ξ(t)+S_0,1ξ_0(t)],
v(x,y,z,t) =φ_2(x,y,z)·[S_2ξ(t)+S_0,2ξ_0(t)],
w(x,y,z,t) =φ_3(x,y,z)·[S_3ξ(t)+S_0,3ξ_0(t)],
where the sizes of vectors and matrices are
ξ:N×1, ξ_0:N_0×1,
φ_1:N_1×1, S_1:N_1× N, S_0,1:N_0,1× N_0,
φ_2:N_2×1, S_2:N_2× N, S_0,2:N_0,2× N_0,
φ_3:N_3×1, S_3:N_3× N, S_0,3:N_0,3× N_0.
In (<ref>), each scalar displacement unknown
is expressed as the dot product of a space function vector and time
function vector. In FEM, φ_i is the vector
of shape functions (also known as interpolation functions) for variable
u_i, and ϑ_i=S_iξ+S_0,iξ_0
is the vector of coefficients of shape functions, often interpreted
as nodal displacements. The only unknown here is ξ(t),
the vector of unknown DOFs, whereas other vectors and matrices are
known. ξ(t) is defined so that through some linear
transformation by S_1 and S_0,1ξ_0(t),
the nodal displacements ϑ_i can be obtained.
The reason we do not simply define u_i=φ_i·ξ_i
but consider a linear transformation is that it provides additional
flexibility when imposing Dirichlet boundary conditions. For instance,
when u+v rather than u or v is known to be some nonzero functions
on the space boundary, ξ only needs to contain u
and u+v is incorporated in ξ_0. We can thus
see that total DOF, viz. the number of time functions we need to solve,
is N, which does not necessarily equal N_1+N_2+N_3. Often
in simple cases, matrix S=[S_1^⊤,S_2^⊤,S_3^⊤]^⊤
is diagonal with many ones and some zeros on the diagonal.
To express the displacement and its gradient in the DOF vector, we
find the below relations:
u_i=P_iξ+P_0,iξ_0, ∇ u_i=Q_iξ+Q_0,iξ_0,
P=[[ P_1; P_2; P_3 ]]=[[ φ_1^⊤S_1; φ_2^⊤S_2; φ_3^⊤S_3 ]], P_0=[[ P_0,1; P_0,2; P_0,3 ]]=[[ φ_1^⊤S_0,1; φ_2^⊤S_0,2; φ_3^⊤S_0,3 ]],
Q=[[ Q_1; Q_2; Q_3 ]]=[[ (∇φ_1)^⊤S_1; (∇φ_2)^⊤S_2; (∇φ_3)^⊤S_3 ]], Q_0=[[ Q_0,1; Q_0,2; Q_0,3 ]]=[[ (∇φ_1)^⊤S_0,1; (∇φ_2)^⊤S_0,2; (∇φ_3)^⊤S_0,3 ]].
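As a toy illustration of the role of S and S_0 (a hypothetical 1D example, not part of the soundboard model itself), the sketch below uses three linear shape functions with the third nodal value prescribed by a Dirichlet condition, and recovers the displacement from the reduced DOF vector ξ and the known part ξ_0.

```python
import numpy as np

# Three equally spaced nodes on [0, L]; linear "hat" shape functions.
L = 1.0
nodes = np.array([0.0, 0.5, 1.0]) * L

def phi(x):
    """Vector of the three hat shape functions evaluated at x."""
    return np.array([np.interp(x, nodes, [1, 0, 0]),
                     np.interp(x, nodes, [0, 1, 0]),
                     np.interp(x, nodes, [0, 0, 1])])

# Node 3 carries a prescribed (Dirichlet) value g(t); nodes 1 and 2 are
# unknowns.  The nodal vector is theta = S xi + S0 xi0.
S  = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
S0 = np.array([[0.0],
               [0.0],
               [1.0]])

def u(x, xi, g):
    """Displacement at x for unknown DOFs xi and prescribed value g."""
    xi0 = np.array([g])
    return phi(x) @ (S @ xi + S0 @ xi0)

# Example: hypothetical nodal values 0.1 and -0.2, boundary value g = 0.05.
print(u(0.25, np.array([0.1, -0.2]), 0.05))
```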
Given that all parts of the soundboard are mainly made of wood, an
orthotropic material whose orthotropic directions are determined by
fibers <cit.>, the compliance matrix writes
D_orth^-1=[[ 1/E_x -ν_xy/E_x -ν_xz/E_x 0 0 0; 1/E_y -ν_yz/E_y 0 0 0; 1/E_z 0 0 0; 1/G_xy 0 0; 1/G_xz 0; Sym. 1/G_yz ]].
Note that different layers would have different elastic coefficients.
It is then necessary to consider that for a certain layer like ribs,
the global axes x,y may need to be rotated by an angle α,
as denoted in figure <ref>, to become the material
orthotropic axes x',y' for that layer. The actual constitutive
matrix is then
D=Z^⊤D_orthZ, Z=[[ C^2 S^2 0 SC 0 0; S^2 C^2 0 -SC 0 0; 0 0 1 0 0 0; -2SC 2SC 0 C^2-S^2 0 0; 0 0 0 0 C S; 0 0 0 0 -S C ]],
where S=sinα,C=cosα. Readers may refer to <cit.>
for a detailed deduction. After rotation, the constitutive matrix
would have 13 non-zero entries for its upper triangular part.
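The construction of the rotated constitutive matrix can be sketched as follows; the engineering constants are spruce-like placeholder values (not the calibrated material data of the soundboard layers), and for a generic rotation angle the printed count of non-zero upper-triangular entries should equal 13, as stated above.

```python
import numpy as np

def constitutive_rotated(Ex, Ey, Ez, nu_xy, nu_xz, nu_yz,
                         Gxy, Gxz, Gyz, alpha):
    """Orthotropic constitutive matrix rotated by alpha about the z axis."""
    # Compliance matrix in the material frame (strain ordering
    # [e11, e22, e33, 2e12, 2e13, 2e23]).
    Dinv = np.array([
        [1/Ex,      -nu_xy/Ex, -nu_xz/Ex, 0,     0,     0    ],
        [-nu_xy/Ex,  1/Ey,     -nu_yz/Ey, 0,     0,     0    ],
        [-nu_xz/Ex, -nu_yz/Ey,  1/Ez,     0,     0,     0    ],
        [0, 0, 0, 1/Gxy, 0,     0    ],
        [0, 0, 0, 0,     1/Gxz, 0    ],
        [0, 0, 0, 0,     0,     1/Gyz]])
    D_orth = np.linalg.inv(Dinv)

    S, C = np.sin(alpha), np.cos(alpha)
    Z = np.array([
        [ C*C,    S*S,   0,  S*C,       0,  0],
        [ S*S,    C*C,   0, -S*C,       0,  0],
        [ 0,      0,     1,  0,         0,  0],
        [-2*S*C,  2*S*C, 0,  C*C - S*S, 0,  0],
        [ 0,      0,     0,  0,         C,  S],
        [ 0,      0,     0,  0,        -S,  C]])
    return Z.T @ D_orth @ Z

# Illustrative spruce-like numbers (Pa) and a generic rotation angle.
D = constitutive_rotated(11e9, 0.9e9, 0.5e9, 0.35, 0.4, 0.3,
                         0.75e9, 0.6e9, 0.03e9, np.pi / 6)
# Count of non-zero entries in the upper triangle (expected: 13).
print(np.count_nonzero(np.triu(np.abs(D) > 1e-3)))
```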
Combining (<ref>) with (<ref>)
(<ref>) , the following relations are found:
σ_i =A_iQξ+A_iQ_0ξ_0,
τ_i =B_iQξ+B_iQ_0ξ_0,
Now we substitute (<ref>) (<ref>)
into (<ref>) (<ref>) (<ref>),
and use P_i^⊤ as a vector of test functions
for the i th PDE. These would yield 3 groups of weak-form equations,
each group having N equations. Summing the 3 groups into 1 group,
we derive a system of N coupled second-order ODEs as
Mξ̈(t)+Cξ̇(t)+Kξ(t)=f(t),
where
M=∫_ΩρP^⊤PdV, M_0=∫_ΩρP^⊤P_0dV,
K=∫_ΩQ^⊤(A+B)QdV, K_0=∫_ΩQ^⊤(A+B)Q_0dV,
C=2μK, C_0=2μK_0,
f(t)=∫_ΩP^⊤FdV-M_0ξ̈_0(t)-C_0ξ̇_̇0̇(t)-K_0ξ_0(t),
are called the mass matrix, stiffness matrix, damping matrix and force
vector respectively. In order that reasonable solutions can be found,
equation (<ref>) is constrained by the following Dirichlet
boundary condition and initial conditions:
u|_x∈Γ_1=0,
u|_t=0=∂_tu|_t=0=0.
The choice of this boundary condition stems from the fact that the whole soundboard is rigidly supported by beams and legs on the underside of the rim (see figure 2). Thanks to the flexibility of our 3D model, the rim can be treated as a natural extension of the main board, thereby capturing more nuanced boundary conditions compared to the clamped or simply supported cases in 2D models. Given appropriate boundary conditions,
the mass and stiffness matrices are symmetric positive definite.
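The following minimal sketch (a generic illustration with tiny stand-in matrices, not the actual assembled system or the scheme developed in section 7) shows how symmetric positive definite M and K admit a modal decomposition via the generalized eigenvalue problem; since C=2μK, the same modes also diagonalize the damping matrix.

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric positive definite mass and stiffness matrices standing in
# for the assembled M and K (the real ones come from the FEM integrals above).
M = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
K = 1e4 * np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  2.0]])

# Generalized eigenvalue problem K phi = omega^2 M phi.
omega2, Phi = eigh(K, M)           # eigenvectors returned M-orthonormal
freqs_hz = np.sqrt(omega2) / (2 * np.pi)

# With xi = Phi @ eta, the undamped system decouples into
# eta_k'' + omega_k^2 eta_k = Phi[:, k] . f(t); C = 2 mu K stays diagonal too.
assert np.allclose(Phi.T @ M @ Phi, np.eye(3), atol=1e-10)
assert np.allclose(Phi.T @ K @ Phi, np.diag(omega2), atol=1e-6)
print(freqs_hz)
```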
§ MODEL FOR PIANO STRINGS
The symbols in this section follow sections 2 and 3 if not explicitly
defined. Definitions for scalars, vectors and matrices with subscript
_0, if not explicitly stated, are automatically inferred from
their counterparts without subscript _0.
The study treats a grand piano string as a cylinder material, and
physically models it as a 1D beam prestressed longitudinally. As shown
in figure <ref>, the string is fixed at the agraffe end
(immobile) and coupled to the bridge end (mobile) <cit.>
through pins. Define the central line of the string to start from
the tuning pin [-L_0,0,0]^⊤, then go through the agraffe
x_0=[0,0,0]^⊤, the front bridge pin x_1=[L_1,0,0]^⊤,
the rear bridge pin x_2=[L_2,0,0]^⊤, and
finally the hitch pin x_3=[L_3,0,0]^⊤, where
L_1 is the so-called “speaking length”; let r be the radius
of cross-section and ρ be the homogenous density per unit volume.
It is known that the vibration of piano strings is primarily vertical
(z direction) <cit.>. Nevertheless, longitudinal
vibration (x direction) <cit.> and horizontal
vibration (y direction) <cit.> may also be essential,
as they contribute respectively to the sound precursor <cit.>
and double-decay <cit.> phenomena. Two more kinds of
vibration that require attention are the rotations of the string's
cross-section towards the y and z axes, so as to account for
the stiffness of piano strings that may result in slightly inharmonic
sounds <cit.>. To incorporate all these essential
kinds of vibrations, define the displacement field of a piano string
as u^*≈u=[u,v,w]^⊤ in a
similar notion to (<ref>) (thus some details
are omitted here) as
u(x,y,z,t) =φ_1(x)·[S_1ξ(t)+S_0,1ξ_0(t)]-yφ_4(x)·[S_4ξ(t)+S_0,4ξ_0(t)]
-zφ_5(x)·[S_5ξ(t)+S_0,5ξ_0(t)],
v(x,y,z,t) =φ_2(x)·[S_2ξ(t)+S_0,2ξ_0(t)],
w(x,y,z,t) =φ_3(x)·[S_3ξ(t)+S_0,3ξ_0(t)],
where subscripts _1,_2,_3,_4,_5 refer to vectors or matrices
defined for longitudinal, horizontal, vertical, horizontal rotational
and vertical rotational vibrations respectively, as marked u,v,w,α,β
in figure <ref>. From a 3D perspective, it can be seen
that with regard to the y and z axes, the x displacement
is first-order modelled, while the y and z displacements are
zeroth-order modelled. For the rest of this section, we only discuss
the particularities of the piano string model relative to those presented
in sections 2 and 3.
The below relations hold
u_i =P_iξ+P_0,iξ_0, ∇ u_i=Q_iξ+Q_0,iξ_0,
P =[[ P_1; P_2; P_3 ]]=[[ φ_1^⊤S_1-yφ_4^⊤S_4-zφ_5^⊤S_5; φ_2^⊤S_2; φ_3^⊤S_3 ]],
Q =[[ Q_1; Q_2; Q_3 ]]=[[ (∇φ_1)^⊤S_1-y(∇φ_4)^⊤S_4-z(∇φ_5)^⊤S_5; (∇φ_2)^⊤S_2; (∇φ_3)^⊤S_3 ]],
where P_0 and Q_0 are defined
similar to P and Q. From (<ref>)
(<ref>), it can be found that ϵ_22=ϵ_33=2ϵ_23=0,
thus the strain vector reduces to ϵ=[ϵ_11,2ϵ_12,2ϵ_13]^⊤,
and (<ref>) can be redefined as
H_1=[[ 1 0 0; 0 1 0; 0 0 1 ]], H_2=[[ 0 0 0; 1 0 0; 0 0 0 ]], H_3=[[ 0 0 0; 0 0 0; 1 0 0 ]].
As the steel used to manufacture piano strings can be considered as
isotropic material, the stress-strain relation in (<ref>)
reduces to
σ=[[ σ_11; σ_12; σ_13 ]]=[[ D 0 0; 0 G 0; 0 0 G ]][[ ϵ_11; 2ϵ_12; 2ϵ_13 ]]=Dϵ,
where D=E(1-ν)/(1+ν)(1-2ν), G=E/2(1+ν);
E is the Young's modulus, ν is the Poisson's ratio, and G
is the shear modulus.
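As a quick numerical check with generic steel-like values (E = 200 GPa, ν = 0.3; these are illustrative, not the calibrated string parameters):

```python
# Illustrative evaluation of the string moduli defined above.
E, nu = 200e9, 0.30
D = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))   # ~269.2e9 Pa
G = E / (2 * (1 + nu))                         # ~ 76.9e9 Pa
print(D / 1e9, G / 1e9)
```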
It now suffices to derive a system of N coupled second-order ODEs
simply by substituting (<ref>) into (<ref>)
(<ref>). In order that reasonable solutions
can be found, equation (<ref>) is at least constrained by
the following Dirichlet boundary conditions:
u|_x=0=u|_x=L_3=0,
which means the motion of string vanishes at the agraffe point and
the hitch point. Another Dirichlet boundary condition at the coupling
point x_1 will be discussed in section <ref>.
The initial conditions are
u|_t=0=∂_tu|_t=0=0.
§ MODEL FOR SOUND RADIATION IN THE AIR
§.§ Model for the air
The study adopts a 3D acoustic wave model to simulate piano sound
radiation in the air. For simulating wave propagation from the piano
soundboard to a listener situated at a particular location, <cit.>
used Rayleigh integral to compute the acoustic pressure field, which
is able to simulate the different delays and decays of sound at different
positions, but unable to account for boundary conditions. <cit.>
used 2 first-order linear acoustic equations for 4 coupled unknowns:
acoustic pressure (1 unknown) and acoustic velocity (3 unknowns).
This approach excels at capturing reflections at spatial boundaries
like walls and soundboard-air coupling, but seems unable to produce
decaying room impulse response due to the absence of damping terms[It seems the acoustic model in <cit.> relies on
soundboard-air coupling to produce the damping. When the acoustic
equations are fully coupled to the soundboard equations, the soundboard's
damping mechanisms may apply to the air as well. This means even if
the acoustic equations contains no damping, the overall energy may
be stll stable and waves with infinite amplitudes would not occur.
However, we choose to add viscous damping terms to the acoustic equations
for two reasons. Firstly, it can be observed in audio recording that
a very short impulse in a room gets a decaying response, which attributes
to the reverberation effect. Secondly, the coupling computation approach
we shall introduce later actually relies on decoupling and iterative
strategies, meaning that without the room itself's damping mechanisms
the computed pressure may exhibit infinite amplitudes.].
Fluid dynamics exhibit both linear and nonlinear behaviours as decribed
by the Navier-Stokes equation, but considering that sound pressure
fluctuations in air are often small, linearized models for sound radiation
in the air can achieve a satisfactory level of accuracy for us. The
acoustic radiation equations we utilize are based on the linearized
fluid dynamics equations <cit.>, including conservation
of mass and momentum. It works for isotropic, compressible, viscous,
adiabatic flow with homogenous ambient states, in Eulerian description.
Damping due to heat conduction is simplified away here and readers
may refer to <cit.> for incorporating
thermal damping. The governing PDEs write
1/ρ c^2ṗ+∇·u=0,
ρu̇=∇·G+F,
where u(x,y,z,t) is the acoustic velocity field; p(x,y,z,t)
is the perturbation of the acoustic pressure field, which is also the
final digital audio signal; ρ is the air density; c is the
sound propagation speed in the air; G is the symmetric
surface force tensor; the gradient operator ∇ applied to a
vector returns its Jacobian matrix, and the divergence operator ∇·
applied to a matrix returns a vector of each row's divergence; F=[F_1,F_2,F_3]^⊤
is the external force (viz. body force) treated as zero in this study.
According to the Navier-Stokes equation, the surface force tensor
G is defined as
G=-pI+μ_1(∇u̇+(∇u̇)^⊤)+μ_2(∇·u̇)I,
where μ_1 and μ_B are dynamic viscosity and bulk viscosity
coefficients accounting for energy dissipation, and μ_2=μ_B-2/3μ_1.
To express the divergence of surface force using acoustic velocity
gradients, we find
∇·G_i=-∇ p+∇·ς_i, ς_i=∑_j=1^3(μ_1A_ij+μ_2B_ij)∇ u_j,
A=[[ A_1; A_2; A_3 ]]=[[ A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33 ]], B=[[ B_1; B_2; B_3 ]]=[[ B_11 B_12 B_13; B_21 B_22 B_23; B_31 B_32 B_33 ]],
A=[[ 2 0 0 0 0 0 0 0 0; 0 1 0 1 0 0 0 0 0; 0 0 1 0 0 0 1 0 0; 0 1 0 1 0 0 0 0 0; 0 0 0 0 2 0 0 0 0; 0 0 0 0 0 1 0 1 0; 0 0 1 0 0 0 1 0 0; 0 0 0 0 0 1 0 1 0; 0 0 0 0 0 0 0 0 2 ]], B=[[ 1 0 0 0 1 0 0 0 1; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 1 0 0 0 1 0 0 0 1; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 1 0 0 0 1 0 0 0 1 ]],
where G_i, ς_i is the
i th column or row of G, ς;
here ς is the damping force tensor. The 2
PDEs (<ref>) (<ref>) then become[It is also viable to decouple the acoustic equations into a single
second-order equation containing only u as unknown.
We choose to not do so because it would result in a damping matrix
not diagonalizable by the mass or stiffness matrices. To fully decouple
the system ODEs would then require doubling the DOFs (equivalent to
6 variables) to transform into a first-order system. This appears
suboptimal to us as the original first-order system of acoustic equations
has only 4 variables.]
ρu̇_i-∇·ς_i+∂_x_ip=F_i (i=1,2,3),
ṗ+ρ c^2∇·u=0,
where the divergence of velocity can be expressed in velocity gradients
as
∇·u=∑_j=1^3H_j∇ u_j,
H=[[ H_1 H_2 H_3 ]]=[[ 1 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 0 0 1 ]].
For acoustic modes the vorticity ∇×u
may be assumed zero so that a decoupled equation containing only p
as unknown may be obtained, but we choose to not do so because the
velocity field of any solid to couple with may not satisfy zero vorticity.
Now we seek the weak forms of (<ref>)
(<ref>) via the Galerkin method. Multiplying
an arbitrary test function vector ψ on both sides
of (<ref>) yields
∫_Ωψρu̇_idV-∫_Ωψ(∇·ς_i)dV +∫_Ωψ∂_x_ipdV=∫_ΩψF_idV
∫_Ωψρu̇_idV+∫_Ω(∇ψ)ς_idV +∫_Ωψ∂_x_ipdV=∫_ΩψF_idV.
When using integration by parts in the above, Neumann boundary condition
is imposed as
ς_i·dΓ=0,
where dΓ is the outward normal vector
of the tangent plane of any point on the boundary of acoustic space.
This condition means the damping force should vanish on the non-Dirichlet
boundaries. Similarly for (<ref>), multiplying
a test function vector results in
∫_ΩψṗdV+∫_Ωψρ c^2(∇·u)dV=0.
To derive the ODEs, we define space discretization similar to (<ref>),
but with one more variable p, as
u(x,y,z,t) =φ_1(x,y,z)·[S_1ξ(t)+S_0,1ξ_0(t)],
v(x,y,z,t) =φ_2(x,y,z)·[S_2ξ(t)+S_0,2ξ_0(t)],
w(x,y,z,t) =φ_3(x,y,z)·[S_3ξ(t)+S_0,3ξ_0(t)],
p(x,y,z,t) =φ_4(x,y,z)·[S_4ξ(t)+S_0,4ξ_0(t)].
And similar to (<ref>), we define convenience matrices
to express that
u_i=P_iξ+P_0,iξ_0, ∇ u_i=Q_iξ+Q_0,iξ_0,
P=[[ P_1; P_2; P_3 ]]=[[ φ_1^⊤S_1; φ_2^⊤S_2; φ_3^⊤S_3 ]], P_0=[[ P_0,1; P_0,2; P_0,3 ]]=[[ φ_1^⊤S_0,1; φ_2^⊤S_0,2; φ_3^⊤S_0,3 ]],
Q=[[ Q_1; Q_2; Q_3 ]]=[[ (∇φ_1)^⊤S_1; (∇φ_2)^⊤S_2; (∇φ_3)^⊤S_3 ]], Q_0=[[ Q_0,1; Q_0,2; Q_0,3 ]]=[[ (∇φ_1)^⊤S_0,1; (∇φ_2)^⊤S_0,2; (∇φ_3)^⊤S_0,3 ]],
P_4=φ_4^⊤S_4, P_0,4=φ_4^⊤S_0,4, Q_4=(∇φ_4)^⊤S_4 ,Q_0,4=(∇φ_4)^⊤S_0,4.
Then similar to (<ref>), we find
∑_j=1^3A_ij∇ u_j=A_iQξ+A_iQ_0ξ_0,
∑_j=1^3B_ij∇ u_j=B_iQξ+B_iQ_0ξ_0,
∂_x_ip=∂_x_iφ_4·S_4ξ+∂_x_iφ_4·S_0,4ξ_0,
so that space partial derivatives can be expressed as linear transformations
of the DOF vector ξ(t) or its time derivatives.
It now suffices to derive the system of second-order ODEs similar
to (<ref>) (<ref>).
We use P_i^⊤ as a vector of test functions
for the i th weak form equation in (<ref>)
and use P_4^⊤ to test the weak form (<ref>),
which would yield 4 groups of equations, each group having N equations.
Summing the 4 groups into 1 group, we derive a system of N coupled
first-order ODEs as
Mξ̇(t)+Kξ(t)=f(t),
where
M=∫_Ω(ρP^⊤P+P_4^⊤P_4)dV,
M_0=∫_Ω(ρP^⊤P_0+P_4^⊤P_0,4)dV,
K=∫_Ω[Q^⊤(μ_1A+μ_2B)Q+ρ c^2P_4^⊤HQ+P^⊤Q_4]dV,
K_0=∫_Ω[Q^⊤(μ_1A+μ_2B)Q_0+ρ c^2P_4^⊤HQ_0+P^⊤Q_0,4]dV,
f(t)=∫_ΩP^⊤FdV-M_0ξ̇_̇0̇(t)-K_0ξ_0(t).
Here M and K may not be as meaningful
as the mass and stiffness matrices in the second-order ODEs case.
The initial conditions are
u|_t=0=0, p|_t=0=0.
The acoustic space is often finite, which means boundary conditions
are essential to shape the solutions of acoustic equations. Let Γ^(c)∈ℝ^3
be the acoustic space's whole solid boundary, of which Γ_1^(c),Γ_2^(c)∈Γ^(c)
are the room boundary (grounds, walls, ceilings) and piano soundboard
parts respectively, as shown in figure <ref>. In <cit.>,
the walls were assumed rigid, which can produce sound reflection effects
but may miss sound absorption effects. This motivates our introduction
of vibratory acoustic boundaries to account for various phenomena
like reflection, scattering, absorption, diffusion, resonance etc.
§.§ Model for the room
In this section, we delve into the specification of room boundary
models, which serve as the cornerstone for the subsequent analysis
of solid-air coupling dynamics. For the sake of conciseness, we adopt
a simplified representation of the room environment as a shoebox-shaped
3D space accommodating a piano. It is worth noting, however, that
this room model remains versatile and can be readily adapted to accommodate
irregular room geometries. The room's boundaries—encompassing floors,
walls, and ceilings—play pivotal roles in shaping the acoustic
behavior within, acting as both reflectors and absorbers of sound
waves. In the context of physical modeling, these boundaries are treated
as 3D elastic materials, akin to the piano's soundboard model. This
methodological alignment facilitates the straightforward application of the
general 3D elasticity model and the soundboard model to the room,
obviating the need for reformulations of the governing equations and
weak forms.
The primary specialization within this framework lies in the stipulation
of boundary conditions of room barriers. As highlighted in figure
<ref>, the inner side of ground, wall and ceiling (blue
color), defined as Γ_1^(f), is the room material's interface
with the air; the underside of ground material (red color), defined
as Γ_2^(f), is where the Dirichlet boundary condition
applies. Firmly rooted in the earth, the ground material should experience
zero displacement along Γ_2^(f). This condition is written
as
u^(f)|_x∈Γ_2^(f)=0.
In case the inner concrete layers of walls and ceilings are considered
rigid, displacements on the surfaces of them should be zero too. Another
specialization is that prestress need not be considered for room barriers.
Even if it exists, the dynamic deformation may not be large enough
to induce significant effect of prestress on vibration.
§ MODEL FOR COUPLING BETWEEN DIFFERENT PARTS OF THE PIANO
§.§ Model for hammer-string coupling
The piano hammer positioned below the string functions as the excitation
of string vibration, which is known to be a highly nonlinear process
<cit.>. We consider two main parts of the hammer
relevant to this excitation process: the wooden hammer shank that
can be approximated as non-deformable; the hammer felt that is deformable,
impacting and exerting force on the string. Normally, the hammer moves
in a circular arc when triggered by the piano player's key action, striking
the string at a certain point during a very short period of time,
and is also pushed back by the vibrating string.
§.§.§ One hammer striking one string
For the case of one hammer striking one string, the study follows
the 0D nonlinear hammer-string interaction model in <cit.>,
with some refinements: the rotational movement of the hammer <cit.>
and the horizontal interaction force are taken into account.
As shown in figure <ref>, xyz is the coordinate system
of piano string, where the x axis of the string system has an angle
α_0 to the ground. To better describe the hammer shank motion
which is rotation around point P_1, we will not often use
the xyz system. Instead, we define a static coordinate system x'y'z'
with origin P_1, y' axis the same as y axis, z' axis
with negative direction the same as gravity, and x' axis perpendicular
to the y'O'z' plane. The motion of hammer shank as a rigid body
can then be described as rotating around the y' axis and not moving
in the y' direction. We can thus use a single variable θ(t),
the angle of P_1P_3 to the x' axis, to
fully describe the hammer shank motion. The hammer shank drives the
overall motion of the shank head P_2 and felt head P_4.
Nevertheless, when the felt head is in contact with the string's Q_0
point, it undergoes a compression ϑ=[ϑ_1,ϑ_2,ϑ_3]^⊤
from P_4 to P_6; the compressed head P_6 overlaps with the dynamic string
point Q_1, and the compression also contributes to the string's motion.
reasonable orthogonal decomposition of this compression, we define
a dynamic coordinate system x”y”z” with origin P_3, x”
axis with positive direction P_5Q_0, y”
axis with positive direction the same as y axis, and z” axis
with positive direction P_5P_2. The hammer
compression can then be decomposed onto the x”,y”,z” axes as
marked ϑ_1,ϑ_2,ϑ_3 respectively
in figure <ref>.
Denote the displacement of string point Q_0 with coordinate x_0^(a)
in the xyz system as u_0^(a)=u^(a)(x_0^(a),t).
Denote the lengths of Q_0P_1, P_0P_1, P_1P_3,
P_2P_3, P_1P_2, P_2P_4 as L, L_0, L_1,
L_2, L_3, L_4 respectively, and the angle of P_1Q_0
to the x' axis as α_1. Our geometric analysis finds the
coordinates of Q_1 (equivalent to P_6 in case of compression)
in the x'y'z' system and x”y”z” system, denoted x_1^(a')
and x_1^(a”), are
x_1^(a')=r_0+R_0u_0^(a), x_1^(a”)=r+R(θ)x_1^(a'),
r_0=[[ Lcosα_1; 0; Lsinα_1 ]], R_0=[[ cosα_0 0 -sinα_0; 0 1 0; sinα_0 0 cosα_0 ]],
r=[[ -L_1; 0; -L_2 ]], R(θ)=[[ cosθ 0 sinθ; 0 1 0; -sinθ 0 cosθ ]].
And converting x_1^(a”) to x_1^(a')
results in
x_1^(a')=s(θ)+S(θ)x_1^(a”),
s(θ)=[[ L_1cosθ-L_2sinθ; 0; L_1sinθ+L_2cosθ ]], S(θ)=[[ cosθ 0 -sinθ; 0 1 0; sinθ 0 cosθ ]].
As previously discussed, we assume Q_0 and P_2 are non-deformable
whereas P_4 is deformable. This means when the hammer is in contact
with string, P_4 moves to P_6; when contact is absent, the
position of P_4 is determined by the shank but not the string.
The condition to satisfy that the hammer is in contact with string
should be z_1^(a”)≤ L_4, viz. the z” direction distance
between string point and shank head is less than the static thickness
of hammer felt. The hammer felt compression is then
ϑ={[ x_1^(a”)-L_4, z_1^(a”)≤ L_4; 0, z_1^(a”)>L_4 ].
where L_4=[0,0,L_4]^⊤; negative value of
ϑ_i means the hammer felt is compressed towards the negative
direction of x_i” axis, and vice versa. Due to compression,
the hammer felt exerts an interaction force F^(a”)=[F_1^(a”),F_2^(a”),F_3^(a”)]^⊤
(in x”y”z” system) on the string. Reciprocally, the hammer felt
suffers -F^(a”) from the string according to Newton's
third law[Note that we ignore here the string's normal and tangent stress and
prestress on the interaction surface exerting on the hammer felt (as
well as the relevant strain on the string side), which should already
be zero per the Neumann boundary condition specified in (<ref>)
if DOFs are given to string displacements on this surface. Even if
they should not be zero, it is relatively acceptable to ignore them
as they are probably less contributive than the hammer's compression
force. Treating them as non-zero would require imposing Dirichlet
boundary conditions on some relevant DOFs of the string's side, which
is a computational challenge.]. Based on <cit.>, the hammer-string interaction
force in the x”_i axis is modelled as a nonlinear function of
compression as
F_i^(a”)=-sgn(ϑ_i)[k_i|ϑ_i|^p_i+r_ik_i∂_t(|ϑ_i|^p_i)], i=1,2,3,
where k_i is the stiffness of hammer felt; p_i is a positive
exponent accounting for nonlinearity; r_i is the relaxation coefficient
accounting for the hammer's hysteretic and dissipative behaviour;
function sgn(·) returns the sign of a real number.
Converting the interaction force to the x'y'z' and xyz coordinates
yields F^(a')=S(θ)F^(a”)
and F^(a)=R_0^⊤F^(a').
To couple the hammer force with string motion, we need equations in
the x'y'z' system governing the motion of hammer shank, a rigid
body. Denote the coordinates of shank mass centre P_0, shank
rotation centre P_1, shank head P_2 in the x'y'z' system
as x_0^(d'), x_1^(d'), x_2^(d')
respectively. It can be found that x_0^(d')=[L_0cos(β_0+θ),0,L_0sin(β_0+θ)]^⊤
and x_2^(d')=s(θ), where β_0
is the angle of P_1P_0 to the P_1P_3.
Define the total mass and homogenous line density of the hammer shank
as m, ρ. It can be found that m=ρ(L_1+L_2), and
the axis-free position of center of mass P_0 projected on P_1P_3
and P_2P_3 are (1/2L_1^2+L_1L_2)/(L_1+L_2)
and 1/2L_2^2/(L_1+L_2) respectively. It is obvious
that the shank as a rigid body suffers these forces: string reaction
force F^(d')=-F^(a') at P_2;
a rotation constraint force at P_1, which has zero torque with
respect to the y' axis; gravity [0,0,-mg]^⊤ at P_0,
where g is the scalar gravitational acceleration. It follows that
the total torque with respect to the y' axis is mgL_0cos(β_0+θ)+s_2(θ)·F^(d')
(only 1 dimension of torque is needed here), where
s_2(θ)=[[ L_1sinθ+L_2cosθ; 0; -L_1cosθ+L_2sinθ ]];
the moment of inertia with respect to the y” axis, denoted I,
is
I=∫_0^L_1ρ l^2dl+∫_0^L_2ρ(l^2+L_1^2)dl=ρ(1/3L_1^3+1/3L_2^3+L_1^2L_2).
Applying Newton's second law for rotation, the differential equation
of hammer shank motion is
Iθ̈=-μθ̇+mgL_0cos(β_0+θ)+s_2(θ)·F^(d'),
where -μθ̇ is the simplified damping force. Here t=0
is the moment when the felt head first contacts the string and compression
is still zero, and initial angle and angular velocity at this time
should be known.
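As an illustration, the right-hand side of this rotation equation can be evaluated as follows; this is a minimal sketch with an illustrative function name, assuming the reaction force F^(d') is already available as a length-3 array in the x'y'z' system.

```python
import numpy as np

def shank_angular_acceleration(theta, theta_dot, F_dprime,
                               I, mu, m, g, L0, L1, L2, beta0):
    """Angular acceleration of the rigid hammer shank about the y' axis.

    F_dprime : string reaction force -F^(a') acting at the shank head P_2;
    I, mu    : moment of inertia and damping coefficient;
    m, g, L0, L1, L2, beta0 : shank mass and geometry as defined in the text."""
    s2 = np.array([L1 * np.sin(theta) + L2 * np.cos(theta),
                   0.0,
                   -L1 * np.cos(theta) + L2 * np.sin(theta)])
    torque = m * g * L0 * np.cos(beta0 + theta) + s2 @ F_dprime
    return (-mu * theta_dot + torque) / I
```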
From the aforementioned deductions, we can abstract the hammer-string
interaction force F^(e) as a nonlinear operator
ℱ(u_0^(a),θ):(ℝ^3,ℝ)→ℝ^3.
This force should be added to the rhs of (<ref>) as a non-conservative
force. Then, (<ref>) and (<ref>) form
2 sets of coupled equations for 2 sets of variables (u^(a),θ),
which provides a foundation for obtaining a solution theoretically.
Due to the highly nonlinear nature of hammer-string coupling, particularly
in that Taylor series approximation may not work well for potentially
non-smooth functions in (<ref>), it would be inappropriate
to use the common perturbation method. In section <ref>
we shall introduce an explicit time discretization method to efficiently
solve the hammer-string coupling.
§.§.§ One hammer striking two or three strings
The case of one hammer striking multiple, say three, strings can simply
be treated as if three independent felt heads were striking their
corresponding strings. Three independent interaction forces F_i=1,2,3^(e')
are computed from three independent compressions. The string force
exerted on the shank is the sum F^(d')=-∑_i=1^3F_i^(e'),
assuming the 3 compression forces apply to the same point of shank
head. However, this approach may lose the dynamic interaction between
different striking points of the hammer felt.
A 3D hammer model may be more accurate in capturing the interplay
of different hammer striking points and the different positions of
compression forces. The hammer felt is now modelled as a 3D elastic
material with space discretization. Two kinds of spacial boundaries
consist in the hammer felt: one contains the felt's 3 potential contact
points with the 3 strings, represented as P_4,i with coordinates
x_4,i=1,2,3^(e”); another is the contact surface
Γ_1^(e”)∈ℝ^2 with the hammer shank. The
hammer shank is considered as a rigid 3D object, which means no inner
space discretization is needed for it. Above all, one needs to pay
attention to the choice of coordinate system for the 3D geometry of
felt. A recommended choice here is the dynamic x”y”z” system,
which eliminates the felt's rigid body movement component and is thus
suitable for FEM computation. Also, displacement fields u^(e”)(x^(e”),t)
for hammer felt and u^(d”)(x^(d”),t)
for hammer shank need to be defined.
Derivation of the felt compression dynamics is now based on the 3D
elastic material model and soundboard model previously introduced,
without the need for modelling prestress. We first consider the boundary
conditions needed for the felt model. Denote the coordinate of Q_1,i
in the x”y”z” system as x_1,i^(a”), which
can be computed similar to (<ref>).
To represent hammer felt compression, Dirichlet boundary condition
should be imposed on these 3 contact points as
u^(e”)(x_4,i^(e”),t)={[ x_1,i^(a”)-x_4,i^(e”), z_1,i^(a”)≤ L_4; 0, z_1,i^(a”)>L_4 ].,
where z_1,i^(a”)≤ L_4 means contact is present and otherwise
not. This determined displacement would occur on the rhs of system
ODEs like (<ref>) as a source term. This means
without contact, not only P_4,i but also the entire felt will
not experience deformation; with contact, the position of P_4,i,
i.e. P_6,i, is the same as Q_1,i, triggering motion and
compression of the hammer felt. As for another boundary Γ_1^(e”)
which interfaces the shank, Dirichlet boundary condition should be
imposed on as
u^(e”)(x^(e”),t)=0, x^(e”)∈Γ_1^(e”),
because the rigid shank should have no deformation.
We then consider the dynamics of string and shank impacted by the
felt. At the contact point with string, the felt's nonzero surface
forces G^(e”)(x_4,i^(e”),t) should
be transmitted to the string as
F_i^(a)=-[[ 0 Z_12 Z_13; Z_12 Z_22 Z_23; Z_13 Z_23 Z_33 ]][[ 1; 1; 1 ]],
{ Z_ij} =R_0^⊤R(θ)^⊤G^(e”)(x_4,i^(e”),t)R(θ)R_0,
where Z_11 is not transmitted considering that the hammer felt
seems to have no direct contact with the string's surface whose normal
is in the x axis (longitudinal direction). As for the felt's surface
force exerting on the shank along the boundary Γ_1^(e”)=Γ_1^(d”),
previous practice of directly transmitting the felt-string interaction
force to the shank is not applicable here due to the 3D nature of
felt and shank. For a point x^(e”)=x^(d”)
in Γ_1^(e”), with n^(e”)(x^(e”))
as the outward normal of its tangent plane, the felt would exert a
surface force
F^(d”)(x^(d”),t)=-G^(e”)(x^(e”),t)n^(e”)(x^(e”))
on the shank. Then the total contribution of felt force to the torque
of shank with respect to the y” axis is
T_1=∫_Γ_1^(d')(F_1^(d')z-F_3^(d')x)dV
where F_i^(d') is the i th entry of vector F^(d')(x^(d'),t)=S(θ)F^(d”)(R(θ)x^(d'),t),
and Γ_1^(d') is converted from Γ_1^(d”) using
(<ref>). The total contribution
of gravity to the torque of shank with respect to the y” axis
is T_0=mgx_0^(d') where x_0^(d') is the
coordinate of P_0 depending on θ. The moment of inertia
with respect to the y” axis is
I=∫_Ω^(d')ρ(x^2+z^2)dV,
where Ω^(d')∈ℝ^3 is the shank's volume domain.
Applying Newton's second law for rotation, the differential equation
of hammer shank motion is
Iθ̈=-μθ̇+mgx_0^(d')+∫_Γ_1^(d')(F_1^(d')z-F_3^(d')x)dV.
Now the whole model of 3D hammer-string coupling has been established.
The felt-string interaction force F_i^(e) in (<ref>)
is computed from the 3D hammer felt model with Dirichlet boundary
conditions (<ref>) (<ref>),
and added to the rhs of (<ref>) as a non-conservative force.
Then, (<ref>) and (<ref>) form 2 sets
of coupled equations for 2 sets of variables (u^(a),θ),
which provides a foundation for obtaining a solution theoretically.
Despite the linearity of 3D elasticity model, the felt's force on
the string is still nonlinear because of the interaction judging condition.
§.§ Model for string-soundboard coupling
Piano strings are coupled to the soundboard's bridge part, which terminates
and transmits string vibration to the soundboard. Figure <ref>
visualizes this coupling at the bridge from 3 perspectives, where
xyz and x'y'z' are the coordinate systems for soundboard and
string respectively. Converting a vector (not a point) v
in the xyz system into v' in the x'y'z' system,
according to the angles α and β marked in figure <ref>,
yields v'=Rv where R=R_1R_2
and
R_1=[[ cosα 0 sinα; 0 1 0; -sinα 0 cosα ]], R_2=[[ cosβ sinβ 0; -sinβ cosβ 0; 0 0 1 ]]
are the rotation matrices. The orthogonal property of rotation matrix
yields v=R^⊤v'.
We conjecture from observations that the string-soundboard coupling
is achieved via several mechanisms. The first mechanism is “bridge
hump coupling”, as illustrated in the side perspective of figure
<ref>. The middle part of bridge is constructed a bit
higher than the agraffe, the starting point of the speaking string.
As a result, the string experiences some upward pressure from the
bridge and the bridge experiences some downward pressure from the
string, both statically and dynamically. The second mechanism is “bridge
pin horizontal coupling”, as illustrated in the top perspective
of figure <ref>. Two bridge pins for one string are drilled
into the bridge at positions that would bend the string a bit horizontally,
bringing mutual horizontal tension between the string and bridge,
and restricting the string's horizontal movement. Made of hard metal
like steel, brass or even titanium, the bridge pins can be treated
as rigid bodies with zero strain and stress. The third mechanism is
“bridge pin vertical coupling”, as illustrated in the front perspective
of figure <ref>. The two bridge pins lean towards their
respective string sides, blocking the string from moving upwards beyond
the pins. Also, the notches of bridge ensure that bridge hump coupling
and bridge pin coupling occur at almost the same position, viz. the
front bridge pin (the one closer to the agraffe).
On the string's side, the coupling point is defined as x_1^(a)=[L_1^(a),0,-r^(a)]^⊤,
the lowest point of string at the front pin. While other relevant
points for coupling may be [L_1^(a),-r^(a),0]^⊤, the
closest point of string to the bridge pin, the string's displacements
and surface forces at these points are identical, assuming u^(a)
does not vary with y^(a) or z^(a) at x^(a)=L_1^(a)
(no rotation) for simplicity[This is also true for the string's central point x_1^(a)=[L_1^(a),0,0]^⊤
previously defined.]. The coordinate of this coupling point on the soundboard's side is
denoted x_1^(b).
§.§.§ Static coupling
For coupling before motion is initiated, the string and soundboard
should have non-zero prestress fields that attain static balance.
It is sufficient to specify that the string prestress only acts perpendicular
to its cross-sections and is constant through its speaking length.
This means in the string's tension matrix T^(a),
T_11 is a constant and other entries are zero, immediately satisfying
the static balance. For the soundboard, a full space dependent tension
matrix T^(b) with 6 unique elements is necessary.
The static balance condition can then be expressed as 3 partial differential
equations
∇·T^(b)=0,
which can be solved numerically by FEM using nodal surface forces
as time-independent DOFs. Nonetheless, continuity between string and
soundboard tension fields is required. To write the continuity equations,
we define an intermediate coordinate system x”y”z” resulting
from rotating the x'y'z' system by R_1^⊤
or rotating the xyz system by R_2, as marked
blue in figure <ref>. At the coupling point, the string's
tension on surfaces with outward normals in the y” and z”
axes exert on the bridge, whereas that in the x” axis exerts on
the farther hitch pins that seem to not connect the soundboard. This
leads to one Dirichlet boundary condition for each string coupled
to the soundboard as
(R_2T^(b)(x_1^(b))R_2^⊤)_[22,33,12,13,23]=(R_1^⊤T^(a)(x_1^(a))R_1)_[22,33,12,13,23],
where subscript _[i_1,i_2,...] means taking vector or matrix
elements at indices i_1,i_2,... to form a vector. This condition
can be deduced into 5 equations, leaving 1 DOF for the coupling point.
§.§.§ Dynamic coupling
For coupling in motion, the string transmits its surface forces (dynamic
prestress, stress and damping force) to the soundboard. At coupling
point, the string's total surface force, excluding the static prestress
which is already accounted for on the soundboard's side, is G^(a)(x_1,t).
Similar to static coupling, only forces on surfaces with outward normals
in the y” and z” axes are transmitted to the soundboard at
the coupling point. This leads to
F^(b)(x^(b),t)=δ(x^(b)-x_1^(b))(R_2^⊤[[ 0 Z_12 Z_13; Z_12 Z_22 Z_23; Z_13 Z_23 Z_33 ]]R_2)[[ 1; 1; 1 ]],
{ Z_ij} =R_1^⊤G^(a)(x_1^(a),t)R_1,
where δ(x) is a 3D Dirac delta function indicating
a point load. The above equations treats the string's surface force
as a non-conservative force input to the soundboard system.
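For illustration, this transmission can be evaluated compactly with NumPy. The sketch below assumes the string's surface-force tensor at the coupling point is available as a symmetric 3×3 array; the Dirac delta is left to the FEM load assembly, so only the point-load amplitude is returned, and the function name is ours.

```python
import numpy as np

def bridge_force(G_a, alpha, beta):
    """Point-load amplitude transmitted from the string to the soundboard.

    G_a         : 3x3 (symmetric) string surface-force tensor at the coupling point;
    alpha, beta : angles between the soundboard and string coordinate systems."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    R1 = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    R2 = np.array([[cb, sb, 0.0], [-sb, cb, 0.0], [0.0, 0.0, 1.0]])
    Z = (R1.T @ G_a @ R1).copy()
    Z[0, 0] = 0.0            # the Z_11 component is not transmitted
    return (R2.T @ Z @ R2) @ np.ones(3)
```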
Another coupling condition arising from observation is that the string
and soundboard should have the same y” and z” direction displacements[Here displacement continuity should be equivalent to velocity continuity,
since both the string's and soundboard's displacements are based on
a [0,0,0]^⊤ point.] at the coupling point, written as
(R_1^⊤∂_tu^(a)(x_1^(a),t))_[2,3]=(R_2∂_tu^(b)(x_1^(b),t))_[2,3].
The reason for this is that the aforementioned three coupling
mechanisms constitute support for the string's surfaces with outward
normals in the y” and z” axes at the coupling point. This
may not hold for the x” axis, but if it is needed[Maybe in case of high enough viscosity or friction, the string cannot
slip more than the soundboard in the x” direction. Restricting
this movement may also make the solution of string's PDE more stable.
This needs to be tested.], the subscript _[2,3] in above equations can be removed to impose
a stronger displacement continuity condition. Anyway, restriction
of the string's x” axis (longitudinal) motion is always present
at the farther hitch point which may not connect the soundboard, see
(<ref>). The same-displacement condition also
provides a basis for establishing the string's displacement at the
coupling point (Dirichlet boundary condition), so that the string's
PDE (<ref>) has an appropriate solution.
§.§.§ More discussions on coupling
String-soundboard coupling exhibits complex mechanisms that we find
challenging to discover and describe. Some of these mechanisms that
we observe but not covered in the aforementioned coupling model are
discussed in this subsection. These discussions may not cover much
details of computation due to their complexities.
Firstly, we only specified the coupling point at the bridge's front
pin but ignored the rear pin. As coupling at the rear pin position
potentially exists, the string in our model may need to be extended
in length to cover the segment between the front pin position x_1=[L_1^(a),0,-r^(a)]^⊤
and the rear pin position x_2=[L_2^(a),0,-r^(a)]^⊤
(we call it “this segment” in this paragraph). At this segment,
the horizontal bending of the string may be accounted for in the prestress,
but not necessarily represented in the geometrical volume for simplicity.
In static state, the prestress field coupling should be computed for
this segment paying attention that only at the two pin points are
y” axis surface forces coupled. In dynamic state, transmission
of the string's surface forces should be computed for this segment,
paying attention that only at the two pin points are y” axis surface
forces transmitted. A maybe more accurate computation of surface force
transmission is, that at the two pins only when the string's y”
axis surface forces are towards the front or rear pins should they
be transmitted, and that at between (excluding) the two pins only
when the string's z” axis surface forces are upwards should they
be transmitted. This may lead to a highly nonlinear function akin
to the case of hammer-string interaction (<ref>).
For displacement coupling at this segment, at the two pins the
string's y” direction displacements should not exceed the pins,
whereas the z” direction displacements are fully coupled to the
pins; between (excluding) the two pins the string's z” direction displacements
should not be under the soundboard plane, whereas the y” direction
displacements are unrestricted.
Secondly, we considered coupling as occurring at a point, but it may
actually occur in a small contact surface (we call it “this surface”
in this paragraph). In such a case, the Dirichlet boundary condition
of prestress field coupling should be specified over this surface;
the dynamic surface force transmission should be treated as surface
load rather than point load; the Dirichlet boundary condition of displacement
coupling should also be specified over this surface.
Thirdly, we still lack geometric details regarding the string's notable
vertical and horizontal bending at the bridge. This bending changes
the string's longitudinal direction, and thereby the 3 directions
of vibration. Though string vibration should be terminated at the
bridge pin, it seems only at the hitch pin that longitudinal vibration
is fully restricted. Therefore, the string's segment from front bridge
pin to hitch pin may require investigation, which may concern the
duplex scale phenomena <cit.>. We can design
a multi-segment geometric model for the string, each segment having
differently rotated constitutive matrices. Also, the position-dependent
static tension can be specified parallel to the central line of each
string segment.
Finally, we ignored the soundboard's normal and tangent surface forces
exerting on the string at coupling point. This is similar to the case
of hammer-string coupling where the string's surface forces exerting
on the hammer are ignored. This is mainly for practical considerations,
as we would want to solve the soundboard's equations for only once.
Since we did not impose Dirichlet boundary condition for the soundboard
at each coupling point with the around 200 strings, Neumann boundary
conditions apply here restricting the soundboard's conservative surface
forces to be zero at coupling points. It may be acceptable given the
intuitive feeling that “string → soundboard” transmission should
dominate “soundboard → string” transmission. If the latter is
essential, some DOFs may need to be removed from the coupling point
on the soundboard's side, which is a challenge for imposing Dirichlet
boundary condition reasonably for both the string and the soundboard.
§.§ Model for soundboard-air and room-air coupling
Modelling soundboard-air and room-air coupling is crucial for arriving
at the final digital audio sound to the listeners. Generally in the
context of solid-fluid coupling dynamics, the interaction surface
should have continuous normal and tangential velocities, as well as
continuous normal and tangential surface forces on both sides <cit.>.
As per section <ref>, room-air
coupling occurs on surface Γ_1^(f) (room side) and Γ_1^(c)
(air side). Define coordinate transformation x^(f)=r_1+R_1x^(c)
from air coordinates to soundboard coordinates, where r_1
and R_1 are the shift vector and rotation matrix.
We impose Dirichlet boundary condition of velocity continuity that
u^(c)(x^(c),t)=R_1^⊤u̇^(f)(x^(f),t), x^(c)∈Γ_1^(c), x^(f)=r_1+R_1x^(c).
We also specify the contribution of air surface force to the room
equation's source term on the rhs of (<ref>) as
F_1^(f)(x^(f),t)=-G^(c)(x^(c),t)n^(c)(x^(c)), x^(f)∈Γ_1^(f), x^(c)=-R_1^⊤r_1+R_1^⊤x^(f),
where n^(c)(x^(c)) is the outward
normal of the tangent plane of x^(c).
As per sections <ref> and <ref>,
soundboard-air coupling occurs on surface Γ_2^(b) (soundboard
side) and Γ_2^(c) (air side). Define coordinate transformation
x^(b)=r_2+R_2x^(c)
from air coordinates to soundboard coordinates, where r_2
and R_2 are the shift vector and rotation matrix.
We impose Dirichlet boundary condition of velocity continuity that
u^(c)(x^(c),t)=R_2^⊤u̇^(b)(x^(b),t), x^(c)∈Γ_2^(c), x^(b)=r_2+R_2x^(c).
We also specify the contribution of air surface force to the soundboard
equation's source term on the rhs of (<ref>) as
F_1^(b)(x^(b),t)=-G^(c)(x^(c),t)n^(c)(x^(c)), x^(b)∈Γ_2^(b), x^(c)=-R_2^⊤r_2+R_2^⊤x^(b),
where n^(c)(x^(c)) is the outward
normal of the tangent plane of x^(c).
§ NUMERIC SCHEMES
§.§ Modal superposition method for solving coupled ODEs
§.§.§ Modal transformation of second-order ODEs
For solving coupled second-order ODEs like (<ref>),
time discretization using finite-difference is a direct approach.
However, given that damping matrix is diagonalizable by the mass and
stiffness matrices, decoupling and dimension reduction of the system
by means of modal superposition is preferred. To do so, we first define
the following eigenvalue problem
Kϕ_i=λ_iMϕ_i⟺M^-1Kϕ_i=λ_iϕ_i (i=1,...,N),
and the eigen decomposition
K=MΦΛΦ^-1⟺M^-1K=ΦΛΦ^-1,
where λ_i is a complex eigenvalue, ϕ_i
is a N×1 complex eigenvector, Λ=diag(λ_1,...,λ_N)
is a diagonal matrix of eigenvalues, Φ=[ϕ_1,...,ϕ_N]
is a N× N matrix of eigenvectors. Note that since both M
and K are sparse matrices with large dimensions, it
is preferable to solve the generalized eigenvalue problem on the
left side of (<ref>), avoiding
explicit inversion of M that would otherwise result
in a large dense matrix. We can leverage existing sparse eigensolver
software like ARPACK and FEAST to solve the generalized eigenvalue
problem. Nevertheless, for solving ODEs of the acoustic system with
large DOFs, it may be unwise to store all rows of eigenvectors at
the same time which may otherwise lead to memory overload. A re-implementation
of existing sparse eigensolver algorithms may be desired so as to
store partial rows or columns of eigenvectors in a “rolling” way.
Since eigenvectors are linearly independent, that is to say Φ
is invertible, we can write the solution in the form ξ(t)=Φq(t)
where q(t) is an N×1 vector we call modal
DOFs. Note here ξ(t) should be real (in the complex
domain), but q(t) is complex because Φ^-1
is complex. Then (<ref>) can be decoupled as
MΦq̈(t)+2μKΦq̇(t)+KΦq(t) =f(t)
Φ^HMΦq̈(t)+2μΦ^HKΦq̇(t)+Φ^HKΦq(t) =Φ^Hf(t)
M̅q̈(t)+2μM̅Λq̇(t)+M̅Λq(t) =Φ^Hf(t)
q̈(t)+2μΛq̇(t)+Λq(t) =p(t)
where
M̅=Φ^HMΦ, p(t)=M̅^-1Φ^Hf(t);
are called the modal mass matrix and the modal force vector respectively.
Through eigen decomposition, the N coupled ODEs have now been transformed
into N uncoupled ODEs.
It is well-understood that the analytical solution of the i th
uncoupled second-order ODE without the rhs nonhomogenous term is a
sinusoidal signal with a single eigenfrequency positively correlated
to the magnitude of the i th eigenvalue. Assuming eigenvalues in Λ
are ascendingly sorted by their real magitudes, we can select only
the lowest M eigenvalues and discard the rest, because most human
ears are insensitive to eigenfrequencies above a certain threshold
(often 10 kHz). This is also for practical considerations that the
number of DOFs is often too large for numeric computation, making
dimension reduction desirable. Consequently, we obtain a M×1
vector q'(t) with only the first M entries of q(t),
and the M+1 to N entries are all approximated as zeros and discarded.
Subsituting q(t) by q'(t) (more exactly,
q'(t) should be padded N-M zeros at the end) in
(<ref>), and omitting the last N-M
equations and unknowns, yields
M̅'q̈'(t)+2μM̅'Λ'q̇'(t)+M̅'Λ'q'(t) =Φ'^Hf(t)
q̈'(t)+2μΛ'q̇'(t)+Λ'q'(t) =p'(t)
where Λ' has dimension M× M containing
the first M eigenvalues and Φ' has dimension
N× M containing the first M eigenvectors. The large number
of DOFs is now approximated as the linear combination of a smaller
number of modal DOFs as ξ(t)≈Φ'q'(t).
The reduced modal mass and modal force are
M̅'=Φ'^HMΦ', p(t)=M̅'^-1Φ'^Hf(t).
For around 2500 modes of the soundboard system as estimated in <cit.>,
the fully dense modal mass matrix (128 bit complex type) would require
about 95 MB memory which is normally acceptable in both storage and
computation aspects. If the mass and stiffness matrices are both symmetric,
then eigenvectors can be normalized so that the modal mass matrix
equals to an identity matrix, making its inversion easier. However,
this benefit is not achievable in our piano model because the constitutive
matrix would lose its symmetry in the presence of prestress.
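As a sketch of how the truncated basis and the modal quantities might be computed in practice, SciPy's ARPACK wrapper can solve the generalized problem in shift-invert mode. The snippet below assumes M and K are assembled as scipy.sparse matrices and that K (i.e. K−σM with σ=0) can be factorized; the function name is illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

def modal_basis(M, K, num_modes):
    """Truncated modal basis and modal force transformation.

    Solves K phi = lambda M phi for the num_modes eigenvalues of smallest
    magnitude, then forms the reduced modal mass and the matrix S such that
    p(t) = S f(t)."""
    lam, Phi = spla.eigs(K, k=num_modes, M=M, sigma=0.0, which='LM')
    order = np.argsort(np.abs(lam))
    lam, Phi = lam[order], Phi[:, order]
    Mbar = Phi.conj().T @ (M @ Phi)            # reduced modal mass (dense, M x M)
    S = np.linalg.solve(Mbar, Phi.conj().T)    # (Phi^H M Phi)^{-1} Phi^H
    return lam, Phi, S
```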
Now we consider the analytical solution of the i th uncoupled nonhomogenous
ODE
q̈(t)+2μλq̇(t)+λ q(t)=p(t),
where the subscripts _i and superscripts ' are dropped for
convenience. Applying Laplace transform to this equation yields
s^2Q(s)+2μλ sQ(s)+λ Q(s) =P(s)+Q_0(s),
s^2G(s)+2μλ sG(s)+λ G(s)-G_0(s) =1
g̈(t)+2μλġ(t)+λ g(t) =δ(t)
where s is a complex variable, δ(t) is the Dirac delta
function; the initial parts of Laplace transforming derivatives are
defined as
Q_0(s) =sq(0)+q̇(0)+2μλ q(0)
G_0(s) =sg(0)+ġ(0)+2μλ g(0);
the well-known Green's function (frequency domain), the response to
a unit impulse, is defined as
G(s)=Q(s)[1+G_0(s)]/[P(s)+Q_0(s)]⇒ Q(s)=G(s)[P(s)+Q_0(s)]/[1+G_0(s)].
To solve g(t), we first specify that response should not exist
during zero and negative time, viz. g(t)=ġ(t)=0 for t≤0;
then notice when t>0, (<ref>) becomes
homogenous with solution
g(t)=C_1exp(z_1t)+C_2exp(z_2t), z_1,z_2=-μλ±√((μ^2λ-1)λ).
To determine complex constants C_1, C_2, we first notice
that g(t) should be continuous at t=0 in the presence of second
derivative in (<ref>), thus
g(0^+)=g(0)⇒ C_1+C_2=0.
Then, integrating (<ref>) over (-∞,+∞)
yields
∫_-∞^+∞[g̈(t)+2μλġ(t)+λ g(t)]dt=∫_-∞^+∞δ(t)dt
⇒ ∫_0^-^0^+g̈(t)dt=ġ(0^+)-ġ(0^-)=ġ(0^+)=1
⇒ C_1z_1+C_2z_2=1,
which is the so-called jump discontinuity condition for first derivative;
the continuity condition for the zero-order has been applied again
here. From (<ref>) (<ref>)
the complex constants are found to be
C_1=1/(z_1-z_2), C_2=1/(z_2-z_1).
Subsituting g(0)=0 and ġ(0)=0 into (<ref>)
yields G_0(s)=0. Then from (<ref>)
we have
Q(s) =P(s)G(s)+[q̇(0)+2μλ q(0)]G(s)+sq(0)G(s)
q(t) =p(t)*g(t)+[q̇(0)+2μλ q(0)]g(t)+q(0)ġ(t),
where * is the convolution operator over [0,+∞). For the
general case of initial conditions q(0)=q̇(0)=0, the solution
reduces to q(t)=p(t)*g(t). It is now clear that the nonhomogenous
ODEs can be solved by convolving the modal force p(t) (source signal)
with the Green's function g(t) (response signal), which can be
efficiently computed using fast Fourier transform (FFT) convolution
in the frequency domain. However, the analytical solution may not
be actually useful or efficient in the presence of coupling between
systems and the explicit time discretization scheme introduced in
section <ref> will be an alternative.
Nevertheless, it provides an understanding of the characteristics
of vibration modes.
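A minimal sketch of this convolution solution for a single mode is given below, assuming zero initial conditions, a uniformly sampled modal force and distinct roots z_1≠z_2; the function name is ours and the FFT-based convolution is delegated to scipy.signal.fftconvolve.

```python
import numpy as np
from scipy.signal import fftconvolve

def modal_response(p, lam, mu, h):
    """Zero-initial-condition response of one uncoupled second-order modal ODE.

    p   : samples of the modal force p(t_n) on a uniform grid of spacing h;
    lam : (complex) eigenvalue of the mode; mu : damping parameter."""
    t = h * np.arange(len(p))
    root = np.sqrt((mu**2 * lam - 1.0) * lam + 0j)
    z1, z2 = -mu * lam + root, -mu * lam - root
    g = (np.exp(z1 * t) - np.exp(z2 * t)) / (z1 - z2)   # Green's function g(t), t >= 0
    return h * fftconvolve(p, g)[: len(p)]              # q(t) ~ (p * g)(t)
```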
§.§.§ Modal transformation of first-order ODEs
Solving coupled first-order ODEs like (<ref>) by means
of modal superposition is similar to the second-order case, and we
only discuss some particularities here. Firstly, the eigenvalue problem
is defined in the same way for M (now for first-order)
and K. The decoupled ODEs write
M̅q̇(t)+M̅Λq(t) =Φ^Hf(t)
q̇(t)+Λq(t) =p(t)
where
M̅=Φ^HMΦ, p(t)=M̅^-1Φ^Hf(t)
are the modal mass and the modal force. Specific attention should
be paid to the analytical solution of the i th uncoupled nonhomogenous
ODE
q̇(t)+λ q(t)=p(t).
Laplace transform yields
sQ(s)+λ Q(s) =P(s)+q(0),
sG(s)+λ G(s)-g(0) =1
ġ(t)+λ g(t) =δ(t)
where the Green's function is defined as
G(s)=Q(s)[1+g(0)]/[P(s)+q(0)]⇒ Q(s)=G(s)[P(s)+q(0)]/[1+g(0)].
Note that g(t)=ġ(t)=0 for t≤0 is still required, but
continuity g(0)=g(0^+) is unnecessary because the highest order
of derivative in (<ref>) is only one.
Nevertheless, continuity of ∫ g(t)dt at t=0 can
be easily satisfied because the primitive function can have an arbitrary
constant added. Then integrating (<ref>),
we can find g(0^+)=1 and the solution of Green's function
g(t)=exp(-λ t) for t>0.
It follows that
Q(s) =P(s)G(s)+q(0)G(s)
q(t) =p(t)*g(t)+q(0)g(t),
is the solution of the uncoupled nonhomogenous first-order ODE.
§.§ Explicit time discretization for coupling between systems
Having discussed the weak form ODEs for each subsystem of the physical
piano, our question is then how a numerical treatment of coupling
between systems may be achieved that attains a sensible tradeoff between
feasibility and accuracy. For most subsystems presented in the previous
sections, there exists an external force term on the rhs of PDEs.
This, along with the predetermined displacements on the boundary,
form the main contributions to the rhs source term of system ODEs.
Nevertheless, to say “ODEs” here is actually indefensible, because
in many coupling cases discussed before the rhs source term f(t)
depends linearly or nonlinearly on the lhs unknown DOFs ξ(t)
too. In such cases, eigen decomposition of “pseudo ODEs” can only
obtain decoupled operators on q(t) on the lhs,
but rhs operators on q(t) often remain coupled, not
only because of the nonlinearity of operators, but also because eigen
decomposition applies to the local subsystem but not the global system
where rhs sources come from. Hammer-string coupling is a typical example:
the string displacement depends on the felt's force, the felt's force
depends on its compression, but this compression depends on string
displacement. Another example is in soundboard-string coupling, the
soundboard displacement at the coupling point depends on the string's
surface force, the string's surface force depends on its displacement,
computing its displacement requires knowing the coupling point displacement
(Dirichlet boundary), but this displacement depends on the soundboard
displacement at the coupling point to satisfy the continuity condition.
An explanation for these seemingly odd relations is that a theoretically
sound way would seem to do computations as if all subsystems were
a single system, rather than to separate systems and reintroduce coupling
between them. This means all DOFs are integrated into a single vector
of DOFs, all mass (or stiffness) matrices are assembled into a bigger
one, displacements at coupling positions are expressed by the same
unique DOFs behind, in order for full coupling solutions. However,
this is often unacceptable not only because of the high costs of storage
and computation, but also because the inherent heterogeneities of
different subsystems, particularly the nonlinearity of hammer felt
compression, the first-order characteristic of acoustic system, the
different damping mechanisms, may actually not cohere well and may
even be tough to control in a single system. Therefore, we shall still
adopt the framework of separate systems with mutual coupling, and
seek for numeric schemes capable of achieving a level of accuracy
as close to full coupling schemes as possible.
The time domain scheme we shall introduce here is inspired by the
idea of velocity Verlet algorithms, but customized for our case of
modal-transformed first and second order ODEs, with second-order accuracy[Alternatively, one may consider transforming time domain into frequency
domain to explore coupling from the perspective of mobility <cit.>.]. Despite being lower-order compared to the complex time schemes in
<cit.>, our scheme
may be more efficient particularly in eliminating the need to invert
or solve big or dense matrices at each step, even when the mass matrix
is non-diagonal. Due to the existence of coupling, the analytical
solutions of first and second order ODEs discussed in section <ref>
may not be applicable here. Nevertheless, system decoupling and dimension
reduction via modal decomposition will be shown useful in improving
the efficiency of time-stepping algorithms.
§.§.§ Time stepping of second-order ODEs
In the previously derived second-order decoupled ODEs (<ref>),
the rhs source term can be rewritten as
q̈(t)+2μΛq̇(t)+Λq(t)=p(t)=r[t,q(t),q̇(t)],
where r incorporates all DOF-independent and DOF-dependent
contributions to the source term; DOF-dependent contributions consist
of two major sources: non-zero Dirichlet boundary conditions; non-conservative
force (but excluding damping force already on the lhs).
To perform time discretization, we define the discrete time interval
as h=Δ t, which can be chosen as 1/44100 seconds for producing
44.1 kHz digital audio; define the n th discrete point in time
as t_n=nh. The notion of discrete points in time form the basis
of time stepping algorithms, where integrations are performed between
time steps. An integral over a small interval [x_0,x_0+Δ x]
can be approximated as area of trapeziud
∫_x_0^x_0+hf(x)dx≈h/2[f(x_0)+f(x_0+h)],
which can achieve a relatively high accuracy, though f(x_0+h)
may not be known beforehand in some contexts. If we seek for a first-order
approximation of f(x_0+h), then
∫_x_0^x_0+hf(x)dx≈ hf(x_0)+h^2/2f'(x_0)
is a less accurate approximation. Utilizing these integration strategies,
we integrate (<ref>) over [t_n,t_n+1] to
get
q̇(t_n+1)-q̇(t_n) =-2μΛ∫_t_n^t_n+1q̇(t)dt-Λ∫_t_n^t_n+1q(t)dt+∫_t_n^t_n+1p(t)dt
q̇(t_n+1)-q̇(t_n) ≈-hμΛ[q̇(t_n)+q̇(t_n+1)]-h/2Λ[q(t_n)+q(t_n+1)]+hp(t_n)+h^2/2ṗ(t_n).
Also it is obvious that
q(t_n+1)=q(t_n)+∫_t_n^t_n+1q̇(t)dt≈q(t_n)+h/2[q̇(t_n)+q̇(t_n+1)].
The solution of (<ref>) (<ref>)
is
q̇(t_n+1) ≈Z_1^-1[Z_0q̇(t_n)-hΛq(t_n)+hp(t_n)+h^2/2ṗ(t_n)],
where
Z_0=I-(hμ+h^2/4)Λ, Z_1=I+(hμ+h^2/4)Λ
are diagonal matrices easy to invert. (<ref>)
(<ref>) reveal that q(t_n+1),
q̇(t_n+1) can be computed with second-order
accuracy using q(t_n), q̇(t_n),
p(t_n), ṗ(t_n), but without
using unknowns of the t_n+1 step. This means our time discretization
scheme is explicit, though it remains to discover its energy identity.
The most significant accuracy loss of this scheme seems to be the
use of (<ref>) rather than (<ref>)
in approximating the integral of p(t), where we would
expect the first-order approximation of p(t_n+1)
to be acceptable with small h. This actually implies that we rely
on some historical (t_n) displacement and velocity values to
predict the current (t_n+1) input sources, then the current input
sources are updated by the computed current displacement and velocity
values. This process can be repeated several times for one time step
if higher accuracy is desired, making it possible to use (<ref>)
rather than (<ref>) in approximating the integral
of p(t).
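A compact sketch of one update of this scheme is given below, with the retained eigenvalues stored as a vector so that Z_0 and Z_1 act elementwise; the function name is illustrative.

```python
def step_second_order(q, qdot, p_n, pdot_n, Lam, mu, h):
    """One explicit step of the truncated modal second-order system,
    using the source value p(t_n) and its derivative.

    q, qdot     : modal DOFs and their derivatives at t_n (length-M arrays);
    p_n, pdot_n : modal force and its derivative at t_n;
    Lam         : vector of retained eigenvalues (diagonal of Lambda')."""
    a = (h * mu + h**2 / 4.0) * Lam
    qdot_next = ((1.0 - a) * qdot - h * Lam * q
                 + h * p_n + 0.5 * h**2 * pdot_n) / (1.0 + a)
    q_next = q + 0.5 * h * (qdot + qdot_next)
    return q_next, qdot_next
```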
§.§.§ Time stepping of first-order ODEs
Recall the previously derived first-order decoupled ODEs (<ref>);
the rhs source term can be rewritten as
q̇(t)+Λq(t)=p(t)=r[t,q(t)],
where r incorporates all DOF-independent and DOF-dependent
contributions to the source term; DOF-dependent contributions consist
of two major sources: non-zero Dirichlet boundary conditions; non-conservative
force (but excluding damping force already on the lhs). Integrating
(<ref>) over [t_n,t_n+1], the approximate
solution of q(t_n+1) can be found as
q(t_n+1)-q(t_n) =-Λ∫_t_n^t_n+1q(t)dt+∫_t_n^t_n+1p(t)dt
q(t_n+1)-q(t_n) ≈-h/2Λ[q(t_n)+q(t_n+1)]+hp(t_n)+h^2/2ṗ(t_n)
q(t_n+1) ≈(I+h/2Λ)^-1[(I-h/2Λ)q(t_n)+hp(t_n)+h^2/2ṗ(t_n)].
Therefore, q(t_n+1) can be computed using q(t_n),
p(t_n), ṗ(t_n), but without
using unknowns of the t_n+1 step.
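The corresponding sketch for the first-order (acoustic) update is analogous (again only illustrative):

```python
def step_first_order(q, p_n, pdot_n, Lam, h):
    """One explicit step of the truncated modal first-order system."""
    return ((1.0 - 0.5 * h * Lam) * q + h * p_n
            + 0.5 * h**2 * pdot_n) / (1.0 + 0.5 * h * Lam)
```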
§.§.§ Time stepping of hammer shank rotation ODE
The equation of hammer shank motion (<ref>)
can be written in a more abstract way as
Iθ̈=-μθ̇+T(θ),
where T(θ) incorporates the contributions of gravity and felt
force to the total torque of shank, dependent on the zero-order value
of θ. Similar to second-order system ODEs, the time discretization
of (<ref>) is found to be
I[θ̇(t_n+1)-θ̇(t_n)] =-μ∫_t_n^t_n+1θ̇dt+∫_t_n^t_n+1T(θ)dt
I[θ̇(t_n+1)-θ̇(t_n)] ≈-h/2μ[θ̇(t_n)+θ̇(t_n+1)]+hT(t_n)+h^2/2Ṫ(t_n)
θ̇(t_n+1) ≈(I+h/2μ)^-1[(I-h/2μ)θ̇(t_n)+hT(t_n)+h^2/2Ṫ(t_n)],
and
θ(t_n+1)≈θ(t_n)+h/2[θ̇(t_n)+θ̇(t_n+1)].
Therefore, θ(t_n+1) and θ̇(t_n+1) can be
computed using θ(t_n), θ̇(t_n), T(t_n),
Ṫ(t_n), but without using unknowns of the t_n+1 step.
§.§ Concatenating the whole model
With modal transformation and time discretization of the ODEs established,
it now suffices to combine all parts of the piano model together.
The whole computation process consists of two major separate parts:
first do space discretization, then do time discretization. Rooted
in the time-space separation idea of FEM, these two numeric works
do not interfere so computation costs should be acceptable.
§.§.§ Space discretization
Derive ODEs for each subsystem.
Mξ̈(t)+Cξ̇(t)+Kξ(t)=f(t), superscripts:a,b,e,f
Mξ̇(t)+Kξ(t)=f(t), superscripts:c
θ̈=-μθ̇+T(θ), superscripts:d
Solve generalized eigenvalue problems.
Only find the lowest M eigenvalues and corresponding eigenvectors,
and integrate them into Λ, Φ.
Kϕ_i=λ_iMϕ_i, i=1,...,M, superscripts:a,b,c,e,f
Compute modal force transformation matrix (dense).
This is useful for transformation p(t)=Sf(t).
S=(Φ^HMΦ)^-1Φ^H, superscripts:a,b,c,e,f
§.§.§ Time discretization
Update schemes for modal DOFs
Below lists the previously introduced time discretization shemes to
be later referred to. For each time step n, schemes 1a/2a/3a are
preferred over 1b/2b/3b whenever possible, because they utilize values
of the rhs source term for the next time step n+1.
* Update scheme 1a:
q(t_n+1)=(I+h/2Λ)^-1[(I-h/2Λ)q(t_n)+h/2p(t_n)+h/2p(t_n+1)]
* Update scheme 1b:
q(t_n+1)=(I+h/2Λ)^-1[(I-h/2Λ)q(t_n)+hp(t_n)+h^2/2ṗ(t_n)]
* Update scheme 2a:
q̇(t_n+1) =Z_1^-1[Z_0q̇(t_n)-hΛq(t_n)+h/2p(t_n)+h/2p(t_n+1)]
q(t_n+1) =q(t_n)+h/2[q̇(t_n)+q̇(t_n+1)]
* Update scheme 2b:
q̇(t_n+1) =Z_1^-1[Z_0q̇(t_n)-hΛq(t_n)+hp(t_n)+h^2/2ṗ(t_n)]
q(t_n+1) =q(t_n)+h/2[q̇(t_n)+q̇(t_n+1)]
* Update scheme 3a:
θ̇(t_n+1) =(I+h/2μ)^-1[(I-h/2μ)θ̇(t_n)+h/2T(t_n)+h/2T(t_n+1)]
θ(t_n+1) =θ(t_n)+h/2[θ̇(t_n)+θ̇(t_n+1)]
* Update scheme 3b:
θ̇(t_n+1) =(I+h/2μ)^-1[(I-h/2μ)θ̇(t_n)+hT(t_n)+h^2/2Ṫ(t_n)]
θ(t_n+1) =θ(t_n)+h/2[θ̇(t_n)+θ̇(t_n+1)]
Initialization
* Discrete time interval h, number of time steps n_1.
* Hammer shank: initial angle and angular velocity θ^(d')(t_0),
θ̇^(d')(t_0), from which the initial torque T^(d')(t_0)
and its derivative Ṫ^(d')(t_0) (gravity contribution
only) can be computed.
* Hammer felt & string & soundboard & air & room barriers: initial
modal DOFs and modal forces
q^(*)(t_0)=q̇^(*)(t_0)=p^(*)(t_0)=ṗ^(*)(t_0)=0, *=a,b,c,e”,f
At each time step n=0,1,2,...,n_1
* Compute the next rotation of hammer shank using scheme 3b:
θ^(d')(t_n),θ̇^(d')(t_n),T^(d')(t_n),Ṫ^(d')(t_n)→θ^(d')(t_n+1),θ̇^(d')(t_n+1)
* Compute the current modal forces of hammer felt (non-zero boundary
condition) using (<ref>):
θ^(d')(t_n),θ̇^(d')(t_n),q^(a)(t_n),q̇^(a)(t_n)→p^(e”)(t_n),ṗ^(e”)(t_n)
Note: p^(e”)(t_n) relates to the displacement
of felt at the contact points with string.
* Compute the next modal DOFs of hammer felt using scheme 2b:
q^(e”)(t_n),q̇^(e”)(t_n),p^(e”)(t_n),ṗ^(e”)(t_n)→q^(e”)(t_n+1),q̇^(e”)(t_n+1)
* Compute the next torque of hammer shank using (<ref>):
θ^(d')(t_n+1),θ̇^(d')(t_n+1),q^(e”)(t_n+1),q̇^(e”)(t_n+1)→ T^(d')(t_n+1),Ṫ^(d')(t_n+1)
* Compute the next modal forces of string (non-conservative force) using
(<ref>):
q^(e”)(t_n+1)→p_1^(a)(t_n+1)
Note: p_1^(a) relates to hammer felt force, p_2^(a)
relates to bridge point displacement.
* Compute the next modal DOFs of string using scheme 2a for p_1^(a)
and scheme 2b for p_2^(a):
q^(a)(t_n),q̇^(a)(t_n),p_1^(a)(t_n),p_1^(a)(t_n+1),p_2^(a)(t_n),ṗ_2^(a)(t_n)→q^(a)(t_n+1),q̇^(a)(t_n+1)
* Compute the next modal forces of soundboard (non-conservative force)
using (<ref>):
q^(a)(t_n+1)→p_1^(b)(t_n+1)
Note: p_1^(b) relates to string force at the bridge,
p_2^(b) relates to air pressure on the soundboard.
* Compute the next modal DOFs of soundboard using scheme 2a for p_1^(b)
and scheme 2b for p_2^(b):
q^(b)(t_n),q̇^(b)(t_n),p_1^(b)(t_n),p_1^(b)(t_n+1),p_2^(b)(t_n),ṗ_2^(b)(t_n)→q^(b)(t_n+1),q̇^(b)(t_n+1)
* Compute the next modal forces of air (non-zero boundary condition)
using (<ref>):
q^(b)(t_n+1)→p_1^(c)(t_n+1)
Note: p_1^(c) relates to displacements at the
interface with soundboard, p_2^(c) relates to
displacements at interface with room barriers.
* Compute the next modal DOFs of air using scheme 1a for p_1^(c)
and scheme 1b for p_2^(c):
q^(c)(t_n),p_1^(c)(t_n),p_1^(c)(t_n+1),p_2^(c)(t_n),ṗ_2^(c)(t_n)→q^(c)(t_n+1)
* Compute the next modal forces of soundboard (non-conservative force)
using (<ref>):
q^(c)(t_n+1)→p_2^(b)(t_n+1),ṗ_2^(b)(t_n+1)
* Compute the next modal forces of room barriers (non-conservative force)
using (<ref>):
q^(c)(t_n+1)→p^(f)(t_n+1)
Note: p^(f)(t_n+1) relates to air pressure on
the room barriers.
* Compute the next modal DOFs of room barriers using scheme 2a:
q^(f)(t_n),q̇^(f)(t_n),p^(f)(t_n),p^(f)(t_n+1)→q^(f)(t_n+1),q̇^(f)(t_n+1)
* Compute the next modal forces of air (non-zero boundary condition)
using (<ref>):
q^(f)(t_n+1),q̇^(f)(t_n+1)→p_2^(c)(t_n+1),ṗ_2^(c)(t_n+1)
* If higher accuracy is desired, repeat the above steps 1 to 14 several
times. Note that different from the first iteration, the subsequent
iterations always use schemes 1a, 2a, 3a and do not need to use schemes
1b, 2b, 3b. If no more repetition is needed, end the current time
step and move on to the next time step.
Audio output
Compute the acoustic pressure signals at certain listening positions
using the stored modal DOFs q^(c)(t_n) (n=0,...,n_1)
and modal superposition. These signals are the final output digital
audio of piano simulation model.
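As a sketch, this reconstruction is a fixed linear combination of the stored modal DOF history. The snippet below assumes the pressure shape functions φ_4 evaluated at the listening position, the selection matrix S_4 and the retained eigenvectors Φ' of the acoustic system are available as arrays, and it omits the known boundary contribution S_0,4ξ_0 for brevity; the function name is ours.

```python
import numpy as np

def pressure_signal(phi4_x, S4, Phi_red, Q_hist):
    """Acoustic pressure at one listening position from the modal DOF history.

    phi4_x  : pressure shape functions evaluated at the listener position;
    S4      : selection matrix from the space discretization;
    Phi_red : retained eigenvectors of the acoustic system (N x M);
    Q_hist  : stored modal DOFs q^(c)(t_n), shape (n_steps, M)."""
    weights = phi4_x @ S4 @ Phi_red      # fixed spatial weights, computed once
    return np.real(Q_hist @ weights)     # one pressure sample per time step
```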
§ CONCLUSION
This paper presented a detailed physical model for simulating acoustic
piano sounds. For solid parts of the piano system, viz. strings, soundboard,
room barriers, hammer felt, a 3D prestressed elasticity model is generally
applied. For fluid parts of the piano system, viz. sound radiation
in the air, conservation of mass and the Navier-Stokes equation are applied.
For coupling between different subsystems of the piano, mechanisms
of surface force transmission and displacement/velocity continuity
are considered. For numeric simulation, modal superposition and explicit
time discretization schemes are utilized. Despite the complexity of
this whole piano model, we have strived for a straightforward
presentation based more on system ODEs transformed from strong PDEs,
as well as a time domain simulation scheme balancing efficiency and
accuracy.
Below we discuss the current study's limitations and our plans or recommendations
for future research.
* Waiting for numeric simulation results. Due to the complexity of
our piano model, we choose to write down theoretical models first as
a guiding framework. Our next step involves implementing the computation
procedures using high-performance, expressive and well-structured
programming languages like Rust, as well as performing result analysis
using convenient and ecologically rich programming languages like
Python. Facing some unknown uncertainties in practice, our model needs
to be further tested and improved.
* In hammer felt-string coupling and string-soundboard coupling, the
contact was treated as occurring at a point rather than an area. This
simplification would probably lead to a loss in realism. But unlike
for soundboard-air coupling, the interface region is relatively small
for FEM meshes, thus special efforts may be required to realize surface
force transmission and displacement/velocity continuity along small
regions.
* The string-soundboard coupling mechanism was assumed of fixation but
not collision nature. A coupling model similar to the nonlinear hammer-string
interaction may be more suitable, considering that a string seems
to actually be supported between two distanced bridge pins. Here a
more general framework for collisions in musical instruments may be
applied <cit.> that may better be implemented
through implicit time schemes. It also remains to discover how the
relative positions of the two pins affect the vibrations of the string
and soundboard.
* Formulation of the piano model lacks some energy perspectives, since
the authors are currently unfamiliar with analyzing energy in dissipative
systems. Besides, a Lagrangian or Hamiltonian formulation of the model
may be a good complement to our Newtonian formulation, unveiling the
conservation and dynamics of kinetic and potential energy under the
least action and virtual work principles.
* Some observed phenomena of acoustic pianos are still missing their
representations in our model. For example, strings not struck by
the hammer may also vibrate as long as the sustain pedal is pressed,
which is often called sympathetic resonance. An explanation for this
is that the struck string transmits its vibration to other strings
through the variation of air pressure, which can actually lead to
a string-air coupling model. Also, the transmission of piano player's
key action into hammer shank movement requires further investigation.
* It remains to discover the relation between physical parameters of
the piano model and the objective (waveforms, spectra) and subjective
(listener feel) aspects of the final output sound, so that these parameters
can be tuned to approach realism or even obtain new sounds without
a real-world counterpart.
Finally, we hope this study could contribute to the understanding
of the vibroacoustics of a piano, and to the innovation of digital
musical instruments with desired characteristics of sound.
§ PRESTRAIN AND PRESTRESS
Consider 3 configurations of a material: natural → initial → current
<cit.>. In the natural configuration, no prestrain
and prestress is present. In the initial configuration, prestrain
and prestress exist. In the current configuration, strain and stress
emerge in addition to prestrain and prestress. Under the same xyz
coordinate system, define ℝ^3 vectors x̅,
x, x̂ as the coordinates of the
same material particle in the natural, initial and current configurations
respectively. The following relation holds:
x =x̅+u̅(x̅),
x̂ =x+u(x),
where u̅ and u are the displacement
vectors from natural to initial and from initial to current configurations
respectively. Denote deformation gradients V̂=∂x̂/∂x̅,
V=∂x̂/∂x
and V̅=∂x/∂x̅;
denote Jacobian matrices J=∇u, J̅=∇u̅.
We then have
V=I+J, V̅=I+J̅,
V̂=VV̅=(I+J)(I+J̅).
The Green-Lagrange strain tensor of current configuration with respect
to natural configuration is
Ê =1/2(V^⊤V-I)
=1/2(J̅+J̅^⊤+J̅^⊤J̅)+1/2(I+J̅^⊤)(J+J^⊤+J^⊤J)(I+J̅)
≈E=ϵ+E̅+(J̅^⊤ϵ+ϵJ̅+J̅^⊤ϵJ̅),
where E̅=1/2(J̅+J̅^⊤+J̅^⊤J̅)
is the prestrain tensor and ϵ=1/2(J+J^⊤)
is the strain tensor approximated as the well-known engineering strain;
the second-order 1/2J^⊤J
term is discarded for the sake of linearization. Assuming J̅={J̅_ij}
is symmetric, viz. no rigid body rotation, then we have eigen decomposition
J̅=Udiag(λ_1,λ_2,λ_3)U^⊤
and
J̅+1/2J̅^2 =E̅
U[[ λ_1+1/2λ_1^2; λ_2+1/2λ_2^2; λ_3+1/2λ_3^2 ]]U^⊤ =U[[ λ_1'; λ_2'; λ_3' ]]U^⊤,
from which λ_i=√(2λ_i'+1)-1 is the admissible solution
and J̅ can be computed per eigen decomposition.
The uniqueness of this solution stems from the fact that the principal stretches
λ_i'≥-1 should hold for normal cases, viz. no negative
compression, and λ_i should be close to λ_i'
to be consistent with the case of ignoring 1/2J̅^2.
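As a side note, the recovery of J̅ from a given prestrain tensor E̅ described above is straightforward to implement; the following NumPy sketch (our own minimal illustration, assuming the symmetric, rotation-free J̅ discussed here) applies the eigendecomposition and the admissible root of the per-eigenvalue quadratic.

```python
import numpy as np

def jbar_from_prestrain(Ebar):
    """Recover the symmetric prestrain Jacobian Jbar from Ebar = Jbar + Jbar^2/2."""
    lam_p, U = np.linalg.eigh(Ebar)            # eigenvalues lambda_i' and eigenvectors
    lam = np.sqrt(2.0 * lam_p + 1.0) - 1.0     # admissible root of lambda + lambda^2/2 = lambda'
    return U @ np.diag(lam) @ U.T

# self-check: Jbar + Jbar^2/2 should reproduce Ebar
Ebar = np.array([[0.02, 0.005, 0.0],
                 [0.005, -0.01, 0.0],
                 [0.0, 0.0, 0.03]])
Jbar = jbar_from_prestrain(Ebar)
assert np.allclose(Jbar + 0.5 * Jbar @ Jbar, Ebar)
```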
If the prestrain is not large enough to induce non-negligible geometric
nonlinearity, 1/2J̅^⊤J̅
can be disregarded and J̅=E̅
simply holds. But generally for string instruments like piano, prestrain
may be large and even dominate the post-strain, thus we should cover
the geometric nonlinearity of prestrain. This, however, would not
necessarily make (<ref>) nonlinear with respect
to J, because strain deformation is normally small
enough to make 1/2J^⊤J negligible.
Consequently, with J̅ and E̅
known, the prestrain model (<ref>) is reasonably
linear with respect to the unknowns. Converting it into vector form,
we find
Ê ≈E=ϵ+E̅+Ψϵ,
Ψ =[[ J̅_11(J̅_11+2) J̅_12^2 J̅_13^2 J̅_12(J̅_11+1) J̅_13(J̅_11+1) J̅_12J̅_13; J̅_12^2 J̅_22(J̅_22+2) J̅_23^2 J̅_12(J̅_22+1) J̅_12J̅_23 J̅_23(J̅_22+1); J̅_13^2 J̅_23^2 J̅_33(J̅_33+2) J̅_13J̅_23 J̅_13(J̅_33+1) J̅_23(J̅_33+1); 2J̅_12(J̅_11+1) 2J̅_12(J̅_22+1) 2J̅_13J̅_23 J̅_11J̅_22+J̅_11+J̅_12^2+J̅_22 J̅_11J̅_23+J̅_12J̅_13+J̅_23 J̅_12J̅_23+J̅_13J̅_22+J̅_13; 2J̅_13(J̅_11+1) 2J̅_12J̅_23 2J̅_13(J̅_33+1) J̅_11J̅_23+J̅_12J̅_13+J̅_23 J̅_11J̅_33+J̅_11+J̅_13^2+J̅_33 J̅_12J̅_33+J̅_12+J̅_13J̅_23; 2J̅_12J̅_13 2J̅_23(J̅_22+1) 2J̅_23(J̅_33+1) J̅_12J̅_23+J̅_13J̅_22+J̅_13 J̅_12J̅_33+J̅_12+J̅_13J̅_23 J̅_22J̅_33+J̅_22+J̅_23^2+J̅_33 ]]
where all the vectors are defined similar to (<ref>).
It is now straightforward to see that the prestrain acts as a linear transformation
(addition and scaling) of the strain at a first approximation. However,
as in (<ref>) ϵ̅_ij(x̅)
is expressed in natural coordinates, converting it into initial coordinate
representations to align with ϵ
would be preferable. To this end, define the prestress vector σ̅(x̅)
and constitutive matrix D̅ similar to (<ref>),
then E̅=D̅^-1σ̅.
From (<ref>), an inverse mapping x̅=f(x)
should exist. If we let T(x)=σ̅(f(x)),
the vectorized tension field from (<ref>), and
substitute E̅=D̅^-1T(x)
into (<ref>), then the total strain Ê
can be expressed in only the initial configurations. The advantage
of this is not only avoiding finding the intricate natural coordinates,
but also making it straightforward to satisfy static equilibrium by the
condition ∇·T(x)=0.
Given the total strain vector, the total stress vector is
σ̂(x)=DÊ≈DE=σ+T+τ, τ=DΨϵ,
where τ is the dynamic prestress
vector as a secondary contribution of prestress to the total stress;
all vectors here can be converted to corresponding symmetric matrices.
The total stress here is the Cauchy stress, also known as the true
stress, which represents surface forces in the current configuration.
This may cause inconsistencies as σ̂
is still expressed in the initial configuration in (<ref>),
and a conversion to the current configuration may be inconvenient
to apply. Nevertheless, the differences from this should be negligible
under small deformations. It is now clear that the effect of prestress
can be seen as introducing a modified non-symmetric constitutive matrix
D(I+Ψ).
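For illustration, the coupling matrix Ψ need not be hard-coded entry by entry: it can be assembled numerically by applying the map ϵ↦J̅^⊤ϵ+ϵJ̅+J̅^⊤ϵJ̅ to the six Voigt basis strains. The sketch below does this, assuming the Voigt convention with engineering shear strains; the isotropic constitutive matrix is only a placeholder assumption used to demonstrate D(I+Ψ), not the soundboard material model of this work.

```python
import numpy as np

def voigt_to_tensor(e):
    # e = [e11, e22, e33, g12, g13, g23], engineering shears g_ij = 2*eps_ij
    e11, e22, e33, g12, g13, g23 = e
    return np.array([[e11, 0.5 * g12, 0.5 * g13],
                     [0.5 * g12, e22, 0.5 * g23],
                     [0.5 * g13, 0.5 * g23, e33]])

def tensor_to_voigt(t):
    return np.array([t[0, 0], t[1, 1], t[2, 2],
                     2 * t[0, 1], 2 * t[0, 2], 2 * t[1, 2]])

def prestrain_coupling(Jbar):
    """Assemble Psi such that E = eps + Ebar + Psi @ eps (Voigt form)."""
    Vbar = np.eye(3) + Jbar
    Psi = np.zeros((6, 6))
    for j in range(6):
        unit = np.zeros(6)
        unit[j] = 1.0
        eps = voigt_to_tensor(unit)
        extra = Vbar.T @ eps @ Vbar - eps   # = Jbar^T eps + eps Jbar + Jbar^T eps Jbar
        Psi[:, j] = tensor_to_voigt(extra)
    return Psi

def isotropic_D(E_mod, nu):
    """Placeholder isotropic constitutive matrix (engineering shear convention)."""
    lam = E_mod * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E_mod / (2 * (1 + nu))
    D = np.zeros((6, 6))
    D[:3, :3] = lam
    D[np.arange(3), np.arange(3)] += 2 * mu
    D[np.arange(3, 6), np.arange(3, 6)] = mu
    return D

# modified constitutive matrix:  D_mod = D (I + Psi)
# Psi = prestrain_coupling(Jbar); D_mod = isotropic_D(2.0e11, 0.3) @ (np.eye(6) + Psi)
```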
In the presence of large deformations and geometric nonlinearity,
it may be beneficial to use the second Piola-Kirchhoff stress tensor
(PK2) det(V̂)V̂^-1σ̂V̂^-⊤
<cit.>. This alternative symmetric stress tensor
is derived from the energy conjugate to the Green-Lagrange strain,
capable of representing stress in a pre-deformation configuration,
but seems less physically clear and interpretable compared to the
Cauchy stress tensor. If using PK2, we would seek for the initial
configuration representation as it is used throughout the whole piano
model, whereby the PK2 stress is det(V)V^-1σ̂V^-⊤,
which can be first-order or second-order approximated to reduce complexity.
Notice, however, that even when the prestrain is large, the total
stress is already represented in the initial configuration which exists
after deformation from the natural state, eliminating the need to
go back to the natural configuration. And also considering that the
“post” strain would normally be not too large, the difference
between the initial and current configurations should be under an
acceptable level. Therefore, the Cauchy stress tensor is appropriate
for our piano model. As for the choice between engineering strain
(linear) and Green-Lagrange strain (linear + nonlinear) in (<ref>),
it depends on whether prestrain or strain is large enough to induce
non-negligible geometric nonlinearity.
|
http://arxiv.org/abs/2409.02105v1 | 20240903175835 | Embedding theory contributions to average atom models for warm dense matter | [
"Sameen Yunus",
"David A. Strubbe"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"physics.comp-ph",
"physics.plasm-ph"
] |
[email protected]
Department of Physics, University of California, Merced, 5200 N. Lake Rd, Merced
§ ABSTRACT
Accurate modeling in the warm dense matter regime is a persistent challenge with the most detailed models such as quantum molecular dynamics and path integral Monte Carlo being immensely computationally expensive. Density functional theory (DFT)-based average atom models (AAM) offer significant speed-ups in calculation times while still retaining fair accuracy in evaluating equations of state, mean ionizations, and more. Despite their success, AAMs struggle to precisely account for electronic interactions – in particular, they do not account for effects on the kinetic energy arising from overlaps in neighboring atom densities. We aim to enhance these models by including such interactions via the non-additive kinetic potential v^nadd as in DFT embedding theories. v^nadd can be computed using Thomas-Fermi, von Weizsäcker, or more sophisticated kinetic energy functionals. The proposed model introduces v^nadd as a novel interaction term in existing ion-correlation models, which include interactions beyond the central atom. We have applied this model to hydrogen at 5 eV and densities ranging from 0.008 to 0.8 g/cm^3, and investigated the effects of v^nadd on electron densities, Kohn-Sham energy level shifts, mean ionization, and total energies.
Embedding theory contributions to average atom models for warm dense matter
David A. Strubbe
September 9, 2024
===========================================================================
§ INTRODUCTION
Accurate models of warm dense matter (WDM) can lead to an improved understanding of material properties in conditions relevant to many important phenomena. Perhaps most pressing of which is the modeling of inertial confinement fusion as fuel is compressed and heated on its way to ignition <cit.>. Warm dense plasmas also occur in the cores of giant planets <cit.>, brown dwarfs <cit.>, and white dwarf envelopes <cit.> leading to novel astrophysics that can be studied in laboratories and through simulations. Studying warm dense matter is challenging due to the strong degree of Coulomb coupling and quantum degeneracy that needs to be accounted for in simulations. Naturally, the complex intertwining of such physics poses a theoretical challenge in what kinds of models can accurately describe WDM – here density functional theory-based average atom models <cit.> have been very successful at providing accurate and computationally efficient models of the complex dense plasma environment. We briefly introduce the current state of average atom models and a contribution in the electronic structure that can improve existing models.
Average atom models (AAM) alleviate the computational expense of detailed ab initio molecular dynamics models <cit.> by considering spherically averaged ion spheres each of which contain a central nucleus and an associated electron density thus reducing the full electronic structure calculation to that for a single atom with spherical symmetry. The plasma environment is partitioned according to the mean ion density, n_I^0, by the Wigner-Seitz radius and the resulting average atom can be solved to produce accurate electronic properties:
R_WS = ( 3/4π n_I^0)^1/3.
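For orientation, the conversion from a given mass density to the mean ion density n_I^0 and the Wigner-Seitz radius is simple arithmetic; the short sketch below (constants and the example value are our own, for illustration only) evaluates the expression above for hydrogen.

```python
import numpy as np

BOHR_CM = 5.29177e-9        # Bohr radius in cm
M_H_G = 1.6735e-24          # hydrogen atom mass in g

def wigner_seitz_radius(rho_g_cm3, m_ion_g=M_H_G):
    """Wigner-Seitz radius in bohr from the mass density."""
    n_ion = rho_g_cm3 / m_ion_g                            # mean ion density in cm^-3
    r_ws_cm = (3.0 / (4.0 * np.pi * n_ion)) ** (1.0 / 3.0)
    return r_ws_cm / BOHR_CM

# e.g. hydrogen near solid density, 0.08 g/cm^3: wigner_seitz_radius(0.08) ~ 3.2 bohr
```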
The earliest AAMs <cit.> were built on the Thomas-Fermi approximation <cit.> of continuous electron density for treating extreme conditions and metallic electrons. These provided analytic solutions for such electron densities but had no description of states – these were introduced in the first Kohn-Sham (KS) DFT-based AAM by Liberman in 1979 <cit.>. This model has been very successful in describing dense plasma electronic properties and can reproduce shell structure in the density as opposed to the earlier density-only Thomas-Fermi-like prescriptions. The interacting ions in the plasma environment are approximated as spherically symmetric ion spheres of size R_WS and the spherical symmetry is leveraged to solve single-coordinate radial KS equations. This class of “ion sphere models" are applied widely for evaluating equations of state in the WDM regime <cit.>.
It should be noted that while these are very efficient and fairly accurate models, they ignore inter-cell interactions between ion spheres and these are only partially accounted for through boundary conditions, which distinguish the ion sphere model from purely isolated atom models. These include interactions with neighboring atoms that include some limited details of electron-electron interactions between neighboring cells but do not in general contain any information about ion-correlations. The ion-sphere model is fairly robust at relatively high temperatures and low densities, but it becomes less accurate as density increases and electron-electron and electron-ion interaction effects become more important. In addition, ion sphere models are also subject to arbitrary choices of boundary conditions at the Wigner-Seitz radius which impacts observables in the calculation <cit.>.
Several models have been introduced to include correlations beyond the central atom, known as “ion correlation models." The strength of these models lies in their ability to incorporate correlations beyond the ion sphere, with pair correlation functions evaluated either through a semi-classical hyper-netted chain model <cit.> or self-consistently through plasma closure relations <cit.>. These treat a larger correlation sphere of size R_corr, typically 5-6 times the Wigner-Seitz radius <cit.>. This radius is chosen such that the ion-ion correlations (g_II) and ion-electron correlations (g_Ie), which describe the distribution of particles around a reference particle, approach unity around r ≲ R_corr <cit.> to include the long-range interactions.
We propose a re-framing of the ion-correlation models as an embedding theory problem where a central reference atom is embedded in an averaged background plasma contribution based on ion-ion pair-correlation functions g_II. This introduces a new contribution in the electronic structure of the average atom which is the non-additive kinetic potential v^nadd arising from the orthogonality condition of overlapping orbitals between the embedded system and the environment. We use the Mermin finite-temperature DFT formalism <cit.> to construct such an embedding theory-average atom model and study the contributions to electron densities and KS eigenvalues in the Octopus real-space DFT code <cit.>. The main goals of this paper are to explore how this embedding contribution changes DFT observables compared to an average atom model without v^nadd. The model is presented in Sec. <ref>, with a brief introduction to embedding theory and approaches to calculating v^nadd in Secs. <ref> and <ref>. The practical implementation to warm dense hydrogen is discussed in Sec. <ref> with treatment of continuum electrons briefly described in Sec. <ref> and other computational parameters in Sec. <ref>. Finally, in Sec. <ref> we apply the model to present some effects arising due to the embedding contribution, and then conclude in Sec. <ref>.
§ THE PROPOSED MODEL AND PARAMETERS
§.§ Embedding theory and the non-additive kinetic potential
Embedding theory is a general method used in chemistry to derive detailed properties of molecules embedded in averaged solvent environments. This approach involves making a detailed quantum mechanical treatment of the central molecule or protein while considering an average potential of the surrounding solvent. In particular, we are considering frozen density embedding theory <cit.> in which the full density is partitioned into the embedded subsystem and the environment. This is somewhat reminiscent of the ion correlation models <cit.> where the central ion sphere is embedded in a large correlation sphere which acts as the average environment potential of the plasma. However, these models do not typically consider the electrons as an embedded subsystem and neglect a non-additive contribution to their kinetic energy that comes from partitioning the density between the central atom and environment – this is the non-additive kinetic potential bi-functional or v^nadd <cit.>. Starting from the ordinary KS equations for the electron density of single particles in the full system,
[ -∇^2/2 + V_ext(r) + V_H[n^tot](r) +
V_xc[n^tot(r)]
] ϕ_i(r) = ϵ_i ϕ_i(r),
we partition the total density of the system into the embedded subsystem and the environment as n_tot=n_sub+n_env:
[ -∇^2/2 + V_ext^sub(𝐫) + V_ext^env(𝐫) + V_H[n^sub + n^env](r) +
V_xc[n_sub+n_env](r) + .
. v^nadd[ n^sub, n^env](r) ] ϕ_i(r) = ϵ_i ϕ_i(r)
The external potential (V_ ext), independent of density, and the Hartree (V_ H) and exchange-correlation (V_ xc) potentials, functionals of the density, are straightforward to partition. However, the partitioning of the exact kinetic energy of single-particle states into an environment and subsystem yields an embedding potential that must be approximated. In principle, if we had an exact expression for the kinetic-energy functional T_s[n] we would obtain an exact orbital-free expression for the embedding potential, without any dependence on KS states. In Frozen Density Embedding Theory (FDET), the constrained search yields an exact orbital-free expression for such an embedding potential, relying solely on the electron density and no other properties of the environment <cit.>:
v^nadd[ n^sub, n^env](r) =
δ T^nadd_s[n^sub, n^env](r)/δ n^sub(r) =
.δ T_s[n](r)/δ n(r)|_n=n^sub+n^env -
.δ T_s[n](r)/δ n(r)|_n=n^sub.
This term arises from the functional derivative of the non-additive part of the kinetic energy bi-functional with respect to the subsystem density:
T^nadd_s[n^sub, n^env]=T_s[n^sub+n^env]-T_s[n^sub]-T_s[n^env].
In the limit of non-overlapping n^sub and n^env, T^nadd_s[n^sub, n^env]=0 and we recover the exact kinetic energy of a non-interacting system <cit.>.
An alternative, perhaps more intuitive way to conceptualize v^nadd, is to recognize that as we partition the system into overlapping densities, the orthogonality of eigenstates of the full system, due to the Pauli exclusion principle, must be maintained. This orthogonality constraint affects the curvature of the states especially near the regions of strong overlap which results in a contribution to the kinetic energy of the system that cannot be accounted for by the kinetic energies of states of the subsystems. Densities in the warm dense matter regime certainly enter into regions of strong orbital overlap, but average atom models do not have a prescription for explicitly including such effects in the electronic structure. Our goal with this work is to provide a v^nadd correction to existing AAMs and explore how this affects DFT observables across a range of densities and temperatures. This approach provides an approximate way of including hybridization in AAMs, which is not taken into account in existing models.
§.§ Kinetic energy functionals and their role in embedding theory
In principle, v^nadd should be exactly defined by an exact kinetic-energy functional T_s [n]; however, as with the XC functional, there is no known exact expression for this and approximations must be made (indicated by T̃_s). Kinetic energy functionals are much less developed than XC approximations but T̃_s approximations are ubiquitous in the orbital-free DFT literature for WDM <cit.>. The exact XC and kinetic energy functionals should include finite-temperature effects <cit.>, but our work so far uses only zero-temperature approximations to both. In our framework, we have applied the spherical symmetry of the average atom model to our embedding potential Eq. <ref>, which is reduced to just the radial coordinate r. We can now approximate the exact v^nadd of Eq. <ref> with our choice of approximate kinetic-energy functionals, T̃_s:
ṽ^nadd[ n^atom, n^env](r) =
.δT̃_s[n](r)/δ n(r)|_n=n^atom+n^env -
.δT̃_s[n](r)/δ n(r)|_n=n^atom.
A common starting point is the Thomas-Fermi kinetic energy functional <cit.> which is exact for the uniform electron gas and only depends on the local value of the density at any given point r:
T_s^TF[n(r)] = C^TF∫ n(r)^5/3 dr,
C^TF = 3/10 (3π^2)^2/3
The corresponding non-additive kinetic energy is then:
T̃_s^nadd(TF) [n^atom, n^env](r) = C^TF∫ (n^atom(r) + n^env(r))^5/3 - (n^atom(r))^5/3 - (n^env(r))^5/3 dr,
The non-additive kinetic potential bi-functional v^nadd is given by the functional derivative of T̃_s^nadd(TF) with respect to the density of the embedded subsystem (the central atom in our case):
ṽ^nadd(TF) [n^atom, n^env](r) = δ/δ n^atom(r)T̃_s^nadd(TF)[n^atom, n^env](r)
= 5/3 C^TF[ (n^atom(r) + n^env(r))^2/3 - (n^atom(r))^2/3]
Another common approximation with an analytical form is the von Weizsäcker functional <cit.> which is exact for one-electron or two-electron spin-compensated systems. This functional is equivalent to the analytically inverted potential from a one-orbital KS equation <cit.> and allows for an easy test case for the numerical implementation of our functionals since they are analytically solvable for hydrogen in the ground state. The exact von Weizsäcker kinetic functional
T_s^vW[n(r)] = ∫|∇_r n|^2/8n dr
yields the corresponding non-additive kinetic energy functional <cit.>,
T̃_s^nadd(vW)[n^atom, n^env](r) =
∫|∇ (n^atom+ n^env)|^2/8(n^atom+ n^env) dr -
∫|∇ n^atom|^2/8n^atom dr -
∫|∇ n^env|^2/8n^env dr
=-1/8∫|n^atom∇ n^env - n^env∇ n^atom|^2/n^atomn^env(n^atom+n^env) dr
and non-additive kinetic potential functional as follows after some manipulation of gradients:
ṽ^nadd(vW) [n^atom, n^env](r) = δ/δ n^atom(r)T̃_s^nadd(vW)[n^atom, n^env](r)
= |∇ (n^atom+n^env)|^2/8(n^atom+n^env)^2 - ∇^2 (n^atom+n^env)/4(n^atom+n^env) -
|∇ n^atom|^2/8(n^atom)^2 + ∇^2 n^atom/4n^atom
As mentioned, these are exact in the limits of a homogeneous electron gas (TF) and a one- or two-electron system (vW). The von Weizsäcker functional is shown <cit.> to provide a rigorous lower bound to the true kinetic energy and the Thomas-Fermi functional can be interpreted as a correction to this. For non-interacting particles in one spatial dimension (r) <cit.> the sum of these provides a rigorous upper bound:
T_s^vW[n] ≤ T_s ≤ T_s^vW[n] + T_s^TF[n].
On the other hand, the density-gradient expansion <cit.> yields the Thomas-Fermi term as the zeroth order with the von Weizsäcker term as the second-order term of the gradient expansion with an additional 1/9 coefficient:
T_s[n] = T_s^(0)[n] + T_s^(2)[n] + T_s^(4)[n] +...
T_s[n] ≃ T_s^TF[n] + 1/9T_s^vW[n]
This provides us a few functional approximations to explore for ṽ^nadd that are exact in some particular limits.
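Both approximate bi-functionals above are simple enough to evaluate directly on a radial grid. The sketch below (atomic units, spherical symmetry, finite-difference derivatives; a grid that avoids r = 0 is assumed) is our own illustration of the TF and vW forms of ṽ^nadd, not the actual implementation used for the calculations in this work.

```python
import numpy as np

C_TF = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)   # Thomas-Fermi constant (atomic units)

def v_nadd_tf(n_atom, n_env):
    """Thomas-Fermi non-additive kinetic potential."""
    return (5.0 / 3.0) * C_TF * ((n_atom + n_env) ** (2.0 / 3.0) - n_atom ** (2.0 / 3.0))

def v_nadd_vw(r, n_atom, n_env):
    """von Weizsaecker non-additive kinetic potential on a radial grid (r > 0)."""
    def vw(n):
        dn = np.gradient(n, r)
        lap = np.gradient(r ** 2 * dn, r) / r ** 2   # (1/r^2) d/dr (r^2 dn/dr)
        return dn ** 2 / (8.0 * n ** 2) - lap / (4.0 * n)
    return vw(n_atom + n_env) - vw(n_atom)
```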
§.§ Implementation and application to hydrogen plasmas
We applied this method to hydrogen at WDM conditions for its relevance to ICF fuels <cit.> and in planetary interiors <cit.>, and because hydrogen has analytical solutions at the ground state for many of the properties that we evaluated. In general, the v^nadd contribution can be applied to an average atom model of any element with the use of pseudopotentials <cit.>. In fact, this model may prove to be more accurate for helium and beyond since the approximations used in DFT lead to some peculiarities for hydrogen that deviate from the true ground-state behavior for instance as discussed in Table 6.1 of Ref. <cit.>. Extreme pressures in the WDM regime can result in core orbital overlaps between adjacent atoms, especially in the stagnation pressures present in laboratory fusion studies <cit.>. For this reason, it can be important to consider semi-core pseudopotentials or use harder pseudopotentials with small cutoff radii <cit.>. The region where deviations are important lies between the pseudopotential cutoff r_cut and the Wigner-Seitz radius; radii smaller than R_WS do not contribute to the environment since no other atoms are allowed in this region. For radii greater than r_cut, the hydrogen pseudopotentials are explicitly constructed such that the eigenenergies and pseudo-wavefunctions agree with the all-electron calculation. We compared the kinetic potential due to the atom .δ T_s/δ n|_atom using the hydrogen pseudopotentials against an all-electron calculation using the Atomic Pseudopotential Engine <cit.> and found the kinetic potentials agree to within 10 meV; this implies that our use of pseudopotentials for the ion densities considered does not greatly impact the accuracy of our results. For larger densities, the results would benefit in accuracy from the use of smaller cutoff radius. We choose Octopus <cit.> for our calculations: as a real-space code, it allows us to study finite spherical systems which is impractical with plane-wave codes, and it allows the use of a fully user-defined piece-wise external potential, which is how the constructed AAM is passed to Octopus.
The model is introduced here by re-framing the embedding equation (Eq. <ref>) in terms of the relevant quantities for the average atom model. We introduce the electron subscript in the electron density n_e to distinguish it from the ion density n_I which will become relevant shortly. Additionally, we can separate the terms in the total potential into the contributions due to the KS potential of the central atom; the external, Hartee, and exchange-correlation terms from the averaged plasma environment; and finally the non-additive v^nadd term which accounts for the kinetic energy contribution arising from orbital overlaps between the atom and environment. It should be noted here that an unnecessary approximation has been made so far in linearizing the exchange-correlation contributions due to the embedded atom and environment, which will be corrected in further work. While this neglected non-additive XC term should be a small contribution overall, the approximation can lead to a slightly increased error in the DFT calculations of the total energy, densities, and KS states.
Our equation becomes:
[ -∇^2/2 + V_KS^atom(𝐫) +
V_ext^env(𝐫) + V_H[n_e^env](𝐫) + V_xc[n^env] + .
. v^nadd[n_e^atom, n_e^env](𝐫)
] ϕ_i(r) = ϵ_i ϕ_i(r),
V_KS^atom(𝐫) = V_ext^atom(𝐫) + V_H[n_e^atom](𝐫) + V_xc[n^atom].
How do we calculate each of these terms in practice? The KS potential V_KS^atom is that of the hydrogen atom from a Mermin finite-temperature DFT calculation with a hydrogen pseudopotential. The KS potential includes the electron-nucleus interaction, as well as Hartree and exchange-correlation terms for the atom – which in the case of hydrogen consist entirely of spurious self-interaction of the single electron. The external potential due to the environment, V_ext^env, is the result of convolving the neutral atom KS potential with the ion distribution informed by plasma conditions. This idea is inspired by the neutral pseudoatom molecular dynamics (PAMD) model proposed by Starrett, Daligault, and Saumon <cit.>.
PAMD involves treating a neutral pseudoatom within the average atom model framework and coupling it with classical MD simulations for the ionic structure. This approach allows for the evaluation of structural properties, such as the ion-ion pair correlation function g_II(r), at a fraction of the cost of DFT-MD methods. The neutral pseudoatom idea <cit.> is to solve the electron density of a full system n_e^full with a nucleus at the origin surrounded by a spherically averaged ionic configuration described by the ion-ion pair correlation function g_II(r). This same system is then calculated with the central atom removed n_e^ext; the pseudoatom density is then defined as the difference between these: n_e^PA = n_e^full - n_e^ext. This isolates the influence of one nucleus on the electron density. Our electronic model is compatible with the electronic part of the pseudoatom model sans the v^nadd term. Our interest is in investigating the electronic structure effects caused by this term due to strong atomic density overlaps in other compatible ion correlation models such as the PAMD or hypernetted chain approximations for ion closure <cit.>. The PAMD model is used to generate the g_II(r) used as an input to our average atom model as below. The ion data was provided by Dr. Charles Starrett from the original work <cit.> and additional calculations run for this work. This g_II(r) encapsulates the properties of the plasma environment such as the density and temperature. It is used along with the average ion density n_I^0 to get the distribution of ions around a central nucleus, and this is convolved with the atom KS potential as a kernel to produce the environment potential:
V_ext^env(r)=∫ n_I(r')V_KS^atom(|𝐫'-𝐫|)dr',
n_I(r)=n_I^0g_II(r)
Up to this term and the associated Hartree, exchange, and correlation terms, the AAM potential in Eq. <ref> is then fully defined and consistent with the pseudoatom model in <cit.> with the distinction that our system is built up from an external g_II(r) while their pseudoatom model includes this self-consistently. The model can be taken out to arbitrarily large radii which, for practicality, are chosen to be large enough such that the g_II(r) approaches unity. The total density is that of the isolated atom plus the environment similarly to the pseudoatom density, and the total density is a functional of the atomic density through a similar convolution with the g_II(r) as for the environment potential. n_e^atom and n_e^env are the terms required to evaluate the non-additive kinetic potential bi-functional v^nadd[n_e^atom, n_e^env](r), which is the remaining term in the AAM potential:
n_e^tot=n_e^atom+n_e^env[n_e^atom](r)
n_e^env[n_e^atom](r)=∫ n_I(r')n_e^atom(|𝐫'-𝐫|)dr'
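Since both the kernel (the neutral-atom quantity) and the weight n_I(r)=n_I^0 g_II(r) are spherically symmetric, the two convolutions above reduce to a one-dimensional radial integral with an angular quadrature. The following sketch is a straightforward, unoptimized illustration of this reduction; the grid handling and quadrature order are our own choices, not those of the production code.

```python
import numpy as np

def spherical_convolution(r, kernel, weight, n_mu=200):
    """Integral of weight(r') * kernel(|r' - r|) over d^3r' for spherically symmetric inputs.

    Used both for V_ext^env (kernel = V_KS^atom, weight = n_I)
    and for n_e^env (kernel = n_e^atom, weight = n_I).
    """
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)      # Gauss-Legendre nodes on [-1, 1]
    out = np.zeros_like(r)
    for i, ri in enumerate(r):
        s = np.sqrt(ri**2 + r[:, None]**2 - 2.0 * ri * r[:, None] * mu[None, :])
        k = np.interp(s, r, kernel, right=0.0)            # kernel evaluated at |r' - r|
        ang = (k * w_mu[None, :]).sum(axis=1)             # angular integral over mu
        out[i] = 2.0 * np.pi * np.trapz(weight * r**2 * ang, r)
    return out

# n_I = n_I0 * g_II;  V_env = spherical_convolution(r, V_KS_atom, n_I)
# n_env = spherical_convolution(r, n_e_atom, n_I)
```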
§.§ Treatment of continuum electrons
Mermin DFT <cit.> includes finite-temperature effects by occupying electronic states according to the Fermi-Dirac occupation function. Correspondingly, the higher the electronic temperature in Mermin DFT, the more states are needed to appropriately distribute the electrons according to the occupation function at that temperature. This becomes harder with increasing k_BT_e: as this goes up the system gains more energy, wavefunctions tend to gain more curvature, and to describe this well, a finer spacing is needed. Likewise, as the system ionizes and charge density moves further away from the central nucleus, a larger radius will be needed. Lastly, the number of extra electronic states needed goes up as mentioned; as a general rule of thumb we aim to fill states up to 5k_BT_e above the Fermi level and treat the continuum states explicitly:
n_e(r)=∑_i=all f_i | ϕ_i(r) |^2
where,
f(ϵ_i)=1/e^(ϵ_i-ϵ_F)/k_ B T+1.
This becomes quickly intractable and a treatment for the continuum states is needed which does not rely on solving their KS states explicitly, such as the ideal approximation used in <cit.>. We will explore this as a direction for future work; no continuum treatment is implemented for this work.
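Although no explicit continuum treatment is implemented in this work, the occupation step itself is easy to sketch: given the explicitly computed KS eigenvalues, the chemical potential is found such that the Fermi-Dirac occupations sum to the required electron number. The bisection below is a simplified stand-in (names and tolerances are ours) and does not address the continuum problem discussed above.

```python
import numpy as np

def fermi_dirac(eps, mu, kT):
    x = np.clip((eps - mu) / kT, -500.0, 500.0)   # clip to avoid overflow in exp
    return 1.0 / (np.exp(x) + 1.0)

def chemical_potential(eigvals, n_elec, kT, g_spin=2.0, n_iter=200):
    """Bisect for mu such that g_spin * sum_i f(eps_i) = n_elec."""
    lo, hi = eigvals.min() - 50.0 * kT, eigvals.max() + 50.0 * kT
    for _ in range(n_iter):
        mu = 0.5 * (lo + hi)
        if g_spin * fermi_dirac(eigvals, mu, kT).sum() < n_elec:
            lo = mu
        else:
            hi = mu
    return mu
```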
§.§ Computational parameters
We use the LDA exchange-correlation functional <cit.> with version 0.4 of pseudodojo optimized norm-conserving Vanderbilt pseudopotential <cit.>. The GGA functional <cit.> is expected to give slightly more accurate results and will be considered in future works. For the isolated atom calculations, a grid spacing (in bohrs: a_0) of 0.2 a_0 is chosen as the converged spacing, and a sphere radius of 30 a_0 is used to allow the wavefunctions to go smoothly to zero. For the full average atom calculation, a grid spacing of 0.5 a_0 and a sphere radius of 30 a_0 is chosen. The larger spacing was verified to give very close agreement with the 0.2 a_0 case in energies, potentials, and electron densities but allows for much faster convergence. The ion temperature k_B T_ion is held fixed at 5 eV and the densities are 0.008, 0.08, 0.8 g/cm^3; these properties come into the model from the provided g_II(r)'s. The electron temperature k_B T_e is varied from 0 to 5 eV to explore the variation across a span of densities and temperatures in v^nadd.
§ RESULTS
To obtain results, we follow the above prescription to construct a user-defined potential for Octopus and solve the full average atom with and without an additional v^nadd term as in Eq. <ref> – this allows us to observe its effect in resulting outputs and observables.
We compare the electron density of the full average atom model and see the effects that arise from including v^nadd in the AAM. Fig. <ref> shows the electron radial distribution functions around the central atom with varying electron temperature between 0 to 5 eV, ion density between 0.008 to 0.8 g/cm^3, and different v^nadd treatments. Generally, for almost all cases the effect of including v^nadd is to push the electron density towards the central atom – this can be interpreted as the effect of the environment pushing in on the central atom due to overlapping densities, which otherwise is not accounted for. Across the temperature and density range, the contribution due to the vW functional is slightly higher than TF. The effect is more prominent for higher densities (bottom row) and is small or negligible for the 0.008 g/cm^3 case (top row). For reference, solid density hydrogen is about 0.08 g/cm^3.
The trend in temperatures shows that, as the plasma is heated, the v^nadd effect becomes less prominent. In each plot, as k_BT_e increases, the localized density around the central atom reduces, and more density is pushed towards larger radii or unbound states. These plots show the necessity of including a continuum treatment in the future, where the free electrons can be accounted for with a much smaller sphere radius, which would accelerate the calculations. Additionally, the clear minima in each radial distribution plot indicate a useful cutoff for the start of the quadratic behavior of the continuum states, and thus suggest an approach to calculating the mean ionization state Z^* according to:
Z^*=Z-∫_0^r_ min 4 π r^2 n_elec(r) dr,
where the integral over bound states is approximated by finding the minima in the radial distributions. The approximated Z^* values for the various v^nadd treatments, along with the Coulomb coupling parameters, are given in Table <ref>. The Coulomb coupling parameter gives the ratio of Coulomb to thermal energies and is roughly of order one for warm dense matter, the regime where our model is most suitable,
Γ=(Z^*)^2/(k_B T_ion R_WS).
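A minimal sketch of the Z^* estimate above is given below: it locates the minimum of the radial distribution 4πr^2 n_elec(r), integrates the bound part, and returns Z^*. The simple global interior-minimum search is our own crude choice; in practice the relevant minimum should be checked by inspection of the radial distributions.

```python
import numpy as np

def mean_ionization(r, n_elec, Z):
    """Estimate Z* = Z - N_bound, with r_min taken at the minimum of 4*pi*r^2*n_elec(r)."""
    rdf = 4.0 * np.pi * r ** 2 * n_elec
    i_min = 1 + np.argmin(rdf[1:-1])                    # interior minimum as r_min
    n_bound = np.trapz(rdf[:i_min + 1], r[:i_min + 1])
    return Z - n_bound, r[i_min]
```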
Overall, we observe that v^nadd, though small, has a non-negligible effect on electron density. This effect becomes more prominent with increasing plasma density and less prominent with increasing electron temperature. The increase with plasma density makes sense, as v^nadd arises from strong density overlaps, and it should approach zero for non-overlapping densities. The reduction of the v^nadd effect with increasing temperature can be understood by considering the variations in electron density. v^nadd acts on these variations and has the most significant impact when there are large gradients in n_elec. In contrast, for a fully ionized plasma, where k_BT far exceeds interaction energies, the plasma density tends to be uniform and flat in all directions. When v^nadd acts on such a density, it simply produces a constant offset in the total energy, which does not affect observables. As k_BT_e increases, the electron density becomes more uniform, the plasma becomes increasingly ionized, and the effect of v^nadd decreases. We should also expect the contribution of v^nadd to be greater in systems with more electrons, where orbital overlaps can include core electrons—something that does not occur in a hydrogen atom.
While the v^nadd effects in electron density are interesting and novel to explore, these are very difficult scales to discern experimentally. Instead, we can consider some experimental observables based on the KS eigenvalues. While these eigenvalues of fictitious KS states do not have direct physical meaning, they can serve as crude (under)estimates of the true energy levels whose trends are likely to be correct. Therefore we can look at relative differences between energy levels and how these change with v^nadd. Such shifts can be resolved to sub-eV resolution with X-ray free electron lasers <cit.>. The energy levels of the first few core states are shown in Fig. <ref>; these are all offset to the 1s level to compare relative differences. The 2s/2p states show a slight splitting which is due to the breaking of radial symmetry in solving the hydrogen atom 1/r problem on a Cartesian grid in Octopus; we have not explored any fine structure effects in this study. Instead, we focus on the 1s-2p gap that widens when we include v^nadd for the higher densities. At the lower density limit, there doesn't seem to be any significant change in the gap which is indicative of very small or zero density overlaps. Again, the effect also appears to be more prominent for the von Weizsäcker functional compared to Thomas-Fermi; where Thomas-Fermi is more accurate in the limit of high density and many orbitals contributing as in a homogeneous electron gas, and von Weizsäcker for low density and only one orbital contributing.
The increase in the gap from including v^nadd can be explained by the energy levels of the simple particle-in-a-box model, E_n=n^2π^2/2L^2, which scale inversely with the square of the box size L. We might be seeing an analogous effect of the system being squeezed (L effectively being reduced), as evident in the electron density plots (Fig. <ref>) showing the density being pushed towards the central atom. The resultant energy levels are spaced out as the system is confined to a smaller effective L. This variation in 1s-2p gap energies with temperature is plotted in Fig. <ref> for the different v^nadd treatments – the gap increases for increasing k_BT_e, which indicates increasing repulsion from less localized states of the environment.
In atoms with multiple electronic states, ionization reduces the screening effect on the remaining bound electrons, which leads to an increase in binding energy as ionization increases and as k_BT_e rises. While this behavior is nonphysical for a hydrogen atom (as seen in the red curves), it is a real effect for helium and heavier elements. This highlights some of the peculiarities of using DFT for hydrogen. For instance, the LDA gap at k_BT_e=0 is around 7 eV, whereas the true gap (as known analytically) is closer to 10.2 eV.
Running independent electron calculations in Octopus, which do not include XC effects, yields total energies that agree to within 10 meV and 1s/2p gap energies that agree to sub-meV level with the analytical solutions.
There appear to be a few anomalous points in the trends which again need to be further investigated, for instance with the vW treatment at 1 eV, and the TF 0.8 g/cm^3 trend curving up then back down after 3 eV. The leftmost panel shows how the 1s-2p gap evolves for the AAM with no v^nadd effects – we can see the gap lowering as the plasma density increases from 0.008 g/cm^3 to 0.8 g/cm^3 (blue to orange to green). This could be an indicator of continuum lowering which predicts a squeezing together of bound states with increasing pressure <cit.>. When we include the TF or vW treatment of v^nadd, this trend of the gap decreasing with density no longer seems evident. This suggests that the v^nadd effect of confining the system, which pushes the energy levels apart, reduces continuum lowering which has the effect of squeezing levels together.
We also compare the variation in total energies with temperature from our model with different v^nadd
(Fig. <ref>). While the total energies are not necessarily measurable experimentally, they are closely related to the pressure, which is. We find v^nadd can make significant differences in the total energy curve. The temperature trend is consistently increasing in all cases, but as density increases the TF total energy splits off from the no-v^nadd treatment while the vW variation remains larger than both in all cases.
§ CONCLUSION
We have identified a missing electronic contribution in existing ion-correlation average atom models which is significant for overlapping electron densities, as occurs for hydrogen in the warm dense matter regime where matter is at typically solid densities and temperatures of several thousand Kelvins. This contribution comes from v^nadd which accounts for the non-additive part of the potential due to partitioning the exact kinetic energy of the fully interacting system into an embedded and environment contribution. This is applied in a finite-temperature real-space DFT-based AAM and we use approximate kinetic energy functionals to explore the effects due to v^nadd in our DFT observables. It shows a squeezing of densities towards a central atom embedded in a plasma and a spreading apart of energy levels consistent with this squeezing, and an enhanced variation of total energy vs. temperature. In the limit of small density overlaps, v^nadd goes to zero and we recover an ion-correlation model with no embedding contribution. In general, average atom models offer significant speed-ups in calculation times while still retaining fair accuracy in evaluating dense plasma properties. However, they struggle to precisely account for electronic interactions due to the averaging over complex environments, and as such there is a need in the field for improved accuracy while maintaining their efficiency. The contribution we have investigated is small but easy to implement since it can be fully defined by the electron density of the average atom. The non-additive kinetic potential from embedding theory can add a small piece of missing physics without compromising the efficiency of the models.
We acknowledge receipt of g_II(r) data and useful discussions about the PAMD model from Dr. C. E. Starrett. This work was supported with funding by the U.S. Department of Energy, National Nuclear Security Administration, Minority Serving Institution Partnership Program, under Award DE-NA0003984, and the Graduate Student Opportunity Program fellowship at the University of California, Merced. Computing resources were provided by the Pinnacles cluster at the University of California, Merced, supported by National Science Foundation Award OAC-2019144.
|
http://arxiv.org/abs/2409.02782v1 | 20240904145840 | High resolution observations of 12CO and 13CO(3--2) toward the NGC 6334 extended filament | [
"S. Neupane",
"F. Wyrowski",
"K. M. Menten",
"J. Urquhart",
"D. Colombo",
"L. -H. Lin",
"G. Garay"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
I. Emission morphology and velocity structure
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
[email protected]
Centre for Astrophysics and Planetary Science, University of Kent, Canterbury, CT2 7NH, UK
Argelander-Institut für Astronomie, Auf dem Hügel 71, 53121 Bonn
Departamento de Astronomia, Universidad de Chile, Casilla 36-D, Santiago, Chile
NGC 6334 is a giant molecular cloud (GMC) complex that exhibits elongated filamentary structure and harbours numerous OB-stars, H II regions and star forming clumps.
To study the emission morphology and velocity structure of the gas in the extended NGC 6334 region using high-resolution molecular line data, we made observations of the ^12CO and ^13CO J=3 → 2 lines with the LAsMA instrument at the APEX telescope.
Our observations revealed that gas in the extended NGC 6334 region exhibits connected velocity coherent structure over ∼80 pc parallel to the Galactic plane. The NGC 6334 complex has its main velocity component at ∼ -3.9 km s^-1 with two connected velocity structures at velocities ∼ -9.2 km s^-1 (the `bridge' features) and ∼-20 km s^-1 (the Northern Filament, NGC 6334-NF). We observed local velocity fluctuations at smaller spatial scales along the filament that are likely tracing local density enhancement and infall, while the broader V-shaped velocity fluctuations observed toward the NGC 6334 central ridge and G352.1 region located in the eastern filament EF1 indicate globally collapsing gas onto the filament. We investigated the ^13CO emission and velocity structure around 42 WISE H II regions located in the extended NGC 6334 region and found that most H II regions show signs of molecular gas dispersal from the center (36 of 42) and intensity enhancement at their outer radii (34 of 42). Furthermore, most H II regions (26 of 42) are associated with at least one ATLASGAL clump within or just outside of their radii,
the formation of which may have been triggered by H II bubble expansion. Typically toward larger size H II regions we found visually clear signatures of bubble shells emanating from the filamentary structure. Overall the NGC 6334 filamentary complex exhibits sequential star formation from west to east. Located in the west, the GM-24 region exhibits bubbles within bubbles and is at a relatively evolved stage of star formation. The NGC 6334 central ridge is undergoing global gas infall and exhibits two gas `bridge' features possibly connected to the cloud-cloud collision scenario of the NGC 6334-NF and the NGC 6334 main gas component. The relatively quiescent eastern filament (EF1 - G352.1) is a hub-filament in formation which shows the kinematic signature of global gas infall onto the filament. Our observations highlight the important role of H II regions in shaping the molecular gas emission and velocity structure as well as the overall evolution of the molecular filaments in the NGC 6334 complex.
NGC 6334: I. Gas emission morphology and velocity structure
Neupane et al.
High resolution observations of ^12CO and ^13CO J = 3 → 2 toward the NGC 6334 extended filament
S. Neupane 1
F. Wyrowski 1
K. M. Menten1
J. Urquhart2
D. Colombo1,3
L.-H. Lin1
G. Garay4
Received 2024; accepted 2024
===================================================================================================================
§ INTRODUCTION
NGC 6334 is a giant molecular cloud (GMC) complex, harbouring a central dense filament and a large number of OB-type stars, H II regions/bubbles and star forming clumps in the extended region (e.g., ). This complex is located at a distance of ∼1.7 kpc (e.g., ) and its large-scale molecular appearance is dominated by a ∼70 pc long filamentary structure (within 352.6 ≥ l ≥ 350.2, where l is the Galactic longitude). The central dense ridge of NGC 6334, extending along the filament, contains sites of high-mass star formation in a sequence of evolutionary stages (e.g., ) and has been reported to be undergoing longitudinal collapse, possibly triggered by a past high-mass star forming event (e.g., ).
<cit.> studied the extended NGC 6334 region based on their ^13CO and C^18O J = 2→1 observations obtained with the APEX telescope and highlighted cases of multiple gas compression in the region. In addition, they found a large number of velocity coherent filaments (VCFs) and interpreted their formation as resulting from large-scale compression by propagating shock fronts. Presenting a broader picture, <cit.> proposed a cloud-cloud collision scenario in which a collision is happening from lower to higher longitudes of the extended NGC 6334 region, giving rise to the evolutionary sequence of star formation. A similar evolutionary sequence from lower longitude (GUM 1-24) to higher (in the NGC 6334 extended filament) is also reported by <cit.>.
The formation of the large scale filaments and their fragmentation processes are not understood well (see the review by ). High angular resolution continuum and molecular line observations are essential to understand the small to large scale gas kinematics through which gas flows along or onto the filament. Regarding the formation of molecular clouds in the filaments, <cit.> have highlighted the important role of gas compression due to H II regions (see also ). H II regions are signposts of (high-mass) star formation within molecular clouds (e.g., ). New H II regions are formed inside proto-stellar cores forming high mass stars (OB-type) and their influence is evident on the evolution of the cloud as they emerge. At evolved stages, H II regions can shape the parental cloud material into structures with bubble- or shell-like morphology, in which triggered cases of star formation can proceed (cf. ).
The young and evolved H II regions, either found in isolation or in groups, shape the gas emission and velocity structure around them; therefore, their role in natal cloud dispersal and in creating a new generation of star formation requires a detailed observational study of the gas kinematics around them.
The extended NGC 6334 region (including GM-24) (see Figure <ref>) contains 42 infrared H II regions/bubbles reported in the catalog[WISE catalog V2.2: http://astro.phys.wvu.edu/wise/] of <cit.>, of which 14 are classified as `known' (K), 9 as `group' (G), 6 as `candidate' (C) and 13 as `radio quiet' (Q) H II regions. The identification of these regions was primarily based on the mid-infrared characteristics of the WISE all sky survey data (see for more details) and their sizes range from 0.2 pc to 12 pc (see Table <ref>).
In this work we aim to study the gas emission morphology and velocity structure of the NGC 6334 extended region using high-resolution observations of ^12CO and ^13CO J=3 → 2 molecular lines obtained with the LAsMA instrument on the APEX Telescope.
Our goal is to disentangle, with this new large-scale, sensitive and high resolution spectral line data, the different origins of the velocity structure in NGC 6334. This is done by investigating the impact of a large number of H II regions already formed in the giant molecular complex as well as the large scale inflow of gas onto and through the filaments.
The paper is organized as follows. In Section <ref> we describe the APEX LAsMA observations toward the region and provide an overview of the data reduction process and resulting sensitivities. In Section <ref> we present the results. In Section <ref> we present the velocity structure of the extended filament. In Section <ref> the gas emission properties around the H II sources are presented. Sections <ref> and <ref> include the analysis and discussion of the results. Finally, Section <ref> summarizes the highlights of this work.
§ OBSERVATIONS AND DATA REDUCTION
§.§ LAsMA observations of ^12CO and ^13CO J=3→2
We mapped the NGC 6334 star forming complex in the ^12CO J = 3 → 2 and ^13CO J = 3 → 2 lines, using the Large APEX sub-Millimetre Array (LAsMA), a 7-pixel heterodyne array receiver installed on the Atacama Pathfinder EXperiment 12 meter submillimeter telescope (APEX) located on the Llano de Chajnantor (elevation of ∼5100 m) in the Atacama desert, Chile. The LAsMA receiver operates in the 870 μm (345 GHz) atmospheric window and its 7 pixels are arranged in a hexagonal shape with one central pixel and 40” spacing between pixels, which corresponds to ∼ 2 FWHM beam widths. The map was centered at Galactic coordinates l = 351.415^∘ and b = +0.66^∘. The area of the extended NGC 6334 region mapped in this project covers ∼2.5^∘ × 1.2^∘.
The observations were made during June to September 2021 under good atmospheric conditions of ≤1.5 mm precipitable water vapor (PWV) content. The mapped region was divided into 10^'× 10^'-sized sub-maps (`tiles') that were observed in on-the-fly (OTF) mode in both l and b directions. The ^12CO J = 3 → 2 and ^13CO J = 3 → 2 lines were observed simultaneously using a local oscillator frequency of 338.3 GHz. The ^13CO (ν_rest = 330.587 GHz ) line is observed in the lower side band and the ^12CO (ν_rest = 345.796 GHz ) line in the upper side band. Advanced versions of the APEX Fast Fourier Transform Spectrometer (FFTS, ) were used as backends, resulting in a spectral resolution of 0.1 km s^-1. At this observing frequency, the full width at half maximum (FWHM) beam width of the telescope is ∼19”. The OTF observing time for each tile coverage was ∼35 minutes and the total time spent to complete this project, including overheads, was ∼62 hours.
§.§ Data reduction process
Data reduction was performed using the GILDAS[https://www.iram.fr/IRAMFR/GILDAS/] software package. The following steps were taken to obtain the final spectral cubes of the full mapped region: first, we extracted the spectra in the -100 km s^-1 to +100 km s^-1 LSR velocity range and re-sampled them to a common velocity resolution of 0.25 km s^-1. The baseline subtracted data sets from different days and sub regions were then combined to produce spectral cubes for the full mapped region. We use the table and xy_map packages in CLASS-GILDAS to regrid and smooth the data to the desired pixel size and resolution. The pixel size of the final cubes is set to 6” × 6”, chosen to afford better than Nyquist sampling as well as to match that of other complementary data sets. In this data reduction procedure, we carefully flagged and removed spectra with bad baselines and those that contain ripples and artifacts. The final spatial resolution of the cubes is 20”, corresponding to 0.16 pc at the distance of 1.7 kpc.
§.§ rms sensitivity
Figure <ref> presents the histogram of the root mean square noise (rms in corrected antenna temperature T^*_A, in K)[We used antenna temperature units T^*_A throughout this work unless otherwise stated in the text.] for both the ^13CO and ^12CO spectra estimated from the first order baseline fit to the spectra. The average rms in T^*_A (computed per channel per pixel at the velocity resolution of 0.25 km s^-1) for ^12CO and ^13CO is 0.39 K and 0.46 K, respectively. The rms distribution of the ^13CO emission is presented in Figure <ref>. The median rms in T^*_A for ^12CO and ^13CO is 0.34 K and 0.40 K, respectively. A 3σ detection in ^13CO roughly corresponds to a column density of N_^13CO∼3 × 10^14 cm^-2 (N_H_2∼ 2 × 10^20 cm^-2), estimated with RADEX[RADEX: Non-LTE molecular radiative transfer in an isothermal homogeneous medium by <cit.>, also available online at https://personal.sron.nl/∼vdtak/radex/index.shtml] assuming a kinetic temperature of 20 K, density (n_H_2) of 10^4 cm^-3 and abundance ratios of ^12CO/^13CO=77 (), ^12CO/H_2∼8.5× 10^-5 ().
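The quoted H_2 column density follows from the ^13CO sensitivity and the adopted abundance ratios by simple scaling, and the physical beam size follows from the adopted distance; the short sketch below is a back-of-the-envelope check of these numbers, not a substitute for the RADEX calculation.

```python
import numpy as np

DIST_PC = 1.7e3                                   # adopted distance in pc
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

beam_pc = 20.0 * ARCSEC_TO_RAD * DIST_PC          # 20" beam -> ~0.16 pc

N_13CO = 3.0e14                                   # 3-sigma 13CO column density (cm^-2, from RADEX)
RATIO_12_13 = 77.0                                # 12CO/13CO abundance ratio
X_12CO = 8.5e-5                                   # 12CO/H2 abundance
N_H2 = N_13CO * RATIO_12_13 / X_12CO              # ~2e20 cm^-2
```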
Previous studies of NGC 6334 in the ^12CO J = 2→1 line using the NANTEN2 Telescope had a spectral resolution of ∼0.1 km s^-1 and rms noise level of ∼1.1 K per channel at an angular resolution of ∼90” (). Furthermore, previous ^13CO and C^18O J = 2→1 observations with APEX had a spectral resolution of 0.3 km s^-1 and sensitivity of about ∼0.5 K at an angular resolution of ∼30”, covering a 2.2^∘×0.7^∘ region (see, ). In comparison to both of these previous studies, we have at least 1.5 times higher spatial resolution and better sensitivity, and we map twice as large an area to cover the extended emission region. In addition, since we observe the higher excitation ^13CO J = 3→2 lines,
we also probe higher density gas (e.g., n_crit∼ 10^4 cm^-3 at 10 K) that directly participates in the star forming activity.
§ RESULTS
§.§ CO emission morphology
Figure <ref> presents the ^13CO and ^12CO J = 3→2 spectra in red and black, respectively, averaged over the extended NGC 6334 filament.
Most of the ^13CO emission is confined within the velocity range of -15 to +5 km s^-1, while ^12CO emission is observed also at significantly blue-shifted velocities. In both lines, the emission peaks around -4 km s^-1, the systemic velocity of the gas in the main filament. In ^13CO, a less prominent peak is seen at around -9 km s^-1 blended with the main component. Another emission peak is seen at a redder velocity of +7 km s^-1 in both lines, which is associated with local molecular clouds (e.g., ). Blue-shifted ^12CO emission is found down to -60 km s^-1 and has two peaks around -20 km s^-1 and -40 km s^-1 that do not have clear ^13CO counterparts. The -15 to +5 km s^-1 range, corresponding to the bulk of the emission in the ^13CO line, is indicated by the dotted red lines and -30 to +20 km s^-1, corresponding to the bulk of the^12CO emission, is indicated by the dotted black lines in the Figure <ref>. We use these velocity ranges to compute the moment 0 maps.
Figure <ref> presents the ^13CO (top panel) and ^12CO (bottom panel) velocity integrated moment 0 maps of the extended NGC 6334 filament. The prominent features discussed in the text are also labelled in the top panel image. Also indicated in the figure are the OB stars from <cit.> (red circles) and ATLASGAL clumps from <cit.> (green plus markers).
The morphology of the CO line emission shows that the NGC 6334 central filament has a spatially concentrated dense gas reservoir. In both CO lines, the central filament is generally very bright and exhibits multiple bright emission spots. In areas that connect the central filament to the GM-24 region, ^13CO emission shows elongated and finger-like filamentary structures originating from the filament WF and spreading towards the south-west and north-west[This study employs the Galactic coordinate system. We use `east' (`west') to mean directions of higher (lower) Galactic longitude, while `north' (`south') mean higher (lower) latitude.]. These features merge and reveal wide-spread emission in ^12CO (see bottom panel). Some elongated finger-like (pillar-like) emission structures going north-ward and south-ward from the central filament are prominently seen in both lines. In the eastern region, however, ^13CO emission is tracing only the trunk of the extended filament while ^12CO exhibits a more extended emission morphology that connects NGC 6334 with the star forming region NGC 6357 (l=353.166, b=0.89, see Fig. 1 in ).
Most of the bright emission spots in the central filament correspond to the far-infrared (FIR) sources shown by the yellow markers in Figure <ref> (top panel). Numerous ATLASGAL 870 μm dust emission clumps are located in this region. One interesting feature of the emission north of the central filament is that the gas emission seems to exhibit a pinched morphology. We over-plotted the positions of the OB type stars from <cit.> on the moment 0 map of ^12CO. A circle of radius 8 pc (0.25 deg at 1.7 kpc) centered on the dominant O6.5 star is drawn in the map to match the arc-like emission morphology observed toward the NGC 6334 central filament, and it matches the morphological shape of the emission toward the north well, indicating that the high-mass star is likely interacting with the gas in the central ridge. However, we also note that there are other OB stars, radio sources and H II bubbles located to the north of the ridge (see Figure <ref>) that could have contributed to shaping the emission morphology of the central ridge.
§.§ Gas velocity structure
§.§.§ Position-Velocity map
The middle and lower panels of Figure <ref> present the Position-Velocity (PV) maps constructed for the emission within the stripe defined by the three red lines in the intensity map (top panel).
The longitudinal extent of the PV maps is ∼2.32 degrees. We find a ∼80 pc long filament with a well connected and coherent velocity structure that has an average velocity of ∼-4.0 km s^-1.
Since the ^13CO line is mostly optically thin, it is tracing the higher column density gas regions along the NGC 6334 extended filament. High-velocity emission is not observed in ^13CO. In the ^12CO line, since it is optically thicker, more diffuse extended emission features at intermediate to high velocities are visible that are not detected in ^13CO.
Gas emission at velocities of ∼ [-25, -15] km s^-1 is observed at offsets from 0.3 to 1.3 deg. This emission is diffuse and exhibits no clear peaks in the PV map. The origin of this component is at present unknown. In lower excitation lines of CO, this velocity component is seen connected with the main filament (CO J = 2→1, 1→0 in , and J = 2→1 in ).
Figure <ref> presents PV maps toward two selected slices averaged along the direction perpendicular to the central filament. The slices L1 and L2 indicate the same regions as MFS-cold and MFS-warm presented in (see Fig. 12 in the paper). We observe V-shaped (or inverted V-shaped) velocity structures along the L1 and L2 slices in the higher J transitions of ^13CO. In addition, we also present PV maps (Fig. <ref>) toward six FIR sources in the central NGC 6334 ridge. Toward all the sources we observe similar V-shapes in the PV maps along the y-axis. While FIR sources I and II show V-shapes in both velocity directions, FIR sources III and IV exhibit inverted V-shapes. Such V-shaped features in PV maps have been interpreted as matter flowing within a sheet-like structure compressed by a propagating shock front. In Sec. <ref> we further investigate the velocity structure around H II regions, to test whether shock compression by H II regions produces similar velocity features in the molecular shells/rings around them.
§.§.§ Intensity weighted velocity map
Figure <ref> presents the moment 1 map of the ^13CO emission. We used a 7σ cutoff in integrated intensity to make the map. The gas in the NGC 6334 extended filament shows primarily two velocity components, one at -3.9 km s^-1 and another at -9.2 km s^-1. Figure <ref> clearly illustrates a double-peaked distribution at these velocities. The -3.9 km s^-1 component reveals the coherent velocity structure of the extended filament, while the -9.2 km s^-1 component mainly traces gas around the GM-24 region. However, it also appears to trace the `bridge' structures. We see these features extending south from the FIR sources I[N] and I and north-south from the central filament (from FIR sources IV and V, see Fig. <ref>). The widespread morphological presence of these two velocity components indicates that the molecular gas in NGC 6334 has at least two origins.
We also observe signs of velocity gradients in different regions. In the central filament, the two `bridge' features are at bluer velocities than the trunk of the central ridge. In the central ridge itself, we observe a west-to-east velocity gradient, with velocities becoming redder toward the east. Regions EF1 and EF2 as well as the SWF filament also show signs of velocity gradients along the filament. A velocity gradient toward the central FIR source III from both east and west of the central filamentary ridge is seen, which has also been reported from HCO^+ J = 3→2 line data by <cit.> and interpreted as a sign of global collapse.
§.§.§ Multi Gaussian fitting to the ^13CO line cube
Given the complex velocity structure of the region, we also fit the ^13CO line cube using the package (). This automated fitting routine was developed and tested to fit complex spectral profiles with a high degree of accuracy and was used in the analysis of the Galactic Ring Survey (GRS) () (see for more detail). The routine takes spatial coherence into account while fitting multiple Gaussian components. We applied a signal-to-noise ratio (S/N) threshold of 3.0 to constrain the minimum value of the ^13CO peak intensity. Figure <ref> presents the number of Gaussian components fitted to the ^13CO spectra within the NGC 6334 extended region. We find that the brighter CO emission lines arising from the denser regions along the filaments require multiple Gaussian components to fit their profiles.
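To illustrate the kind of decomposition involved, the snippet below sketches a minimal multi-component Gaussian fit to a single spectrum with scipy. It is only a simplified stand-in for the automated routine used in this work (which additionally enforces spatial coherence between neighbouring spectra); the function names, the peak-based initial guesses and the acceptance criteria are illustrative assumptions.

# Minimal sketch of a multi-component Gaussian decomposition of one spectrum.
# The actual analysis uses an automated routine with spatial-coherence checks;
# the initial guesses and acceptance criteria below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p = (amp1, cen1, sig1, amp2, cen2, sig2, ...)."""
    model = np.zeros_like(v, dtype=float)
    for amp, cen, sig in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)
    return model

def decompose(velocity, spectrum, rms, snr=3.0, max_comp=4):
    """Fit 1..max_comp components; keep the simplest fit whose peaks exceed
    snr * rms and whose residual scatter is consistent with the noise."""
    for n_comp in range(1, max_comp + 1):
        guess, resid = [], spectrum.copy()
        for _ in range(n_comp):           # seed each component at the
            i = int(np.argmax(resid))     # strongest remaining peak
            guess += [resid[i], velocity[i], 1.0]
            resid = resid - multi_gauss(velocity, *guess[-3:])
        try:
            popt, _ = curve_fit(multi_gauss, velocity, spectrum, p0=guess)
        except RuntimeError:
            continue
        peaks_ok = np.all(popt[0::3] > snr * rms)
        resid_ok = np.std(spectrum - multi_gauss(velocity, *popt)) < 1.5 * rms
        if peaks_ok and resid_ok:
            return popt                   # (amp, centroid, sigma) per component
    return None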
Figure <ref> (top panel) presents the resulting projected line-of-sight velocity components along the filament going from east to west in the longitudinal direction (hereafter called lv-plots). The velocities within the PV-sliced region shown in Figure <ref> are presented in the bottom panel. The median velocities for each longitude bin are also shown as dashed white contours. In general, the projected velocities decrease from the east of the mapped region toward the west. Gas velocities in the east of the extended NGC 6334 region are redder (0 to 4 km s^-1), while in the western region (near GM24) gas at bluer velocities (-3 to -12 km s^-1) is found. The velocity gradient over the full filament length (∼80 pc) is much smaller than 1 km s^-1 pc^-1, which also illustrates the velocity coherence of the filament.
Various features can be identified in the lv-plots, for example, smaller V-shapes, arch-like, semi-arch-like and semi-circular shapes, highlighting the complex gas velocity structure in the region. Many of these features are possibly related to the feedback from the large number of H II regions located in the region. Indeed, complex velocity features around the H II regions are seen in the lv and bv plots, which will be discussed further in Section <ref>. We emphasize here that the amount of velocity detail one can see in the lv-plot, in which velocities are obtained from the multi-Gaussian fit to the spectra, is superior to that in the commonly used pv-diagram (see Figure <ref>).
The intensity and velocity along the dense ridge of the NGC 6334 extended filament show fluctuations, or so-called `wiggles', at smaller spatial scales, as well as some broader V-shaped features (see Fig. <ref>, bottom panel). Similar oscillatory velocity fluctuations are also observed toward other Galactic filaments, for example, in ^13CO(1→0) observations toward the California nebula molecular filament ().
Three broad V-shapes (or inverted V-shapes) in the lv-plot are indicated by the vertical dotted green lines in the figure at longitudes 352.1, 351.3 and 350.5 degrees. The base lengths of these V-shapes are larger than 3 pc. The V-shape around l=352.1^∘ is located in the eastern filament (EF1). The other two correspond to the central filament (toward FIR source III) and the GM-24 region, respectively. In the GM-24 region, a closer look at the map shows that the velocity structure is rather complex and the V-shape only appears in the median velocity contour. Toward the eastern filament (EF1) and the central filament, we also observe similar V-shapes in the intensity variation, phase shifted with respect to the velocity structure. Typically, V-shapes in position-velocity diagrams indicate gas compression (due to collision or due to H I/H II bubbles) (e.g., ) or a global infall/collapse if observed with phase-shifted intensity and velocity gradients (e.g., ).
We will discuss these features in Section <ref> in conjunction with the observed CO emission morphologies in the channel maps.
§.§ Channel maps
Figures <ref> and <ref> present channel maps of ^13CO and ^12CO in the velocity range -15 to +5 km s^-1, respectively, with steps of 2 km s^-1. The dotted circles in magenta
indicate the H II regions and corresponding bubbles from <cit.>. In addition, we have drawn three vertical dotted lines in black at longitudes 352.1^∘, 351.3^∘ and 350.5^∘ in different sub-plots. These are the positions of the V-shaped features observed in the longitude-velocity plot of Figure <ref>. Text labels for various emission features (same as in Fig. <ref>) are also indicated in the channel maps. Figures <ref> to <ref> additionally present zoomed velocity channel maps toward the central filament, the GM-24 region and the G352 region.
The channel maps allow a more detailed, velocity-resolved view of the gas emission morphology. For example, at velocities -10 to -5 km s^-1 in both CO lines, the eastern (near I[N] and I) and western (near IV and V) parts of the NGC 6334 central filament show bright emission with no connecting ridge structure, indicating that from -10 to -5 km s^-1 the two sides are disconnected. Note that these velocities are blue-shifted with respect to the average filament velocity of -3.9 km s^-1. From -5 to +3 km s^-1, we start seeing emission features connecting the two sides of the central filament. Only at velocities between -3 and +1 km s^-1 is the connecting ridge bright in both CO lines. One notable feature is that at these velocities and beyond on the redder side, the bright emission from FIR source V, located in the western part of the central filament, is no longer visible.
Two parallel emission features, that we named the `bridges' in the moment0 map in Figure <ref>, extend south from the central ridge at velocities -15 to -3 km s^-1 in ^13CO. Both features are clearly visible and appear more extended in ^12CO.
Toward these bridge features, we observe CO emission in both the -3.9 and -9.2 km s^-1 velocity components (see Figure <ref>). The spatial coexistence of the two gas components hints at a possible collision or a merger of the clouds in the central filament. The Eastern bridge extends south from FIR-sources I[N] and I, and is more apparent in velocities from -9 to -3 km s^-1. The western bridge extends south from FIR sources IV and V and is also visible at redder velocities up to -1 km s^-1 in ^12CO. The latter emission feature also extends northward from source V up to latitude of ∼1 deg toward the North-West.
The GM24 region presents a clear example of bubbles within bubbles (see Fig. <ref>). In the channel maps this region is mostly bright in the -13 to -7 km s^-1 velocity range. The connecting filaments (WF, SWF, SF1, SF2) located to the west of the central filament, however, are visible at velocities -7 to -1 km s^-1.
Regions located east of the central filament (EF1, EF2, EF3 and G352.5) are seen at velocities larger than -5 km s^-1. The ^13CO emission on this side of the filament is noticeably weaker. A bright elongated filamentary feature at around l = 352.0 deg, b = 0.7 deg is observed both in ^13CO and ^12CO at velocities from -1 to +5 km s^-1. Emission at these velocities is also bright further east, close to the NGC 6357 region at l = 352.5 deg and b = 0.8 deg. Finally, to the south of the central filament (b < 0.25 deg), there is little or no emission at velocities between -15 and +5 km s^-1.
§ ANALYSIS
§.§ Impact of H II regions in shaping the molecular gas structure
To study the impact of H II regions on the surrounding molecular gas and its velocity structure, we first visually inspected the gas emission morphology toward the H II regions located in the extended NGC 6334 region (see Fig. <ref>). For most of the H II regions a molecular emission counterpart is detected. Average ^13CO velocities of the molecular gas toward the H II regions are presented in Table <ref>. The velocities are obtained by aperture extraction from the Gaussian fit velocities of ^13CO, using the radii of the H II regions. We observe a variety of emission morphologies around the H II regions, such as bubbles exhibiting ring-like or arc-like shapes and/or central holes in the channel maps. Figures <ref> and <ref> show maps of the ^13CO emission towards the H II regions.
In the NGC 6334 central filament, eight H II regions are located along the ridge, most of which are associated with FIR sources (see Fig. <ref> and <ref>). Channel maps toward the central region only are also presented in Figure <ref>. The H II region G351.348+0.593 is found south of the main ridge and G351.383+0.737 (GUM63/64) north of the ridge, above sources I[N] and I. The central part of the ridge appears disconnected at velocities bluer than -5 km s^-1 and exhibits a connected filament only at velocities redder than -5 km s^-1 (see Figure <ref>). Note that the broad V-shape we identified in the lv-plot is associated with this region (see Figure <ref>). We also find pillar-like structures above FIR source II (associated with GUM63/64C) and toward IV, observable both in ^13CO and ^12CO (see Figure <ref>). Such pillars are identified in many PDR and H II regions and are thought to originate from the expansion of H II regions into a turbulent, non-homogeneous medium (e.g., ).
To the west of the central filament, two optical H II regions, G350.995+0.654 (GUM61) and G351.130+0.449 (GUM62), are located south of the FIR sources IV and V. These regions are indicated and labelled in some panels in Fig. <ref>. In both CO channel maps in Fig. <ref> and <ref>, we observe gas emission mostly at their edges, while no emission is seen in their centers, indicating that these H II bubbles have already cleared out the gas around them and are at evolved stages. At velocities -1 to +3 km s^-1, the emission at the southern edge of the H II region G350.710+00.641 is bright in both ^13CO and ^12CO. In the velocity integrated intensity (mom0) maps (Fig. <ref>), this emission clearly appears filamentary, extending from source V to the south-west as the structure we named the South-West Filament (SWF). These observations illustrate that H II shells could indeed play an important role in forming and shaping the morphology of filamentary cloud structures.
At the intersection of the H II regions G350.710+0.641 and G350.995+0.654 (GUM61), two smaller H II regions (G350.871+0.763 and G350.889+0.728) are seen. This is the region in which the GM-24 southern filament 1 (GM24 SF1) departs westward from the West filament (WF). The filamentary emission is clearly seen at velocities -5 to -1 km s^-1 in the channel maps (Fig. <ref> and <ref>).
The GM24 region is a clear case of bubbles within bubbles, toward which we observe evidence of bubbles/shells interacting with the surrounding gas and shaping the emission structure in the region. One example of such an interaction is seen where two arc-like gas layers, one facing south and the other north, surround the central bright emission spot in GUM1-24 at velocities -13 to -11 km s^-1 (Figure <ref> and <ref>, see zoomed maps of this region in Fig. <ref>). These arc-like features correspond well with the shells of the H II regions G350.710+00.641 and G350.675+00.832 in the south and G350.401+01.037 in the north. At velocities between -9 and -7 km s^-1, we observe gas emission in a shell structure centered slightly east of GUM1-24. This shell structure corresponds to the H II region G351.130+0.449, located east of the compact radio source G350.50+0.95 (). In addition, we also find this shell-like gas emission structure confined within the radius drawn for the H II bubble G350.594+1.149. Two H II regions, G350.710+0.641 and G350.240+0.654, also appear to act on the gas from the south in the GM-24 region.
We see spiral/arch-like filamentary gas emission at velocities -7 to -3 km s^-1 that is associated with the GM24-SF1 and GM24-SF2 filaments (see also zoomed maps in Fig. <ref>), connecting to the Western Filament (WF) and extending from the south of GM-24. At velocities -5 to -3 km s^-1, these filaments are more clearly visible in the channel maps (Figure <ref> and <ref>), more so in ^12CO than in ^13CO emission. In fact, these emission features seem to intersect in the far west at (l, b) = (350.4 deg, 0.8 deg). We note here that these velocity ranges are slightly bluer than, but similar to, the velocity of the main filamentary structure in the NGC 6334 region. Based on the observed gas distribution of these filaments in the channel maps, we suggest that the gas structure is shaped by the group of H II regions (G350.594+01.149, G350.617+00.984, G350.710+00.641, G350.995+00.654) (see Fig. <ref>, <ref> and zoomed maps in Fig. <ref>).
East of the central filament, at around l = 351.5 deg and at a similar latitude to the central filament, we find multiple H II regions. In particular, four H II regions (G351.651+0.510, G351.676+0.610, G351.693+0.671, G351.766+0.492) appear to be connected to the gas at velocities -9 to +1 km s^-1 (Figure <ref> and <ref>). The ^13CO emission on this side of the filament is noticeably weaker. A bright elongated filamentary feature at around l = 352.0 deg, b = 0.7 deg is observed both in ^13CO and ^12CO at velocities of -1 to +5 km s^-1. This is the same region (EF1) toward which we also observe the inverted V-shape in the lv-plot (see Figure <ref>), the peak of which is at a longitude of ∼352.1 deg. Zoomed channel maps toward this region are presented in Figure <ref>.
§.§.§ Radial profile and contrast parameter
To perform an unbiased search for shell/ring-like molecular structures around the H II regions, we plotted the azimuthally averaged radial ^13CO intensity profile of each H II region (using a velocity range of -15 to +5 km s^-1). For this we adopted the positions and radii of the H II regions from <cit.> (cols. 3, 4 and 5 in Table <ref>) and plotted the ^13CO intensity profile out to twice these radii. The positions and radii were obtained by
encircling the WISE mid-infrared emission of the H II regions (see for more details). In Figure <ref>, we present, as an example, the radial intensity profile toward the H II region G350.482+0.951 associated with GUM1-24. This region exhibits a clear signature of a shell/ring-like structure. In Figures <ref> and <ref>, we present profiles of the ^13CO emission toward all the H II regions located in the NGC 6334 extended region. We inspected the emission morphology toward each region individually (Fig. <ref> and <ref>) to interpret the radial profiles.
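The radial profiles themselves amount to azimuthal averages of the moment-0 map in annular bins out to twice the adopted H II radius. A minimal sketch of this step is given below; the array names, pixel coordinates and bin number are placeholders rather than the actual implementation.

# Sketch of an azimuthally averaged radial intensity profile around an
# H II region, out to 2 x R_HII. Inputs are placeholders: `mom0` is the
# 13CO moment-0 map, and positions/radii are in pixel units.
import numpy as np

def radial_profile(mom0, center_pix, radius_pix, nbins=20):
    ny, nx = mom0.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center_pix[0], y - center_pix[1])
    edges = np.linspace(0.0, 2.0 * radius_pix, nbins + 1)
    profile = np.full(nbins, np.nan)
    for i in range(nbins):
        ring = (r >= edges[i]) & (r < edges[i + 1]) & np.isfinite(mom0)
        if ring.any():
            profile[i] = np.nanmean(mom0[ring])   # azimuthal average per bin
    centers = 0.5 * (edges[:-1] + edges[1:]) / radius_pix
    return centers, profile   # radius in units of the H II radius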
For the full velocity range of -15 to +5 km s^-1, we find two general categories of intensity profiles. The first category includes H II regions (22 of 42) that exhibit little emission or a flat emission profile toward their central regions and an intensity profile that increases outwards with respect to the H II radii. Some of these sources clearly exhibit a bump in the emission at the corresponding H II radii (e.g., G350.482+0.951). Radial profiles of these H II regions are presented in Figures <ref> and <ref>. The second category of H II regions (15 of 42) exhibits centrally peaked emission within the adopted H II radii and an intensity profile that decreases outwards. A few H II regions (5 of 42) do not fall into either of these categories: they exhibit flat profiles up to twice the adopted radii and only diffuse ^13CO emission. The intensity profiles of these H II regions are presented in Fig. <ref>.
H II regions with little emission or a flat profile toward the center and an intensity profile that increases outwards are likely at later evolutionary stages, since they appear to have cleared out the molecular gas from the center. More than half of the H II regions (14/22) with an increasing intensity profile exhibit a bump or an intensity maximum at/near their radii (Fig. <ref> and <ref>), indicating a molecular shell/ring structure.
On the other hand, centrally peaked ^13CO emission toward H II regions likely reflects an early evolutionary stage. However, it is also possible that these sources suffer from line-of-sight contamination that causes their profiles to appear centrally peaked. To investigate this, we explored the ^13CO profiles for emission averaged over different velocity ranges between -15 and +5 km s^-1 with a step of 5 km s^-1. Among the 20 H II regions that show either centrally peaked and decreasing or flat profiles for emission averaged over the full -15 to +5 km s^-1 range, we find that the profiles of 14 either increase outward or show a bump when plotted for the narrower velocity ranges. These profiles are also shown in Figure <ref>. In total, we find that 36 of 42 H II regions (86%) show the signature of molecular gas having been cleared from their center, exhibiting an intensity profile that increases outward or a shell/ring-like molecular structure around them.
These characteristics of molecular line emission features are noted in col. 8 of Table <ref>. Centrally peaked and decreasing intensity profiles are annotated by `CP', increasing profiles are annotated by the letter `I', shell/ring-like structure are annotated by the letter `S' and sources with flat profiles are annotated by an `F'. The four 5 km s^-1 wide velocity ranges within -15 to +5 km s^-1 for which these structures are found are also noted, with V1 to V4.
We also estimate the sizes of the ^13CO emission associated with the H II regions from the ^13CO line profiles. For H II regions with clearly decreasing or increasing radial profiles, we measure the sizes at which the intensity is 50% of the peak emission (indicated by the vertical green lines in Figures <ref> to <ref>). For H II regions exhibiting a shell/ring-like morphology, we adopt the radii at which the bump or shell-like feature is observed. In column 6 of Table <ref>, the sizes estimated from the molecular line emission associated with the H II regions are presented. In general, there is tentative agreement between the sizes estimated from the ^13CO emission profiles and the sizes reported by <cit.>. The mean and standard deviation of the difference in radius are ∼15% and ∼40%, respectively, with respect to the radius from <cit.>. The intensity profiles of H II regions with a shell/ring-like structure begin to increase at a certain inner radius, which varies from source to source (see Figure <ref>). We find that the inner radii can have values as small as ∼40% of the shell radii.
We employ a contrast measurement method that quantifies the enhancement of the molecular line intensities at the H II region/bubble radii. To do so, we define a contrast parameter as
C = (W_R_2 - W_R_1) / W_R_2,
where W_R_1 and W_R_2 give the sums of the integrated molecular line intensities within the radii R_1 and R_2, respectively; R_1 and R_2 are the inner and outer radii of the ring. For a homogeneous medium, the expected value of the parameter is straightforward: it equals the ratio of the area of the ring to the area within the outer radius considered.
Column 9 of Table <ref> presents the ^13CO contrast parameter values for the H II regions, computed for the velocity range -15 to +5 km s^-1 and for radii of 0.6 to 1.2 times the H II radius. For H II regions with shell-like intensity profiles, we found that the inner radii at which the intensity starts increasing can be as small as ∼40% of the shell radii. In addition, since H II regions/bubbles are known to be eccentric (), this choice of radii between 0.6 R and 1.2 R for measuring the contrast parameter accommodates the eccentric nature of the bubbles, with eccentricity values of 0.86 to 1. The expected contrast parameter value for a homogeneous medium and the radii considered here is 0.75 (see col. 9 of Table <ref>). For this velocity range and these radii, 21 H II regions show a contrast parameter higher than 0.75.
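As a worked illustration, the snippet below evaluates the contrast parameter from a moment-0 map for the adopted apertures and confirms the homogeneous-medium expectation of 0.75 for R_1 = 0.6 R and R_2 = 1.2 R; the array and coordinate conventions are assumptions for the sketch.

# Sketch of the contrast parameter C = (W_R2 - W_R1) / W_R2, with the sums of
# integrated intensity taken within R1 = 0.6 R and R2 = 1.2 R (pixel units).
import numpy as np

def contrast(mom0, center_pix, radius_pix, f_in=0.6, f_out=1.2):
    ny, nx = mom0.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center_pix[0], y - center_pix[1])
    w_r1 = np.nansum(mom0[r < f_in * radius_pix])
    w_r2 = np.nansum(mom0[r < f_out * radius_pix])
    return (w_r2 - w_r1) / w_r2

# For a homogeneous medium C reduces to the ring-to-disk area ratio:
print(1.0 - (0.6 / 1.2) ** 2)   # = 0.75, the expected value quoted above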
However, for the segmented velocity ranges from -15 to +5 km s^-1 with a step of 5 km s^-1, we already found that the majority of the H II regions
show centrally cleared or flat profiles that either increase outward or show a bump at or near the corresponding radii. Therefore, we also measured the contrast parameter for velocity ranges from -15 to +5 km s^-1 with a step of 5 km s^-1. The maximum value of the contrast parameter (C_max) over these velocity ranges is given in col. 10 of Table <ref>. In total, using the contrast measurement method we identify 34 of 42 H II regions (81%) that exhibit an intensity enhancement of the molecular emission at the H II radii.
We performed a two-sample Kolmogorov-Smirnov (KS) test between the contrast parameters (C_max) measured for the H II regions and those measured toward 1000 randomly created shells/rings in the mapped region. The sizes of these randomly created rings range from 20" to 700", similar to the sizes of the H II regions. The test statistic (0.25) and p-value (0.015) suggest that the sample of H II regions is indeed distinct from that of the randomly created ring structures. Figure <ref> presents the empirical cumulative distribution function (eCDF), which clearly shows that the H II regions have a higher probability of reaching higher contrast values. Even though most H II regions in our sample show a contrast parameter above the expected value, it is wise to explore the significance of that parameter. To do so, we used the contrast values derived for the randomly sampled shells/rings and fitted a Gaussian distribution, obtaining a mean and standard deviation of 0.78 and 0.09, respectively (Fig. <ref>). We then define a contrast threshold of 1σ above the mean, which is 0.87.
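The significance assessment described above can be reproduced schematically as follows; the input file names are hypothetical, and the two-sample test and Gaussian fit are shown only to make the procedure explicit.

# Sketch of the significance test: compare C_max of the H II regions with
# C_max of randomly placed shells/rings; the file names are hypothetical.
import numpy as np
from scipy import stats

c_hii = np.loadtxt("cmax_hii_regions.txt")
c_random = np.loadtxt("cmax_random_shells.txt")

stat, pvalue = stats.ks_2samp(c_hii, c_random)   # two-sample KS test
mu, sigma = stats.norm.fit(c_random)             # ~0.78 and ~0.09 in the text
threshold = mu + sigma                           # 1-sigma threshold, ~0.87
print(stat, pvalue, threshold, np.sum(c_hii > threshold))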
14 H II regions show contrast values above the 1σ threshold in at least one velocity range (-15 to +5 km s^-1 with a step of 5 km s^-1) for shell/ring radii of 0.6 to 1.2 times the H II radius. We repeated the analysis considering smaller shell/ring sizes (0.6R-1.0R and 0.8R-1.2R). In total, we find that 22 H II regions (52%) show a contrast parameter above the 1σ threshold for at least one combination of radii and velocity range.
A caveat of the contrast method is that for H II regions that are too young to have created a shell/ring-like structure and are still embedded in their dense natal gas, or that exhibit a complex morphology, e.g., shaped by a champagne flow, it may not provide meaningful results. In addition, neither the radial profile nor the contrast method applied in this study can distinguish whether a shell is a full ring, a half ring, or has a clumpy edge. To study the morphology in detail, 2D or 3D emission maps (channel maps) have to be examined. For larger shell/ring sizes, the radii at which the contrast is measured should be carefully selected. Despite these caveats, the contrast method, in conjunction with radial profiles, can be a useful tool to study H II bubbles/regions, in particular to search for shell-like structures, using large-scale Galactic molecular line and continuum surveys.
§.§.§ Longitude-velocity (lv) and latitude-velocity (bv) plots toward H II regions
In Figures <ref> to <ref>, we present longitude-velocity (lv) and latitude-velocity (bv) plots for the H II sources. For these plots, we used the Gauss-fitted ^13CO velocities instead of the commonly used intensity-averaged pv-plots. We again used a radius range of ±1.2R in both the longitudinal and latitudinal directions, where R is the H II region radius. To investigate whether we observe higher intensities around the H II regions, we made the plots with intensity color wedges. Toward some H II regions (12/42) we find the velocity structure to follow a (partial) elliptical shape in either the lv or bv plots. For those, we also present approximate expansion velocities inferred from visually fitted ellipses in Table <ref>. We also observe broader V-shaped velocity features, with open cavities or with filled emission. An example of the lv plot toward the H II region G350.594+1.149 is shown in Figure <ref>, in which the observed V-shaped emission structure is indicated by the red lines. Toward 15 H II regions (in lv or bv plots) we observed such features (Fig. <ref> to <ref>), in particular toward the sources located in and around the filaments (see <ref>). To further investigate the features around these H II regions, we made pv maps along different slices (L3 to L12) shown in <ref>. These pv maps are constructed along the direction perpendicular to the filament axis or to the clumpy regions located at the edges of the H II regions. In all slices except L7 and L12 we observe a V-shaped velocity feature, while in slices L7 and L12 we only observe a velocity gradient. Overall, the lv and bv plots highlight the complex velocity structure around the H II regions and, in particular, their role in shaping the gas velocity structure around them. These results further illustrate multiple sites of gas compression due to H II regions in the extended NGC 6334 region.
§.§ Velocity dispersion in NGC 6334 extended region
Figure <ref> presents a histogram of the velocity dispersions (σ_v) determined from the Gaussian fits to the ^13CO line profiles toward the mapped region. The mean and median values of the distribution are 0.9 and 0.8 km s^-1, respectively. We then made maps for different velocity dispersion ranges, σ_v ≥ 2 km s^-1, 2.0 > σ_v ≥ 0.9 km s^-1 and σ_v < 0.9 km s^-1 (top to bottom panels in Figure <ref>). The gas with velocity dispersions lower than 0.9 km s^-1 (the mean value) represents the extended regions. At dispersions from 0.9 to 2 km s^-1, the emission mostly corresponds to the dense ridges along the filament, while three regions (EF1, the central ridge and GM24-SF2) exhibit velocity dispersions higher than 2 km s^-1. Except toward a few H II regions, we do not observe significantly higher velocity dispersions associated with the H II shells. This could mean that H II regions play a minor role in injecting turbulence into the large-scale gas distribution. In the NGC 6334 extended region, we found that the velocity dispersion is well correlated with the dense gas structure along the filament. This indicates that the origin of the velocity dispersion is likely related to the formation
of the filaments themselves. The large dispersions toward the NGC 6334 central ridge and the filament EF1, σ_v ≥ 2 km s^-1, may reflect global collapse of gas onto the filaments. Both of these regions show a broad V-shaped velocity structure in the lv-plot, which will be discussed in Section <ref>.
§ DISCUSSION
§.§ Velocity coherence and fluctuations
The NGC 6334 filamentary structure was studied in lower excitation lines of CO by <cit.>. Based on their position-velocity map, these authors reported a ∼50 pc sized velocity coherent structure between l = 350.4^∘ and 352.6^∘. In this study, with higher resolution data and an improved method for creating the position-velocity map by following the dense gas ridge along the filament, we investigated the velocity structure in the NGC 6334 extended region (l = 350.15^∘ to 352.65^∘) (see Section <ref>). We found that the extent of the filament over which we observe the velocity coherent structure is ∼80 pc in length, and thus longer than previously thought. The velocity gradient along the entire filament (∼80 pc) is much smaller than 1 km s^-1 pc^-1, which quantitatively illustrates its velocity coherence. Such coherence in large-scale structures is also found in simulations of galactic filaments (e.g., ). In a recent review of the filamentary ISM, <cit.> suggest that velocity coherence could simply be a necessary feature for the survival of large filaments, since larger gradients would otherwise lead to their rapid destruction.
We also observed smaller-scale velocity and intensity fluctuations (so-called `wiggles') along the filament (see Section <ref> and Figure <ref>). These fluctuations, or oscillatory features in the velocity centroid and intensity (with a phase shift), are thought to trace core-forming flows or local density enhancements in the filament (e.g., ).
§.§ Multiple gas compression, global collapse and infall
We studied both the localized and the large-scale velocity structure of the NGC 6334 filament using position-velocity diagrams (see Sect. <ref>). In addition to the commonly used, intensity-weighted position-velocity plots, we used longitude/latitude-velocity (lv or bv) plots based on the Gauss-fitted velocities from the ^13CO line profiles. We investigated the velocity structure toward the FIR sources in the central filament and also toward the H II sources located in the extended region. Toward all the FIR sources we observe V-shaped (or inverted V-shaped) velocity structures in the latitude-velocity plots (see Fig. <ref>). Toward the H II regions, the lv and bv plots exhibit a variety of complex velocity features (see Fig. <ref> to <ref>). A common feature is again the V-shaped (or inverted V-shaped) velocity structure, particularly toward sources that are located in or adjacent to the filament. The V-shaped velocity features are thought to trace gas compression due to propagating shock fronts or colliding flows (for example, ). The formation of molecular clouds after multiple compressions in interacting shells or bubbles has been proposed in theoretical studies (e.g., ). Our observations of multiple gas compression features toward H II shells/radii, as well as toward the FIR sources in the central ridge, support this scenario.
On a larger scale, we have identified broader V-shaped velocity structures toward the NGC 6334 central filament, the GM-24 region and EF1 (the G352 region) in the median velocity contours projected along the longitude direction (see Figure <ref> in Section <ref>). We discuss these three regions individually here.
§.§.§ NGC 6334 central filament:
The NGC 6334 central filament is approximately 10 pc long and runs almost parallel to the Galactic plane (see Figure <ref>). Three distinct properties of the gas velocity structure are observed toward this region.
First, the east and west sides of the central ridge exhibit bright emission at blue-shifted velocities with respect to the average velocity of the NGC 6334 central filament (-3.9 km s^-1). This is evident from the channel maps presented in Figure <ref> and <ref>. The case for a global collapse scenario has already been made by <cit.> for this filament. Our observations provide further evidence of globally collapsing gas in the NGC 6334 central ridge.
Second, we observed a broad inverted V-shaped structure in the lv-plot (see Figure <ref>). The base length of the V-shape is ∼6 pc (0.2 deg at 1.7 kpc) and it is located between FIR sources II and IV. Additionally, we observe the intensity fluctuation phase shifted with respect to the observed velocity structure (Figure <ref>, bottom panel). Such V-shapes indicate gas compression (due to collision or due to H I/H II bubbles) (e.g., ) or global infall/collapse if observed with phase-shifted intensity and velocity gradients (e.g., ). The velocity gradient inferred from the arms of the V-shape is 1.3 km s^-1 pc^-1. Assuming the observed velocity gradient is due to free fall and using the extent of the V-shape (R∼3 pc), we estimate a kinetic mass (M ≈ 2R^3∇ V^2/G; see Eqn. 1 in ) of the central filament of ∼2×10^4 M_⊙. This mass estimate from the free-fall assumption is consistent with the reported line mass of 1000 M_⊙ pc^-1 toward the central NGC 6334 filament by <cit.>.
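The quoted mass follows directly from the quantities in the text; a short order-of-magnitude check (assuming the free-fall interpretation of the velocity gradient) is:

# Order-of-magnitude check of M ~ 2 R^3 (grad V)^2 / G for the central
# filament, using R ~ 3 pc and grad V ~ 1.3 km/s/pc from the V-shape arms.
import astropy.units as u
from astropy.constants import G

R = 3.0 * u.pc
grad_v = 1.3 * u.km / u.s / u.pc
M = 2.0 * R**3 * grad_v**2 / G
print(M.to(u.Msun))   # ~2e4 solar masses, as quoted in the text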
Third, toward both longitudinal ends of the central filament, we observe two `bridge' structures (see Figure <ref>). The gas emission in the bridges corresponds to the bluer velocity component (-9.2 km s^-1) shown in Figure <ref>. From the ^12CO and ^13CO emission properties, it is clear that these are prominent dense gas structures, filamentary in nature, running almost perpendicular to the central ridge of the main filament. The same gas velocity component is also associated with the GM-24 region. From the central ridge, the eastern bridge feature in ^12CO (3-2) emission extends approximately 7 pc to the south, while the western bridge is about 15 pc long, extending both toward the south and the north from FIR source V. The southern extension of the western bridge from source V is ∼8 pc. Along the central ridge, both the -3.9 km s^-1 and the -9.2 km s^-1 velocity components coexist spatially. This hints at mixing of the gas due either to a collision or to a merger of the clouds.
<cit.> presented a cloud-cloud collision scenario in which the -20 km s^-1 filamentary cloud is in collision with the NGC 6334 filament. In fact, we also observe a feature at velocities [-25, -15] km s^-1 in ^12CO emission, but not in ^13CO. The emission morphology presented in Figure <ref> at this velocity range runs north-east of the central filament (with reference to source I[N]). This emission region extends from (351.5 deg, 0.8 deg) to (352.2 deg, 0.5 deg). We refer to this as the Northern Filament and have labeled it "NGC6334-NF" in the figure. This emission feature is also detected in CO (2-1) and CO (1-0) and is connected in the position-velocity map with the central NGC 6334 filament (see ). The detection of the higher excitation J = 3-2 lines of CO in this work indicates that it also contains relatively dense gas, but likely in a quiescent phase, since no ATLASGAL clumps are detected toward this region. According to <cit.>, the cloud-cloud collision scenario explains the observations of the "bridge" features. In addition, they postulate that the cloud collision is happening at different locations on different time scales, thereby giving rise to the different evolutionary phases of the NGC 6334 extended filament from west to east: the GM-24 region located in the west is in an evolved phase, the central ridge is in an active phase of star formation, and the filament to the east (G352 region) is in a quiescent phase. We suggest that the spatial and kinematic connection of the NGC6334-NF filament and the `bridge' features with the main gas velocity structure should be considered in explaining star formation in the NGC 6334 filamentary complex.
§.§.§ GM-24 region:
A broad inverted V-shape is observed toward the GM-24 region in the median velocity contour in the lv-plot (see Fig. <ref>). However, the GM-24 region contains gas emission at both -3.9 km s^-1 and -9.2 km s^-1. Therefore the broad V-shape in median velocity contours shown in Figure <ref> does not exactly illustrate the velocity structure.
<cit.> suggested a cloud-cloud collision scenario toward this region based on the complementary distribution of gas emission at two velocities (-10 and -6 km s^-1) and the V-shape observed in the position-velocity map of ^12CO 2-1 at a spatial resolution of 90". Our LAsMA observations with 20" spatial resolution do not confirm the complementary distribution of the gas. We find that the observed gas emission morphology and velocity structure toward the GM-24 region can be explained by the multiple H II bubbles and shell-like structures. In addition, we observe that the larger bubbles located south of the GM-24 region are shaping the filamentary gas emission structure.
§.§.§ Eastern Filament (EF1): a hub-filament in formation?
Filament EF1 is located around l = 352 deg and b = 0.7 deg (see Figure <ref> and <ref>). The filament corresponds to the dark lane, caused by dust absorption, observed in the infrared three color map in Figure <ref>. In addition, <cit.> have reported a velocity coherent filamentary structure (VCF47) toward this filament.
In the longitude-velocity plots, we observe a broad V-shape toward this filament. The velocity fluctuations are accompanied by intensity fluctuations with a phase shift (Fig. <ref>, bottom panel). The velocity gradient is also visible in the moment 1 map of ^13CO (Fig. <ref>). We suggest that this is consistent with a global collapse scenario in which gas is infalling toward the filament EF1. This filament harbours a few ATLASGAL clumps (see Fig. <ref>). The velocity gradient inferred from the arms of the V-shape is 1.3 km s^-1 pc^-1. Assuming a free-fall velocity corresponding to the observed velocity gradient and the extent of the V-shape (R∼2 pc), we estimate the kinetic mass (≈ 2R^3∇ V^2/G) of the filament to be ∼5×10^3 M_⊙. The channel maps in the [+1, +5] km s^-1 range also show multiple gas streams that are more clearly visible in ^12CO (Fig. <ref>). The global infall and filamentary gas streams in EF1 indicate that this region is a hub-filament system in formation.
§.§ Supersonic velocity dispersion
We observed supersonic velocity dispersions along the dense gas ridge of the NGC 6334 filament. At the spatial resolution of our observations (beam size of 20"), the Mach numbers corresponding to σ_v of 0.9 to 3.0 km s^-1 are 3 to 11, inferred from the sound speed at an average temperature of 20 K. Supersonic velocity dispersions are commonly observed toward giant filaments (see review by ). Whether the velocity dispersion in molecular clouds and filaments reflects a gravitational or a turbulent origin is under intense discussion in both theoretical and observational studies (e.g., ).
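The Mach numbers quoted above follow from the ratio of the observed velocity dispersion to the isothermal sound speed; a short check, assuming a mean molecular weight of 2.33 at 20 K, is given below.

# Sound speed at T = 20 K (mean molecular weight of 2.33 assumed here) and the
# resulting Mach numbers for the observed velocity dispersions.
import numpy as np
import astropy.units as u
from astropy.constants import k_B, m_p

T = 20.0 * u.K
mu = 2.33
c_s = np.sqrt(k_B * T / (mu * m_p)).to(u.km / u.s)   # ~0.27 km/s

for sigma_v in [0.9, 3.0] * u.km / u.s:
    print(sigma_v, (sigma_v / c_s).decompose())      # Mach ~3 and ~11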
Observationally, it is clear that determinations of velocity dispersions depend on the spatial resolution. Resolved clouds and filaments seem to exhibit dispersions close to the sound speed (e.g., ). In NGC 6334 itself, high resolution observations with ALMA have revealed subsonic and transonic velocity dispersions in filaments and cores (NGC 6334S: , NGC 6334 I[N] and I: ). <cit.> investigated relations between velocity dispersion, filament length and line mass and showed that these scaling relations are followed by various types of filaments, which indicates that at larger scales non-thermal motions govern the gas dynamics. Whether the non-thermal motions originate from a turbulence cascade, core-forming flows in the filament, global gravitational collapse or cloud collisions requires further investigation.
§.§ Role of H II regions in the star formation processes in NGC 6334 extended region
Various feedback processes act upon molecular clouds, providing mechanical and radiative energy input. Supernova explosions of dying high-mass stars (> 8 M_⊙), stellar winds, and stellar jets and outflows provide mechanical feedback, while ionizing and non-ionizing radiation provide radiative energy input (). Toward the NGC 6334 region, only one supernova remnant (G351.7+0.8) has been detected (), located north-east of the central filament. However, based on optical extinction data, the distance of this supernova remnant was found to be 3.4 kpc (), twice the accepted distance to NGC 6334
(∼ 1.7 kpc; ). A large number of OB stars, and the H II regions/bubbles created by them, drive the feedback in the NGC 6334 extended region.
As shown in Figure <ref>, the OB stars are clustered in the GM-24 region, while they form an association in the NGC 6334 central region (see ). A total of 42 H II regions/bubbles, visually identified in the mid-infrared, are found in the extended region, with sizes varying from 0.2 to 12 pc (). At least eight H II regions, associated with ongoing star formation, are found in and around the NGC 6334 central ridge (see ). The GM-24 region has bubbles within bubbles (see Figure <ref>). Only a few H II regions, mostly of smaller size, are found toward the eastern filaments (near EF1, EF2 and G352.5, see Figure <ref>, <ref>).
Of the 42 H II regions from <cit.>, most (40 of 42) show gas velocities derived from ^13CO that are consistent with the velocity of the bulk gas emission (see Table <ref>). This confirms that they are indeed part of the NGC 6334 region. From a visual inspection of the channel maps of the CO emission structure around the H II regions, as well as from the quantitative methods, we find that a significant fraction (>80%) of the H II regions have an impact on the surrounding gas. Impact here means a clear shell-like/arc-like structure in the emission morphology and/or high contrast values in the CO intensities at the bubble edges. Visually, clear signatures of H II bubbles interacting with the filamentary structure are seen toward the GM-24 region, in which filaments extending west from the central ridge pass through the edges of the bubbles (see Figure <ref> and <ref>). We suggest that these bubbles, typically larger in size, have induced the accumulation of gas in the filament and have reached pressure equilibrium with the environment. <cit.> have reported that the maximum size of an H II region is set by pressure equilibrium with the ambient ISM, consistent with our interpretation. Similar to our findings, the role of H II bubbles in the formation of a molecular filament has also been suggested for the case of RCW 120 ().
A total of 167 ATLASGAL clumps are found in the extended NGC 6334 region, most of which are embedded in the filamentary structure. We examined their locations to investigate possible associations with the H II bubbles. A total of 56 (of 167) ATLASGAL clumps are located on the edges of the H II bubbles (0.8-1.2× R_H II). We also found that 26 of 42 H II regions have at least one ATLASGAL clump located at the bubble radius. This result hints that H II regions may play an important role in the formation of the filaments in which the ATLASGAL clumps are embedded. The important role of H II regions in the formation of stars at their edges/shells has been highlighted previously (see also ). Cases of positive feedback from H II bubbles are reported towards multiple star forming regions; for example, a `collect and collapse' scenario of triggered star formation toward the G305 complex is reported by <cit.>.
§ SUMMARY
We conducted observations of the ^13CO and ^12CO 3→2 molecular lines toward the extended NGC 6334 filament using the LAsMA instrument on the APEX telescope, with a spectral resolution of 0.25 km s^-1 and a spatial resolution of ∼20 arcsec (0.16 pc at 1.7 kpc). In this paper, we focused on studying the emission morphology and velocity structure of the gas in the filament traced by carbon monoxide. The results are summarized here:
* The CO traced gas in the NGC 6334 extended region is filamentary, and extends over 80 pc parallel to the Galactic plane. The central NGC 6334 filament exhibits bright CO emission tracing the dense gas reservoir that extends over 10 pc scale. While ^13CO traces denser regions, ^12CO exhibits a more extended emission morphology.
* We fitted the ^13CO line profiles using an automated Gaussian fitting algorithm. Multiple velocity components were required to fit the observed line profiles toward the denser regions, indicating the complex gas velocity structure of the filament. Overall, NGC 6334 exhibits two distinct gas components at velocities -3.9 and -9.2 km s^-1. A third component at ∼-20 km s^-1 is kinematically connected to the extended region.
* We observed velocity and intensity fluctuations (so called `wiggles') along the dense ridge of the filament. We suggest that such fluctuations are likely to be associated with the local density enhancements and gravitational infall onto the filament.
* We found that the dense gas along the filament shows velocity dispersions of 2 > σ_v > 0.9 km s^-1. Higher velocity dispersions (> 2 km s^-1) were observed toward the NGC 6334 central filament and an eastern filament (EF1). The velocity dispersion in the NGC 6334 filament is supersonic at the spatial resolution of our observations.
* We investigated the molecular gas structure around the infrared H II regions identified by <cit.> using azimuthally averaged radial ^13CO intensity profiles and measured the line intensity enhancement using the contrast method. Toward most H II regions, we detected molecular line emission and reported the systemic velocities. We found 36 of 42 H II regions to show the signature of molecular gas clearance from the center. These sources exhibit little emission or a flat emission profile toward the central region with an intensity increasing outward or a bumpy feature near or at the H II radii. Using a contrast measurement method, we found intensity enhancements toward 34 of 42 H II regions. In addition, we found that many H II regions (26 of 42) have at least one ATLASGAL clump located at their shell radii.
* A visually clear signature of H II bubble shells shaping the filamentary structure is observed, in particular toward the GM-24 region, in which filaments extending west from the central ridge are located at the edges of the bubbles. We suggest that these bubbles, typically evolved and larger in size, have assisted the formation of the gas filaments by accumulating gas at their edges and have reached pressure equilibrium with the environment.
* We investigated the gas velocity structure around six FIR sources (I[N], I to V) in the central NGC 6334 filament using position velocity diagrams and found evidence for gas compression toward all of them.
In addition, we studied the gas velocity structure around the H II regions using lv and bv plots, in which the velocities were obtained from Gaussian fits to the ^13CO line profiles. Toward a third of the H II regions we observed V-shaped velocity features, indicating multiple sites of gas compression in the NGC 6334 extended filament.
* In the longitude-velocity (lv) plot for the entire mapped region (Figure <ref>), we observed broad V-shaped (or inverted V-shaped) velocity structures toward the NGC 6334 central filament and the eastern filament (l∼352.1 deg).
Toward the NGC 6334 central filament and the eastern filament EF1 (l∼352.1 deg), the velocity gradients inferred from the arms of the V-shapes are consistent with a global infall scenario.
We conclude that the eastern filament EF1 (l∼352.1 deg) is a hub-filament system in formation.
* Finally, we studied the kinematic connection of the `bridge' features and the northern filament (NGC 6334-NF) to the main gas component in NGC 6334. We found that NGC 6334-NF contains relatively quiescent gas and does not harbour any star-forming clumps traced by submillimeter dust emission. We suggest that the `bridge' features are possibly linked to a cloud-cloud collision between NGC 6334-NF and the NGC 6334 main filament.
In summary, our observations revealed a complex gas velocity structure in the NGC 6334 filament, which extends over ∼80 pc. Located in the west, the GM24 region exhibits bubbles within bubbles and is at a relatively evolved stage of star formation. The NGC 6334 central ridge is undergoing global gas infall and exhibits two gas `bridge' features, possibly indicating a cloud-cloud collision between NGC 6334-NF and the NGC 6334 main gas component. The relatively quiescent eastern filament (EF1 - G352.1) also shows the kinematic signature of global gas infall onto the filament. We detected molecular emission around most infrared H II regions and found that most of them have already cleared the molecular gas from their centers and that many have shell/ring-like molecular structures around them. We also observed multiple gas compression signatures around the H II regions, highlighting their important role in shaping the gas emission and velocity structure in the NGC 6334 extended region and in the overall evolution of this star forming complex.
§ DATA AVAILABILITY
The supplementary materials (figures and tables of the Appendix section) are available online at: <https://doi.org/10.5281/zenodo.13642114>.
We thank the anonymous referee for the constructive feedback that helped to improve the manuscript. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under program ID M-0107.F-9518A-2021. APEX has been a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. This work was supported by the Collaborative Research Centre (CRC) 956, sub-project A6, and CRC 1601, sub-project B1, funded by the Deutsche Forschungsgemeinschaft (DFG). G.G acknowledges support by the ANID BASAL project FB210003.
§ RMS NOISE AND GAUSSIAN COMPONENTS: ^13CO (3-2)
§ PV MAPS TOWARD SELECTED REGIONS AND FIR SOURCES IN THE NGC 6334 CENTRAL FILAMENT
§ CHANNEL MAPS OF SELECTED REGIONS
§ ^13CO EMISSION MAPS OF H II REGIONS
§ RADIAL PROFILES AND CONTRAST PARAMETER
§ LONGITUDE-VELOCITY (LV) AND LATITUDE-VELOCITY (BV) PLOTS
arXiv:2409.03162v1 (5 September 2024): Low-phase-noise surface acoustic wave oscillator using phononic crystal bandgap-edge mode
Authors: Zichen Xi, Joseph G. Thomas, Jun Ji, Dongyao Wang, Zengyu Cen, Ivan I. Kravchenko, Bernadeta R. Srijanto, Yu Yao, Yizheng Zhu, Linbo Shao
Categories: physics.app-ph
Affiliations: Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA; Center for Quantum Information Science and Engineering (VTQ), Virginia Tech, Blacksburg, VA 24061, USA; Center for Photonics Technology, Virginia Tech, Blacksburg, VA 24061, USA; School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281, USA; Center for Photonic Innovation, Arizona State University, Tempe, AZ 85281, USA; Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
Contact: [email protected]
§ ABSTRACT
Low-phase-noise microwave-frequency integrated oscillators provide compact solutions for various applications in signal processing, communications, and sensing. Surface acoustic waves (SAW), featuring orders-of-magnitude shorter wavelength than electromagnetic waves at the same frequency, enable integrated microwave-frequency systems with much smaller footprint on chip. SAW devices also allow higher quality (Q) factors than electronic components at room temperature. Here, we demonstrate a low-phase-noise gigahertz-frequency SAW oscillator on 128°Y-cut lithium niobate, where the SAW resonator occupies a footprint of 0.05 mm^2. Leveraging phononic crystal bandgap-edge modes to balance between Q factors and insertion losses, our 1-GHz SAW oscillator features a low phase noise of -132.5 dBc/Hz at a 10 kHz offset frequency and an overlapping Hadamard deviation of 6.5×10^-10 at an analysis time of 64 ms. The SAW resonator-based oscillator holds high potential in developing low-noise sensors and acousto-optic integrated circuits.
Low-phase-noise surface acoustic wave oscillator
using phononic crystal bandgap-edge mode
Linbo Shao
September 9, 2024
=============================================================================================
§ INTRODUCTION
Oscillators, which generate periodically alternating signals with a stable frequency and phase, play crucial roles in modern telecommunication, metrology, and sensing systems. Complying with constraints in size, weight, and power (SWaP), miniaturized microwave-frequency oscillators have been developed using integrated electronic circuits <cit.>, microwave photonics <cit.>, optomechanical devices <cit.>, chip-based atomic clocks <cit.> and acoustic-wave devices <cit.>. Recently, microwave integrated photonics has demonstrated chip-based oscillators with ultralow phase noise <cit.>, although the integration of all needed components, including lasers <cit.>, photodiodes <cit.>, optical resonators, optical amplifiers, and feedback control circuits, on a single chiplet remains challenging.
Meanwhile, integrated microwave acoustics <cit.> combined with electronic circuitry enables simpler oscillator architectures with competitive performance metrics. Compared to their electronic counterparts, acoustic-wave resonators <cit.> provide higher quality (Q) factors in smaller footprints, leveraging the orders-of-magnitude lower velocity of acoustic waves in solids compared to electromagnetic waves. Phase velocities of acoustic waves in solids are typically a few kilometers per second, resulting in feature sizes of hundreds of nanometers to microns for GHz-frequency acoustic-wave devices, which are friendly to nanofabrication in research facilities and industrial foundries. The mechanical nature of acoustic waves results in immunity to electromagnetic noise and crosstalk in a compact package. In addition, acoustic waves can be efficiently and bidirectionally transduced to electrical signals via the piezoelectric effect of materials such as lithium niobate (LN) <cit.>, quartz <cit.>, gallium nitride <cit.> and aluminum nitride <cit.>.
Acoustic-wave oscillators employ either acoustic-wave delay lines or resonators to achieve low phase noise. Generally, a longer phase delay and a lower insertion loss of the acoustic-wave device lead to a lower phase noise <cit.>. High-Q acoustic-wave resonators can provide equivalent delay times with a larger frequency spacing (free spectral range) between resonant modes and a smaller footprint than acoustic-wave delay lines. For example, the delay time of an acoustic-wave resonator with a Q factor of 2,000 at 1 GHz is equivalent to that of a 2-mm-long acoustic delay line on lithium niobate. Integrated acoustic-wave resonators in suspended <cit.>, thin-film-on-bulk <cit.>, and surface <cit.> acoustic-wave device architectures have been developed with frequencies from MHz to sub-THz <cit.>. Their Q factors range from several hundred to tens of thousands, with frequency-Q (fQ) products reaching 10^13 at room temperature. At cryogenic temperatures, a Q over ten billion (10^10) and an fQ over 10^20 have been achieved in a nanoacoustic resonator <cit.>.
Here, we demonstrate a microwave-frequency low-phase-noise surface acoustic wave (SAW) oscillator using a phononic crystal (PnC) resonator on the LN platform. We leverage a bandgap-edge mode of the PnC resonator to trade off Q factor against insertion loss and thereby achieve low phase noise. Our 1-GHz oscillator features an output power of 2.71 mW (4.33 dBm), a phase noise of -132.5 dBc/Hz at a 10-kHz offset frequency, and a minimum Hadamard deviation (long-term stability) of 6.5×10^-10 at an analysis time τ∼ 64 ms. The design of our oscillator could be scaled to 10 GHz assuming a 100-nm feature-size nanofabrication capability. SAW oscillators with small mode volumes are promising candidates for developing oscillator-based sensors.
§ DEVICE DESIGN PRINCIPLE AND FABRICATION
The phase noise, PN, of a resonator-based oscillator can be estimated by the Leeson’s formula <cit.>,
PN ≈ -174 - P_c + NF_LNA + IL + 20 log_10(f_0 / (2 f_m Q))
where PN is the phase noise in dBc/Hz, P_c is the carrier power at the amplifier output in dBm, IL is the insertion loss of the loop in dB, NF_LNA is the noise figure of the low-noise amplifier (LNA) in dB, f_0 is the oscillation frequency, f_m is the offset frequency, and Q is the quality factor of the resonator. P_c and NF_LNA are determined by the performance characteristics of the LNA. This work focuses on the acoustic resonator, which mainly determines the insertion loss IL, the resonant frequency f_0, and Q in Leeson's formula.
Towards low-phase-noise oscillation, Leeson's formula suggests the product of transmission T and Q as the figure of merit for the resonator. This motivates our use of the bandgap-edge mode of a PnC resonator. Compared to previous PnC resonant modes <cit.> located at frequencies deep in the bandgap, the bandgap-edge mode lies near the bandgap-edge frequencies. While the bandgap-edge mode is still a confined mode maintaining a high Q factor, acoustic waves at bandgap-edge frequencies can propagate deeper into the PnC mirror, resulting in a much higher external coupling efficiency, i.e., transmission T.
Our PnC resonator (Fig. <ref>) is fabricated on a 128°Y-cut LN substrate with the SAW propagating along the crystal X axis. Lithium niobate features large piezoelectricity, low acoustic-wave propagation loss at GHz frequencies, and mature nanofabrication processes <cit.>. The 128°Y-cut X-propagating configuration shows a high electromechanical coupling efficiency (k^2) and a low acoustic-wave diffraction loss. We use black LN to mitigate the pyroelectric issue during fabrication. We define the PnC resonator by a series of etched grooves (Fig. <ref>(a)), which are patterned by electron beam lithography (EBL) and etched by reactive ion etching (RIE) using argon gas. The target depth of the grooves is 100 nm. The metal layer of the interdigital transducers (IDTs) is patterned by EBL aligned to etched markers, which centers the electrodes between two etched grooves (Figs. <ref>(b) and <ref>(c)). We deposit 100 nm of aluminum using an electron beam evaporator followed by a lift-off process. To minimize the perturbation of the resonant mode by the electrodes, we choose aluminum over gold for its higher conductivity-to-density ratio.
We design the resonator by varying the period and width of the etched grooves in different regions (Fig. <ref>(a)). The center segment (labeled Segment A in Fig. <ref>(a)) has grooves with a period of 2 μm and gradually transitions to segments (labeled Segments B and C in Fig. <ref>(a)) with a period of 1.96 μm. Segments B and C further transition to the unetched free surface by tapering the width of the etched grooves. These gradual transitions reduce the scattering loss of acoustic waves into the substrate and improve the Q factors of the acoustic resonant modes. We make the number of grooves in Segment B smaller than that in Segment C to optimize the external coupling efficiency from the IDT to the resonator modes. The optimized numbers of periods in the PnC segments and tapers are 10 periods in Taper A, 60 periods in Segment B, 30 periods in Taper B, 10 periods in Segment A, and 100 periods in Segment C.
A pair of IDTs is used to excite and detect acoustic waves. The side IDT (labeled IDT 1 in Fig. <ref>(a)) is positioned outside the groove region and designed to closely match the external 50 Ω impedance for efficient conversion between the electrical and acoustic-wave domains. The central IDT (labeled IDT 2 in Fig. <ref>(a)) is positioned inside the groove region for effective coupling to the confined acoustic-wave mode. We place the electrodes of IDT 2 at the centers of the unetched regions between etched grooves to optimally overlap them with the acoustic mode profiles of interest. Due to the resonant enhancement, fewer electrodes are needed inside the Segment A region.
The simulated band structure diagram (Fig. <ref>(b)) shows a bandgap from 963 to 1,002 MHz (982 to 1,024 MHz) in Segment A (Segments B and C) structures. Upper modes of Segment A are within the bandgap of Segments B and C. Due to the short length of Segment A, only a few resonant modes can be formed on the upper band of Segment A.
§ CHARACTERIZATION OF SURFACE ACOUSTIC WAVE RESONATOR
We characterize our SAW resonator by measuring S parameter spectra (Figs. <ref>(c) and <ref>(d)). The S parameter measurement is calibrated to the IDT electrode contact pads. We perform numerical simulations of eigenmodes (Figs. <ref>(e)-(g)) using COMSOL Multiphysics and experimentally measure mode profiles (Figs. <ref>(h)-(j)) using our in-house optical vibrometer <cit.>, which features a detectable frequency up to 20 GHz and a displacement sensitivity of 0.1 pm.
Our SAW resonator supports three resonant modes. Modes 1 and 2 (marked by the two red dots in Fig. <ref>(b)) are well-confined modes within the bandgap of Segments B and C (Figs. <ref>(c), <ref>(e) and <ref>(f)). The bandgap-edge mode of interest, labeled Mode 3 (marked by the dark blue dot in Fig. <ref>(b)), is at the frequency of the upper bandgap edge of Segments B/C (Figs. <ref>(c), <ref>(d) and <ref>(g)). In the S parameter spectra measurements, we connect Ports 1 and 2 of a vector network analyzer (Keysight P5004A) to IDTs 1 and 2, respectively. We observe that within the bandgap frequency range from 984 MHz to 1,026 MHz (highlighted in light blue in Fig. <ref>(c)), the transmission S_21 is suppressed to the level of -50 dB. This bandgap frequency range is in good agreement with the simulated results (Fig. <ref>(b)).
The measured S parameter spectra (Fig. <ref>(c)) clearly show a transmission S_21 peak and a reflection S_22 dip at 1,005.59 MHz, corresponding to Mode 1 (the fundamental mode). Mode 1 has a measured loaded Q of ∼ 500 with a transmission S_21 of -39 dB (0.012%). The simulated eigenmode profile of Mode 1 (Fig. <ref>(e)) agrees with the measured displacement profile (Fig. <ref>(h)). We note that the optical vibrometer does not measure the eigenmode profiles, but displacement profiles excited by the side IDT 1 using a continuous microwave source at the corresponding resonant frequencies. Thus, large displacements near the side IDT 1 are observed.
The simulated eigenmode profile of Mode 2 (Fig. <ref>(f)) indicates that it is a second-order mode within the bandgap frequencies. Because Mode 2 has little overlap with the central IDT 2, it is not clearly observed in the S parameter spectra (Fig. <ref>(c)). On the other hand, when excited by the side IDT 1, Mode 2 is observed by our optical vibrometry (Fig. <ref>(i)).
The bandgap-edge mode (Mode 3) exhibits a transmission S_21 peak and a reflection S_22 dip at 1,025.98 MHz, which is near the upper bandgap-edge frequency of Segments B/C (Fig. <ref>(c)). The bandgap-edge mode has a higher measured loaded Q of ∼2,800 and a significantly higher transmission of -20 dB (1.0%) than Mode 1 [Fig. <ref>(d)]. The bandgap-edge mode is thus preferred for building a lower-phase-noise oscillator. While Modes 1 and 2 are confined in Segment A and its nearby taper regions, the measured profile of this bandgap-edge mode extends into Segments B and C (Figs. <ref>(g) and <ref>(j)). The profile of the bandgap-edge mode has two nodes near both boundaries of Segment A and shows a large field in the central IDT 2 region, which allows efficient coupling.
§ STABLE ACOUSTIC-WAVE OSCILLATION AND CHARACTERIZATION
We achieve SAW oscillation using a positive feedback loop (Fig. <ref>(a)), which consists of our acoustic resonator, an LNA (Mini-circuits, ZKL-33ULN-S+), a microwave attenuator, a phase shifter (RF-LAMBDA RFPSHT0002W1), and a coupler (Mini-circuits, ZFDC-10-5-S+). Self-oscillation occurs at a frequency where the loop phase delay is an integer multiple of 2π and the losses are fully compensated by the gain provided by the LNA.
We characterize our oscillator by measuring the output signals coupled out from the coupler. Four equipment configurations are employed (Insets (1)-(4) in Fig. <ref>(a)). A sinusoidal waveform (Fig. <ref>(b)) with an amplitude of 0.5 V and a period of ∼1 ns is captured by the oscilloscope (Rohde & Schwarz RTO6) (Inset (1) in Fig. <ref>(a)). The amplitude of the oscillator output signal is determined by the saturation power of the LNA and the coupling ratio of the coupler. Captured by a spectrum analyzer (Keysight, P5004A, spectrum analyzer mode), the frequency spectrum of the oscillator output (Fig. <ref>(c)) shows a maximum power of 4.33 dBm at 1,025.88 MHz, which matches the resonant frequency of the bandgap-edge mode. Higher-order harmonics are observed with a power of -16.9 dBc; these harmonics are due to the nonlinear saturation of the LNA.
We employ an in-phase/quadrature (I/Q) demodulator to characterize the phase noise of our oscillator. An ultralow-phase-noise microwave generator (Keysight N5183B with the low-phase-noise option) with a spec (typical) phase noise of -139 (-146) dBc/Hz (at a 1 GHz carrier frequency) at a 10 kHz offset is used as the reference local oscillator (Inset (3) in Fig. <ref>(a)). Experimentally, we tune the phase shifter to maximize the output power, which we find also minimizes the phase noise.
We observed that the phase noise was minimized after we introduced a 3 dB attenuator into the loop.
We suspect that the noise performance of the LNA degrades in the deep gain-saturation region, and that backing the LNA out of this region with the additional attenuation therefore improves the phase noise. Our oscillator reaches a phase noise of -132.5 dBc/Hz (at the ∼1,026 MHz carrier frequency) at a 10 kHz offset (Fig. <ref>(d)), which is comparable to commercial electronic oscillators but 20 to 30 dB worse (with the oscillation frequency normalized) than state-of-the-art integrated microwave photonic oscillators <cit.>. We note that an estimate from Leeson's formula, taking P_c=19 dBm, IL = 26.2 dB (SAW resonator: 20 dB, cables in the loop: 2 dB, attenuator: 3 dB, coupler: 1.2 dB), and NF_LNA = 0.5 dB in Eq. <ref>, suggests a phase noise of -141 dBc/Hz at a 10 kHz offset, which is lower than our measured result. We suspect that the difference between the measured and estimated phase noise is caused by an underestimation of the noise figure of the LNA in the gain-saturation region, where its noise figure could be significantly higher than the small-signal specification value.
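As a quick sanity check, Eq. <ref> can be evaluated numerically with the loop parameters quoted above; a minimal Python sketch (the function and variable names are ours) is:

import numpy as np

def leeson_phase_noise(P_c_dBm, NF_dB, IL_dB, f0_Hz, fm_Hz, Q):
    """Phase noise (dBc/Hz) of a resonator-based oscillator from Leeson's formula."""
    resonator_term = 20.0 * np.log10(f0_Hz / (2.0 * fm_Hz * Q))
    return -174.0 - P_c_dBm + NF_dB + IL_dB + resonator_term

# Loop parameters quoted in the text for the bandgap-edge-mode oscillator
pn = leeson_phase_noise(P_c_dBm=19.0, NF_dB=0.5, IL_dB=26.2,
                        f0_Hz=1.026e9, fm_Hz=10e3, Q=2800)
print(f"Estimated phase noise at 10 kHz offset: {pn:.1f} dBc/Hz")  # ~ -141 dBc/Hz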
We further characterize the frequency stability of our oscillator over time scales from 10 μs to 10,000 s (Fig. <ref>). The long-time measurement, up to 60,000 seconds (about 17 hours), uses a frequency counter (Keysight 53230A, 200 M points memory) (Inset (4) in Fig. <ref>(a)). We note that the results in Fig. <ref> were obtained in an open lab environment without any temperature feedback. The measured frequency shifts (Fig. <ref>(e)) track our lab temperature – the building air conditioning is turned off at night and back on in the morning. Due to the resolution limit of the frequency counter, the measurements from 10 μs to 100 s use the I/Q demodulation (Inset (3) in Fig. <ref>(a)). Constrained by the storage length (200 M points in total) of the oscilloscope, the sampling rates are adjusted accordingly for measurements with different time lengths. For the measurements with lengths of 0.1, 2, and 100 seconds, the I/Q sampling rates are set to 500, 25, and 0.5 MSa/s, respectively. The minimum overlapping Hadamard deviation of our oscillator is 6.5×10^-10 at an analysis time τ∼64 ms, demonstrating an outstanding frequency stability. There is a 150 Hz frequency modulation noise with a peak-to-peak amplitude of ∼40 Hz (Fig. <ref>(b)). This frequency modulation noise also causes the fluctuations in the range of 1∼20 ms in the overlapping Hadamard deviation (Fig. <ref>(a)). We suspect that these noises are introduced by the DC power supply, as we observed a voltage ripple peak at the same 150 Hz (see details in Appendix <ref>).
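For readers reproducing this stability analysis, the overlapping Hadamard deviation can be computed from the demodulated frequency samples with, for example, the open-source allantools package; the sketch below is illustrative only, and the frequency record, sampling rate, and carrier value are mock placeholders standing in for one of the I/Q records described above.

import numpy as np
import allantools  # pip install allantools

# Mock frequency record (Hz): in practice this would be the demodulated I/Q data
rate = 0.5e6  # samples per second for a 0.1 s record (assumed)
f_carrier = 1.02588e9
freq = f_carrier + np.random.normal(0.0, 0.5, 50_000)  # white-frequency-noise stand-in

# Fractional frequency relative to the ~1.026 GHz carrier
y = (freq - f_carrier) / f_carrier

# Overlapping Hadamard deviation versus analysis time tau
taus, hdev, hdev_err, ns = allantools.ohdev(y, rate=rate, data_type="freq", taus="octave")
for t, h in zip(taus, hdev):
    print(f"tau = {t:9.4g} s   overlapping Hadamard deviation = {h:.3g}")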
In addition, we characterize the temperature coefficient of frequency (TCF) of our device using a Peltier (thermoelectric) module placed under the chip, with the temperature ranging from 15 to 42 °C (Fig. <ref>). We compare the TCF of the oscillator with that of the passive SAW resonator. The frequency counter is used to measure the oscillating frequency, and the vector network analyzer is used to extract the mode resonant frequency by fitting the transmission peaks. We measured a TCF of -70 ppm/°C for both the passive resonator and the active oscillator. This TCF is at a similar level to those of other surface or thin-film acoustic-wave devices on the LN platform <cit.>. We note that LN has anisotropic temperature coefficients, and engineering of acoustic-wave modes on different LN crystal orientations could be performed to either reduce the TCF for stable oscillation or enhance the TCF for sensor development.
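The TCF quoted above is simply the fractional frequency shift per degree obtained from a linear fit of frequency versus temperature; a small sketch of that extraction is given below, where the data arrays are synthetic placeholders consistent with the reported -70 ppm/°C rather than the measured points.

import numpy as np

# Synthetic placeholder data standing in for the counter readings versus chip temperature
temperature_C = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 42.0])
f_25C = 1.02588e9  # Hz, oscillation frequency near room temperature
frequency_Hz = f_25C * (1.0 - 70e-6 * (temperature_C - 25.0))

# Linear fit of frequency vs. temperature; TCF = slope / f_0 in ppm/°C
slope, intercept = np.polyfit(temperature_C, frequency_Hz, 1)
tcf_ppm_per_C = slope / f_25C * 1e6
print(f"TCF = {tcf_ppm_per_C:.1f} ppm/°C")  # ≈ -70 ppm/°C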
§ CONCLUSION AND OUTLOOK
In conclusion, we demonstrate a low-phase-noise oscillator at 1 GHz based on a SAW resonator. Compared to previous SAW PnC resonators, our resonator features a higher TQ product using the bandgap-edge mode, resulting in the low phase noise of our oscillator. By integrating an on-chip thermal-acoustic phase modulator <cit.> and LNA dies, all components of the SAW oscillator could be compactly packaged together for SWaP-constrained applications. The oscillating frequency of our SAW oscillator is scalable by geometrically scaling the PnC resonator design. As LN is a versatile platform, our SAW oscillators could be integrated with other optical, electro-optic, and acousto-optic components to form large-scale multi-physics integrated circuits for applications in microwave signal processing, sensing, and THz technologies.
§ WAVEFORM AND SPECTRA OF DC POWER SUPPLY
Ripples from a DC power supply can couple into the oscillation and induce frequency noise through the LNA. The waveform (Fig. 6(a)) and frequency spectrum (Fig. 6(b)) of the DC power supply (Rigol 832A, 2.8 V input voltage with a 0.5 A current limit) are captured and calculated by an oscilloscope (Rohde & Schwarz RTO6). We observe several peaks in the spectrum: most are at the AC mains frequency (60 Hz) and its higher-order harmonics. The notable one is at 153 Hz, which is likely induced by the internal circuits of the DC power supply. We note that despite this 153 Hz noise, this DC power supply overall has the lowest ripple voltage among those available in our lab.
§ ACKNOWLEDGEMENT
We thank Prof. J. Walling for microwave instrumentation, Dr. S. Ghosh and Dr. M. Benoit for probe station in the cleanroom for quick tests. Device fabrication was conducted as part of a user project (CNMS2022-B-01473, CNMS2024-B-02643) at the Center for Nanophase Materials Sciences (CNMS), which is a DOE Office of Science User Facility. This work is supported by 4-VA Pre-Tenure Faculty Research Award, Virginia Tech FY23 ICTAS EFO Opportunity Seed Investment Grant, 2023 Ralph E. Powe Junior Faculty Enhancement Awards by Oak Ridge Associated Universities (ORAU), and the Defense Advanced Research Projects Agency (DARPA) OPTIM program under contract HR00112320031. The views and conclusions contained in this document are those of the authors and do not necessarily reflect the position or the policy of the Government. No official endorsement should be inferred. Approved for public release; distribution is unlimited.
§ AUTHOR CONTRIBUTIONS
Z.X. and L.S. designed, fabricated, and characterized the SAW oscillator with contributions from all other authors. J.G.T. and Y.Z. designed and built the in-house optical vibrometer, and measured mode displacement profiles. Y.Y., Z.C., and D.W. characterized the oscillator performance at ASU. I.I.K. and B.R.S. and J.J. contribute to nanofabrication and process optimization. L.S. performed the SEM imaging. Z.X. drafted the manuscript with revisions from all other authors. L.S. supervised the project.
|
http://arxiv.org/abs/2409.02298v1 | 20240903212219 | Photometric and kinematic studies of open clusters Ruprecht 1 and Ruprecht 171 | ["Hikmet Çakmak", "Talar Yontan", "Selçuk Bilir", "Timothy S. Banks", "Raúl Michel", "Esin Soydugan", "Seliz Koç", "Hülya Erçay"] | astro-ph.SR | ["astro-ph.SR", "astro-ph.GA"] |
1]H. Çakmak
2]T. Yontan
2]S. Bilir
3,4]T. S. Banks
5]R. Michel
6,7]E. Soydugan
8]S. Koç
8]H. Erçay
Çakmak et al.
[1]Faculty of Science, Department of Computer Sciences, Istanbul University, Istanbul, Türkiye
[2]Faculty of Science, Department of Astronomy and Space Sciences, Istanbul University, Istanbul, Türkiye
[3]Nielsen, 675 6th Ave., NYC, NY, USA
[4]Harper College, 1200 W Algonquin Rd, Palatine, Illinois, USA
[5]Universidad Nacional Autonoma de Mexico, Observatorio Astronomico Naciona, Ensenada, Mexico
[6]Faculty of Sciences, Department of Physics, Çanakkale Onsekiz Mart University, Çanakkale, Türkiye
[7]Astrophysics Research Center and Ulupınar Observatory, Çanakkale Onsekiz Mart University, Çanakkale, Türkiye
[8]Institute of Graduate Studies in Science, Istanbul University, Istanbul, Türkiye
*Hikmet Çakmak, Faculty of Science, Department of Computer Sciences, Istanbul, Türkiye. [email protected]
This study outlines a detailed investigation of the two open clusters Ruprecht 1 (Rup-1) and Ruprecht 171 (Rup-171) using CCD UBV and Gaia DR3 data sets. Fundamental astrophysical parameters such as color excesses, photometric metallicities, ages, and isochrone distances were based on UBV-data analyses, whereas membership probability calculations, structural and astrophysical parameters, as well as the kinematic analyses, were based on Gaia DR3 data. We identified 74 and 596 stars as the most probable cluster members with membership probabilities over 50% for Rup-1 and Rup-171, respectively. The color excesses E(B-V) were obtained as 0.166±0.022 and 0.301±0.027 mag for Rup-1 and Rup-171, respectively. Photometric metallicity analyses were performed by considering F-G type main-sequence member stars, yielding [Fe/H]=-0.09± 0.16 and [Fe/H]=-0.20± 0.20 dex for Rup-1 and Rup-171, respectively. Ages and distances were based on both UBV and Gaia-data analyses; according to isochrone fitting these values were estimated to be t=580±60 Myr, d=1469±57 pc for Rup-1 and t=2700±200 Myr, d=1509±69 pc for Rup-171. The present-day mass function slope was estimated as 1.26±0.32 for Rup-1 and 1.53±1.49 for Rup-171. Galactic orbit integration analyses showed that both clusters might have formed outside the solar circle.
Çakmak, H., Yontan, T., Bilir, S., Banks, T. S., Michel, R., Soydugan, E., Koç, S., and Erçay, H. (2024),
Photometric and kinematic studies of open clusters Ruprecht 1 and Ruprecht 171, Astronomische Nachrichten, 2024;00:1–19.
Photometric and kinematic studies of open clusters Ruprecht 1 and Ruprecht 171
H. Erçay
==============================================================================
§ INTRODUCTION
The study of open star clusters (OCs) in our Galaxy can offer valuable insights. OCs are loose groupings of stars, bound together by their (weak) self-gravitational force. OCs contain stars of similar age and composition making them, for example, excellent laboratories for studying stellar evolution. Metal abundance, distance, kinematics, and age can be estimated leading to Galactic OCs acting as tracers into the structure, formation, and evolution (in both chemistry and structure) of the Galactic disk <cit.>. The current paper is part of a wider project using a common methodology across detailed studies of OCs <cit.>, making detailed and careful analyses of otherwise neglected OCs and building towards a meta-analysis.
The high-precision astrometric, photometric, and spectroscopic data of the Gaia space mission provides a foundation for high-quality astrophysics research <cit.>. The astrometric data from this mission makes identification of the cluster members easier <cit.>. Many researchers have successfully performed membership analyses from the proper-motions and trigonometric parallaxes of Gaia <cit.>. Such clearly distinguished groups made up of cluster members supply cleaner color-magnitude and color-color diagrams, as well as allow more accurate calculations of the fundamental astrophysical parameters for the clusters under study.
The mass function of OCs highlights the diversity and dynamics of stellar populations. As groups of stars formed from the same molecular cloud that typically cover a wide range of stellar masses, OCs are useful tools to study present-day and initial mass functions. Various authors have investigated these functions for OCs, exploring whether the initial mass function is universal for all OCs or whether it is affected by star-forming processes <cit.>. The study of OCs also gives insight into dynamical evolution and mass segregation, and thus into the distribution of different stellar masses in the clusters (e.g., <cit.>).
§.§ Ruprecht 1
<cit.> presented the cluster Ruprecht 1 (α = 06:36:20.2, δ = -14:09:25, J2000), assigning it a <cit.> classification of `III 1 p', indicating a poorly populated detached cluster with no concentration, composed of less than 50 (then) observed stars having nearly the same apparent brightness. The identification chart of this cluster is shown in Fig. <ref>-a. <cit.> included the cluster in their catalog of astrophysical data for 520 Galactic OCs. The cluster was estimated to have an angular radius of 15, E(B-V)=0.15 mag, a distance of 1100 pc, and an age of 575 Myr. These values contrast with the results of <cit.>, who made CCD observations of the cluster using the Washington C and the Kron-Cousins R_ KC (in place of Washington T_1) bands. <cit.> estimated the reddening E(B-V) as 0.25 ± 0.05 mag, the apparent radius as 5^'.3 ± 0^'.4 (and hence the physical radius as ∼ 2.6 ± 0.2 pc), and provided upper and lower estimates for the distance and age by assuming first an upper limit of z=0.02 and a lower one of 0.008. The cluster distance was therefore estimated as between 1.9 ± 0.4 and 1.5 ± 0.3 kpc, and the cluster age as between 200 ± 47 and 251 ± 58 Myr. <cit.> fitted <cit.> models to 236 OCs listed in the catalog of <cit.>, and so estimated core and tidal radii as well as the tidal masses of the studied clusters. These authors derived Rup-1's core radius as 2.1 pc, the tidal radius as 4.6 pc, and the log cluster mass (in solar units) as 1.462. Subsequently <cit.> built off <cit.>, revising the tidal radius to 7.6 pc and the logarithmic cluster mass to 2.554 solar units. Later papers, such as <cit.>, included Ruprecht 1 in large scale analyses of many clusters, with <cit.> being the last in-depth study of the cluster. Table <ref> presents the key results of these studies and shows that there is still a spread in the estimates (with values often being copied over from earlier studies). Hence, as aimed for in this study, detailed analyses should be performed to clarify parameters for the cluster.
§.§ Ruprecht 171
<cit.> classified Ruprecht 171 (α =18:32:02.9, δ = -16:03:43, J2000) as `II 1 m', or a detached cluster with little noticeable concentration, with a medium number of stars (in the range 50 to 100 inclusive) of the same apparent brightness. The identification chart of this cluster is shown in Fig. <ref>-b. Many of the same catalog studies noted above for Ruprecht 1 included this system, as listed in Table <ref>. Again, we see the repetition of parameter estimates from catalog to catalog together with a scatter in the independent estimates. A key additional paper is that of <cit.>, who examined Gaia DR2 data accompanied by high-resolution optical spectra of seven red giant branch and red clump stars assessed to have a high probability for cluster membership. The estimates for distance and [Fe/H] were in reasonable agreement with those of <cit.> due to the large estimated uncertainties, as are those of <cit.>. However, reddening varies substantially across the literature estimates, with even recent outliers for age <cit.>. As noted for Ruprecht 1, this cluster seems to be in need of additional study.
The current paper explores and characterizes two clusters, Ruprecht-1 (hereafter Rup-1) and Ruprecht-171 (hereafter Rup-171). In this study, CCD UBV photometric and Gaia DR3 astrometric, photometric, and spectroscopic data were used together for the first time to investigate Rup-1 and Rup-171. During the analyses, we considered two separate catalogs for each cluster: a UBV catalog that contained magnitude and color measurements (see later for details), and a Gaia catalog gathered from the Gaia DR3 database, which included the stars located within 25 arcmin of each cluster center together with their astrometric, photometric, and spectroscopic measurements. The membership probabilities of stars were calculated from the Gaia catalog. We then cross-matched the two catalogs, allowing the membership probabilities of the same stars in the UBV catalog to be determined. The UBV-based catalog was used to obtain fundamental astrophysical parameters such as the E(B-V) and E(U-B) color excesses, photometric metallicities [Fe/H], ages, and isochrone distances of the two OCs. The Gaia-based catalog was used in the estimation of structural and astrophysical parameters, as well as to investigate their astrometric, dynamic, and kinematic properties. In this study we investigated the two OCs in detail with the individual methods described in later sections. Hence, we aimed to obtain homogeneous results and reduce the uncertainties in the cluster parameters given in the literature.
The photometric and astrometric catalogs for Rup-1 and Rup-171.
Ruprecht 1
ID R.A. Decl. V U-B B-V G G_ BP-G_ RP μ_αcosδ μ_δ ϖ P
(hh:mm:ss.ss) (dd:mm:ss.ss) (mag) (mag) (mag) (mag) (mag) (mas yr^-1) (mas yr^-1) (mas)
001 06:36:04.84 -14:06:05.91 16.466(0.007) 0.189(0.013) 0.697(0.009) 16.285(0.003) 0.959(0.010) -0.322(0.043) 3.047(0.049) 0.268(0.051) 0.41
002 06:36:04.90 -14:08:11.89 19.292(0.031) 0.186(0.078) 0.812(0.046) 19.104(0.004) 1.041(0.042) -0.081(0.216) 1.296(0.253) -0.089(0.253) 0.24
003 06:36:05.21 -14:13:07.18 18.945(0.023) 0.798(0.103) 1.056(0.035) 18.647(0.003) 1.351(0.049) -0.652(0.164) 0.630(0.171) 0.334(0.161) 0.09
004 06:36:05.22 -14:12:01.84 20.212(0.060) —– 1.192(0.094) 19.773(0.005) 1.674(0.099) -1.516(0.338) 0.492(0.361) 0.116(0.346) 0.05
005 06:36:05.30 -14:09:53.34 15.459(0.006) 0.151(0.011) 0.732(0.010) 15.260(0.003) 0.977(0.005) -0.435(0.026) -0.618(0.029) 0.344(0.029) 0.45
... ... ... ... ... ... ... ... ... ... ... ...
182 06:36:37.25 -14:10:51.13 19.991(0.044) —– 0.804(0.062) 19.684(0.005) 1.107(0.092) 0.431(0.362) 0.723(0.396) 0.054(0.419) 0.03
183 06:36:37.28 -14:11:08.88 18.000(0.012) 1.168(0.073) 1.121(0.022) 17.569(0.003) 1.433(0.022) -2.095(0.086) -1.024(0.095) 0.543(0.108) 0.03
184 06:36:37.51 -14:07:44.27 20.105(0.048) —– 0.594(0.070) 19.815(0.005) 0.918(0.099) -0.088(0.366) -0.049(0.390) 1.518(0.437) 0.38
185 06:36:37.54 -14:08:57.51 18.656(0.019) 1.269(0.150) 1.213(0.029) 18.136(0.003) 1.630(0.032) 2.184(0.138) -4.449(0.172) 0.677(0.155) 0.00
186 06:36:37.56 -14:09:12.87 19.674(0.048) 0.454(0.127) 0.790(0.058) 19.316(0.004) 1.290(0.055) 1.332(0.256) -0.612(0.275) 0.057(0.301) 0.23
Ruprecht 171
ID R.A. Decl. V U-B B-V G G_ BP-G_ RP μ_αcosδ μ_δ ϖ P
(hh:mm:ss.ss) (dd:mm:ss.ss) (mag) (mag) (mag) (mag) (mag) (mas yr^-1) (mas yr^-1) (mas)
001 18:31:57.23 -16:08:43.07 18.455(0.036) 1.621(0.231) 1.329(0.049) 18.076(0.003) 1.673(0.030) 7.707(0.154) 1.291(0.128) 0.690(0.141) 1.00
002 18:31:57.44 -16:07:01.19 18.573(0.054) —– 1.360(0.077) 18.431(0.004) 1.779(0.041) 0.854(0.229) -1.378(0.186) 0.067(0.200) 0.07
003 18:31:57.56 -16:06:47.73 14.999(0.008) 0.157(0.016) 0.750(0.016) 14.793(0.003) 1.009(0.005) 7.696(0.031) 0.941(0.027) 0.633(0.027) 1.00
004 18:31:58.37 -16:08:59.33 15.393(0.010) 0.182(0.010) 0.662(0.012) 15.083(0.003) 1.028(0.005) 7.755(0.032) 0.998(0.027) 0.649(0.030) 1.00
005 18:31:58.38 -16:08:23.09 14.741(0.008) 0.228(0.011) 0.735(0.012) 14.482(0.003) 1.037(0.005) 0.558(0.027) -1.509(0.022) 0.685(0.025) 0.03
... ... ... ... ... ... ... ... ... ... ... ...
366 18:32:29.26 -16:03:26.12 17.544(0.016) 0.311(0.047) 1.065(0.035) 17.155(0.003) 1.293(0.013) 1.742(0.087) 0.401(0.072) 0.408(0.082) 0.13
367 18:32:29.41 -16:04:38.30 18.648(0.040) 0.877(0.169) 1.445(0.066) 18.134(0.004) 1.848(0.026) -4.376(0.275) -6.076(0.251) 0.657(0.262) 0.00
368 18:32:29.44 -16:04:16.85 16.773(0.039) —– 1.004(0.047) 16.370(0.003) 1.486(0.011) -0.466(0.074) -2.774(0.060) 0.247(0.074) 0.07
369 18:32:29.46 -16:08:55.69 17.784(0.015) 0.896(0.064) 1.176(0.023) 17.331(0.003) 1.474(0.017) -3.038(0.098) -1.982(0.081) 0.675(0.100) 0.00
370 18:32:29.47 -16:08:23.28 18.406(0.025) —– 1.902(0.051) 17.254(0.003) 2.458(0.028) 1.022(0.094) -2.644(0.079) 0.481(0.098) 0.17
§ OBSERVATIONS AND DATA REDUCTIONS
The observations of these two clusters were carried out at the San Pedro Martir Observatory,[<https://www.astrossp.unam.mx/en/users/telescopes/0-84m-telescope>] as part of an ongoing UBVRI photometric survey of Galactic stellar clusters started in September 2009. To date, 1,496 observations of 1,385 open clusters (http://www.astrosen.unam.mx/ rmm/SPMO_UBVRI_Survey/Clusters_Open.html) and 149 observations of 87 globular clusters (http://www.astrosen.unam.mx/ rmm/SPMO_UBVRI_Survey/Clusters_Globular.html) have been carried out. The publication of the details of this survey is in preparation by Raúl Michel. The 84 cm (f/15) Ritchey-Chretien telescope was employed in combination with the Mexman filter wheel.
Rup-1 was observed on 2016-11-07 with the Marconi 3 detector (a 2048 × 2048 13.5-μm square-pixels e2v CCD42-40 with a gain of 1.71 e^- ADU^-1 and a readout noise of 4.9 e^-, giving a field of view of about 7.6 × 7.6 arcmin^2). Short and long exposures were taken to properly measure both the bright and faint stars of the fields. Exposure times for I and R were 2, 20, 200s in duration; 4, 40, 400s for V; 6, 60, 600s for B; and 10, 100, 1000s for U.
Rup-171 was observed on 2013-06-09 with the ESOPO CCD detector (a 2048 × 4612 13.5-μm square-pixels e2v CCD42-90 with a gain of 1.83 e^- ADU^-1 and a readout noise of 4.7 e^- at the 2 × 2 binning employed, providing an unvignetted field of view of about 7.6 × 9.2 arcmin^2). Three different exposure times per filter were used, without any image stacking. Exposure times were 10, 50, 200s for both I and R; 10, 20, 200s for V; 10, 20, 300s for B; and 30, 60, 600s for U.
The observations were carried out under very photometric conditions. Landolt's standard stars <cit.> were also observed, at the meridian and at about two airmasses, to properly determine the atmospheric extinction coefficients. Flat fields were taken at the beginning and the end of each night, and bias images were obtained between cluster observations. Data reduction with point spread function (PSF) photometry was carried out by Raúl Michel with the IRAF/DAOPHOT packages <cit.>, employing the transformation equations recommended, in their Appendix B, by <cit.>.
§ DATA ANALYSIS
§.§ UBV Photometric Data
Data reduction and analyses resulted in UBV photometric catalogs of 186 and 370 stars for Rup-1 and Rup-171, respectively (Table <ref>). The coordinate solution for the targets was performed using the astrometry packages of IRAF. These catalogs contain equatorial coordinates, V-band magnitudes and U-B, B-V color indices, and relevant photometric errors of each detected star. V-band magnitudes of the stars are within the range 10<V<21.5 mag for Rup-1 and 11<V<21 mag for Rup-171.
To derive reliable astrophysical parameters from the UBV-based analyses, first, we derived the faint magnitude limit of the V-band. The distributions of the number of stars versus V magnitudes with 1 mag intervals were constructed (for each cluster) and are presented in the left panels of Fig. <ref>. It can be seen from Fig. <ref> that the number of stars increases up to V=19 mag and decreases after this limit. We concluded that the V=19 mag is the faint magnitude limit for both clusters. We used stars brighter than V=19 mag in further UBV-based analyses.
The number of stars within the ranges 17<V≤18 and 18<V≤19 mag is 79 and 88 (see Table <ref>), respectively, for Rup-171. Although the first decrease in the number of stars appears at V=18 mag, as seen in Fig. <ref>-c, the number of stars for these two ranges is very close to each other. Therefore, we chose V=19 mag as the faint magnitude limit for Rup-171.
The photometric uncertainties adopted as internal errors were those derived from PSF photometry. We calculated mean photometric errors of the V magnitudes, U-B, and B-V color indices as functions of V interval magnitudes. These are listed in the upper rows of Table <ref> for the two clusters. V-band errors at the faint magnitude limit (V=19) are 0.022 mag for Rup-1 and 0.043 mag for Rup-171. The mean errors reach up to 0.085 and 0.031 mag in U-B and B-V measurements for Rup-1 at V=19 mag, respectively. These values correspond to 0.191 and 0.098 mag for Rup-171.
§.§ Gaia Astrometric and Photometric Data
To perform membership analyses, derive visual extension, age, and distance as well as the kinematic properties of Rup-1 and Rup-171, we used the third data release of the Gaia <cit.> astrometric and photometric data. Gaia DR3 complements the early third data release of Gaia <cit.>, containing 585 million sources with five-parameter astrometric measurements such as equatorial coordinates (α, δ), proper-motion components (μ_αcosδ, μ_δ), and trigonometric parallaxes (ϖ) up to G=21 mag. New data in Gaia DR3 includes new estimates of mean radial velocities to a fainter limiting magnitude of G∼ 14 mag. The Gaia photometry presents three optical pass bands of G, G_ BP and G_ RP with 330-1950 nm, 330-680 nm, and 630-1050 nm wavelengths, respectively <cit.>.
In the study, we gathered Gaia DR3 astrometric, photometric, and spectroscopic data for all stars in the directions of the studied clusters for 25×25 arcmin regions about the clusters' centers. The central locations were taken from <cit.> (α=06^ h 36^ m 20^ s. 16, δ= -14^∘ 09^ ' 25^”. 20 for Rup-1 and α=18^ h 32^ m 02^ s. 87, δ= -16^ o 03^ ' 43^”. 20 for Rup-171). The identification charts of the 25 arcmin fields of view for the two clusters are shown in Fig. <ref>. The final Gaia catalog includes 21,149 and 362,080 stars within the 8<G<23 and 7<G<23 mag ranges for Rup-1 and Rup-171, respectively. When considering these counts it is worth remembering that Rup-171 is located along the Galactic plane.
To obtain precise results also in the Gaia-based analyses, we determined the faint magnitude limit of G-band through a similar approach as for the V-band magnitudes. We plotted histograms with 0.5 bin intervals of G and found that the number of stars decreases after G=20.5 mag for the two clusters (see the right-hand panels of Fig. <ref>). Hence, we considered this limit as a faint G magnitude limit and used the stars brighter than G=20.5 mag for further analyses. Additionally, to visualize our observational field of view with the field of 25'×25' in G bands, we constructed the histogram of stars detected in UBV bands (red histograms on the right-hand panels of Fig. <ref>). Because of a lack of observational data in our field of view we considered G=20.5 mag as the limiting magnitude for the Gaia-based analyses. The mean photometric errors were calculated (for the 25'×25' cluster regions), considering internal errors of G magnitudes, G_ BP-G_ RP and G-G_ RP colors as a function of interval G magnitude. The mean errors for Gaia photometry are listed in the bottom panel of Table <ref>. The mean G errors reach up to 0.012 mag and 0.016 mag, and G_ BP-G_ RP errors do not exceed 0.24 mag and 0.35 mag for the stars brighter than G=21 mag (which contains faint G limit) for Rup-1 and Rup-171, respectively. The mean G-G_ RP errors are 0.109 mag and 0.158 mag for relevant G ranges for Rup-1 and Rup-171, respectively.
§.§ Structural Parameters of the Clusters
Estimation of the structural parameters and visual sizes for the two clusters was based on Gaia DR3 data of a 25'×25' area centered on each of the clusters. To do this, we utilized radial density profile (RDP) analyses, taking into account the central coordinates presented by <cit.>. We divided the cluster areas into concentric rings, each representing a specific distance from the cluster's adopted center. The number of stars within each ring was then counted, and the stellar densities (ρ) were computed by dividing the star count by the ring's area. We plotted stellar densities against distance from the cluster center as shown in Fig. <ref>. We fitted <cit.> models to the RDPs via least-squares (χ^2) minimization. This allowed us to infer `optimal' estimates for the core, limiting, and effective radii of the two clusters. The <cit.> model is expressed as ρ(r)=f_ bg+[f_ 0/(1+(r/r_ c)^2)], where r is the radius from the cluster center, f_ bg the background density, f_ 0 the central density, and r_ c the core radius. The best-fitting <cit.> RDP for each cluster is represented by a black continuous line in Fig. <ref>. The estimates of central stellar density, core radius, and background stellar density are f_ 0=51.550± 3.132 stars arcmin^-2, r_ c=0.254± 0.016 arcmin, and f_ bg=7.573± 0.136 stars arcmin^-2 for Rup-1, respectively, and f_ 0=7.610± 0.973 stars arcmin^-2, r_ c=3.297± 0.920 arcmin, and f_ bg=148.411± 2.487 stars arcmin^-2 for Rup-171, respectively. Through visual examination of the RDP plots, we estimated the observable limiting radii of the two clusters. We adopted these radii as the points where the background density merges with the cluster density. Following this process we estimated the limiting radii as r=7' for Rup-1 and r=10' for Rup-171. Only stars within these limiting radii were included in the following Gaia-based analyses.
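For illustration, a minimal version of this King-profile fit could look as follows; the radial bins and densities below are mock placeholders (generated near the Rup-171 results) rather than the measured RDPs.

import numpy as np
from scipy.optimize import curve_fit

def king_1962(r, f_bg, f_0, r_c):
    """King (1962) radial density profile: background plus core term."""
    return f_bg + f_0 / (1.0 + (r / r_c) ** 2)

# Mock RDP: ring-center radii (arcmin) and stellar densities (stars arcmin^-2)
r = np.linspace(0.5, 12.0, 24)
rho = king_1962(r, 148.4, 7.6, 3.3) + np.random.normal(0.0, 0.5, r.size)
rho_err = np.full_like(r, 0.5)

# Least-squares fit with rough initial guesses for (f_bg, f_0, r_c)
popt, pcov = curve_fit(king_1962, r, rho, p0=[140.0, 5.0, 2.0], sigma=rho_err)
perr = np.sqrt(np.diag(pcov))
print("f_bg = {:.3f} ± {:.3f} stars arcmin^-2".format(popt[0], perr[0]))
print("f_0  = {:.3f} ± {:.3f} stars arcmin^-2".format(popt[1], perr[1]))
print("r_c  = {:.3f} ± {:.3f} arcmin".format(popt[2], perr[2]))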
§.§ Color-Magnitude Diagrams and Selection of Cluster Members
Field star contamination across our view of an OC affects the reliable estimation of fundamental parameters for the cluster. It is therefore necessary to separate cluster members from field stars. Thanks to the Gaia DR3 astrometric data, membership determination analyses give precise results. This leads to the cluster morphology being clearly distinguished on CMDs, allowing precise determinations of the parameters. In this study, we used the Unsupervised Photometric Membership Assignment in Stellar Cluster program <cit.> method to investigate the membership probabilities of stars in each cluster region. upmask is based on the principle that cluster stars share common features in proper-motion and trigonometric parallax space and have a region of concentration in equatorial coordinates. This method was previously used in many studies <cit.>. A detailed description can be found in <cit.>.
In the membership analyses, we used equatorial coordinates (α, δ), as well as the Gaia DR3 proper-motion components (μ_αcosδ, μ_δ) and trigonometric parallaxes (ϖ) with their uncertainties, as input parameters for all stars in the 25 arcmin regions of the Rup-1 and Rup-171 OCs. We ran 100 iterations of upmask for the two clusters, scaling these inputs to unit variance to determine membership probabilities (P). We considered the stars with membership probabilities over 0.5 as the most probable cluster members. Hence, for Rup-1 we identified 74 possible members brighter than G=20.5 mag that lie within the limiting radius (r≤7') and have membership probabilities P≥ 0.5. With a similar magnitude limit (G≤20.5) and membership criterion (P≥ 0.5), and a limiting radius of r≤10', we identified 596 possible members for Rup-171. <cit.> used Gaia DR2 data and determined 129 and 739 member stars with membership probabilities over 0.5 for Rup-1 and Rup-171, respectively. The Gaia DR3 data used for the membership analyses in this study have improved precision in position, trigonometric parallax, and proper-motion measurements, which could affect the membership analyses. In addition to requiring membership probabilities P ≥ 0.5, we considered only the stars within the clusters' limiting radii as possible members. These features can explain the differences in the number of member stars between this study and <cit.>. We used these stars in further analyses for the determination of mean astrometric and kinematic parameters, as well as the ages and distances of the two clusters. G× (G_ BP-G_ RP) CMDs of these stars within the aforementioned 25 arcmin fields are shown in the upper and lower panels of Fig. <ref> for Rup-1 and Rup-171, respectively. In order to perform UBV photometry-based analyses for the two clusters, the membership probability values calculated from the Gaia catalog were also applied to the same stars identified in the UBV catalog. For this purpose, the stars in the Gaia and UBV catalogs were cross-matched according to their coordinates, so that the membership probabilities of the same stars in the UBV catalog were determined.
Additionally, using the photometric criteria for the UBV data, we took into consideration the possible binary star contamination on the main-sequences of Rup-1 and Rup-171. We plotted the V× (B-V) CMDs and fitted the Zero Age Main-Sequence (ZAMS) of <cit.> as a blue and red envelope to these diagrams (see Fig. <ref>). The blue envelope of ZAMS was fitted through visual inspection, considering the most probable (P ≥ 0.5) member stars in the main sequence. For the red envelope ZAMS, the blue one was shifted by 0.75 mag towards brighter magnitudes to include the possible binary star contamination. Through this investigation, 36 and 115 stars remained as the most probable cluster members for UBV data in Rup-1 and Rup-171, respectively. These stars were used in further estimation of color excess, photometric metallicity as well as the derivation of UBV data-based age and isochrones distance for each cluster. V× (B-V) CMDs with the blue and red ZAMS envelopes, as well as the most probable and field stars, are shown in left panels of Fig. <ref> for Rup-1 and Rup-171.
Using membership probabilities and numbers of stars from the Gaia and UBV catalogs of each cluster, we prepared probability distributions as shown in Fig. <ref>. These figures show the number of stars as a function of membership probability. Panels (a) and (b) in the figures were constructed for each 25-arcmin cluster region (white histograms) and for stars inside the clusters' limiting radii (blue histograms), whereas panels (c) and (d) were plotted for the stars detected in the UBV observations (white histograms) as well as those lying within the ZAMS curves and the clusters' limiting radii (blue histograms). It can be seen from the right panels of Fig. <ref> that the membership probabilities of the cross-matched stars in the UBV catalogs are higher than 0.9. These stars were also used to obtain the mean proper-motion components and trigonometric parallaxes of both clusters. To assign the member stars in proper-motion space and investigate the bulk motion of the clusters, we plotted both vector-point diagrams (VPDs) and the projection of proper-motion vectors on the sky, which are presented in the left and right panels of Fig. <ref>, respectively. In both of the left panels of Fig. <ref> it can be seen that the most probable members (the color-scaled points) are concentrated in certain areas, allowing cluster stars to be distinguished from field stars (gray points). The right panels of Fig. <ref> indicate that the most probable members of each cluster have similar directions in the RA-DEC plane. The mean proper-motion component estimates are (μ_αcosδ, μ_δ)=(-0.287 ± 0.003, -0.903 ± 0.003) mas yr^-1 for Rup-1 and (μ_αcosδ, μ_δ) = (7.720 ± 0.002, 1.082 ± 0.002) mas yr^-1 for Rup-171. The intersections of the blue dashed lines in Fig. <ref> mark the mean values of the proper-motion components. The trigonometric parallaxes of the most probable member stars were used to calculate the mean trigonometric parallaxes and corresponding distances of the clusters. To perform these analyses we constructed histograms of trigonometric parallax versus number of stars and fitted a Gaussian to these distributions, as shown in Fig. <ref>. These distributions include the most probable members with probabilities P≥0.5 that lie inside the limiting radii of the clusters. From the Gaussian fits to these distributions, we obtained the mean trigonometric parallaxes of Rup-1 and Rup-171 as ϖ= 0.649± 0.027 mas and ϖ= 0.631 ± 0.042 mas, with corresponding distances d_ϖ=1541±64 pc and d_ϖ=1585±106 pc, respectively. The mean trigonometric parallax error was calculated from the statistical uncertainties in the Gaussian fitting process.
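A compact illustration of this last step is to histogram the member parallaxes, fit a Gaussian, and convert the mean parallax to a distance; the parallax values below are mock numbers drawn around the Rup-1 result, not the actual member list.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Mock parallaxes (mas) standing in for the most probable (P >= 0.5) members of Rup-1
plx = np.random.normal(0.649, 0.08, 74)

counts, edges = np.histogram(plx, bins=15)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, pcov = curve_fit(gaussian, centers, counts, p0=[counts.max(), plx.mean(), plx.std()])
mean_plx, mean_plx_err = popt[1], np.sqrt(pcov[1, 1])

distance_pc = 1000.0 / mean_plx  # parallax in mas -> distance in pc
print(f"mean parallax = {mean_plx:.3f} ± {mean_plx_err:.3f} mas")
print(f"distance      = {distance_pc:.0f} pc")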
§ ANALYSES OF THE UBV DATA
This section summarizes the procedures for the astrophysical analyses of Rup-1 and Rup-171. We used two-color diagrams (TCDs) to calculate the reddening and photometric metallicities separately. Keeping these two parameters as constants and using CMDs, we next obtained the distances and ages simultaneously <cit.>. Hence, we summarized the relevant analyses in this section.
§.§ Color Excess for the Two Open Clusters
To obtain the E(U-B) and E(B-V) color excesses in the direction of the two clusters we constructed (U-B)× (B-V) TCDs. These are shown in Fig. <ref> and are based on the most probable (P≥ 0.5) main-sequence stars. The intrinsic ZAMS of <cit.> for solar metallicity was fitted to the observational data, employing the relation E(U-B)=0.72 × E(B-V) + 0.05× E(B-V)^2 <cit.>. This process was performed with a least-squares (χ^2) method in steps of 0.001 mag. By comparing the ZAMS to the most probable main-sequence stars, we obtained best-fit color excesses corresponding to the minimum χ^2. These estimates are E(B-V)=0.166± 0.022 mag for Rup-1 and E(B-V)=0.301± 0.027 mag for Rup-171. The errors of the calculations were determined as ± 1σ deviations and are shown as the green lines in Fig. <ref>. Using the relation A_V/E(B-V)=3.1 <cit.>, we calculated the V-band absorption as A_V=0.511± 0.068 and A_V=0.933± 0.083 mag for Rup-1 and Rup-171, respectively.
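A schematic version of this χ^2 grid search is sketched below; the ZAMS table and the member photometry are crude placeholders, and only the reddening relation and the 0.001 mag step size follow the procedure described above.

import numpy as np

# Placeholder intrinsic ZAMS: (B-V)_0 and (U-B)_0 colors (stand-in values only)
zams_bv0 = np.linspace(-0.2, 1.4, 200)
zams_ub0 = np.interp(zams_bv0, [-0.2, 0.0, 0.3, 0.6, 1.0, 1.4],
                     [-0.70, 0.00, 0.03, 0.10, 0.85, 1.20])

# Placeholder member photometry: observed B-V and U-B colors with U-B errors
bv_obs = np.array([0.55, 0.70, 0.90])
ub_obs = np.array([0.15, 0.30, 0.55])
ub_err = np.full(3, 0.03)

best_ebv, best_chi2 = 0.0, np.inf
for ebv in np.arange(0.0, 1.0, 0.001):        # trial E(B-V) in 0.001 mag steps
    eub = 0.72 * ebv + 0.05 * ebv**2          # corresponding E(U-B)
    model_ub = np.interp(bv_obs - ebv, zams_bv0, zams_ub0) + eub
    chi2 = np.sum(((ub_obs - model_ub) / ub_err) ** 2)
    if chi2 < best_chi2:
        best_ebv, best_chi2 = ebv, chi2

print(f"best-fit E(B-V) = {best_ebv:.3f} mag, A_V = {3.1 * best_ebv:.3f} mag")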
In the study, three-dimensional (3D) reddening maps known as STructuring by Inversion the Local Interstellar Medium (Stilism)[https://stilism.obspm.fr/] were used to determine the color excess in the direction of two OCs. We used the 3D reddening map of <cit.>, which analyses stars within 2.5 kpc at about 23,000 sightlines. Using Stilism information, considering the Galactic coordinates of two OCs (l, b) and their mean distances calculated from trigonometric parallax measurements (d_ϖ), the color excesses for Rup-1 and Rup-171 were estimated as E(B-V)=0.198±0.096 and as E(B-V)=0.212±0.212 mag, respectively.
The comparison of the color excesses estimated from the UBV photometric data for the two OCs with the results in the literature (given in Table <ref>) is shown in Fig. <ref>. In the panels of Fig. <ref>, the black dots labelled with numbers represent the E(B-V) color excesses given in the literature as identified in Table <ref>, the red dots represent the color excesses calculated from the 3D reddening maps, and the blue lines and grey regions represent the E(B-V) color excesses and uncertainties calculated in this study. The color excess estimated for Rup-1 is in good agreement with the values (0.146 ≤ E(B-V) ≤ 0.197 mag) estimated by different authors <cit.>. For Rup-171 the estimated color excess is compatible with the results of <cit.> and <cit.>. In general, the E(B-V) color excess calculated for Rup-1 in this study is in good agreement with the results in the literature, while the color excess estimated for Rup-171 is close to the upper limit of the literature values (see Fig. <ref>).
§.§ Photometric Metallicities for the Two Open Clusters
We estimated the photometric metallicity of the studied clusters using the (U-B)_0×(B-V)_0 TCDs and employing the method of <cit.>. This methodology considers F and G spectral-type main-sequence stars and their UV-excesses. The (B-V)_0 color indices of these stars are in the range of 0.3≤ (B-V)_0≤0.6 mag <cit.>. Thus, we estimated intrinsic (B-V)_0 and (U-B)_0 color indices considering the color excesses derived above and selected the most probable F-G spectral type main-sequence stars inside the 0.3≤ (B-V)_0≤0.6 mag range. We determined UV-excesses (δ) for the selected stars. This is described as differences between the (U-B)_0 color indices of the selected cluster and Hyades main-sequence members with the same intrinsic (B-V)_0 color indices. Such differences are defined by the expression of δ =(U-B)_ 0,H-(U-B)_ 0,S, where H and S are the Hyades and cluster stars with the same (B-V)_0 color indices, respectively. By normalizing the UV-excess of the stars at (B-V)_0 = 0.6 mag we estimated the selected stars' normalized UV-excess (δ_0.6) values. For each cluster, we constructed the histogram of δ_0.6 and fitted a Gaussian to the resulting distribution to derive a mean δ_0.6 value. This was then used in the estimation of the photometric metallicity ([ Fe/H]) of the selected cluster. The equation of <cit.> used for metallicity calculations is given as follows:
[Fe/H] = -14.316(1.919) δ_0.6^2 - 3.557(0.285) δ_0.6 + 0.105(0.039)
Six F-G spectral type main-sequence stars in Rup-1 and 32 stars in Rup-171 were selected to derive the [Fe/H] values of these two clusters. The (U-B)_0×(B-V)_0 diagrams and the distributions of normalized δ_0.6 values of the selected stars for Rup-1 and Rup-171 are shown in Fig. <ref>. The peaks of the Gaussian fits to the normalized UV-excesses are δ_0.6=0.047±0.011 and δ_0.6=0.067±0.022 mag for Rup-1 and Rup-171, respectively. The uncertainty of the mean δ_0.6 was taken as the ±1σ (standard deviation) of the Gaussian fit. Taking into account the internal errors of the photometric metallicity calibration, the metallicities corresponding to the mean δ_0.6 values are calculated to be [Fe/H] = -0.09± 0.06 and [Fe/H] = -0.20± 0.13 dex for Rup-1 and Rup-171, respectively. Moreover, considering the uncertainties of the UBV data and the relevant color excesses, we determined external errors of 0.15 dex for both clusters. We combined these internal and external errors via error propagation. Hence, the final metallicities were determined as [Fe/H] = -0.09 ± 0.16 and [Fe/H] = -0.20 ± 0.20 dex for Rup-1 and Rup-171, respectively.
The calculated metallicities were transformed into the mass fraction z to help select which isochrones would be used in age estimation. We considered the analytic equation given in the studies of <cit.> and <cit.>. The equation is given as follows:
z = (z_x - 0.2485 × z_x) / (2.78 × z_x + 1)
Here, z is the mass fraction of elements heavier than helium and z_x is an intermediate quantity expressed by

z_x = 10^([Fe/H] + log(z_⊙ / (1 - 0.248 - 2.78 × z_⊙)))

where z_⊙ is the solar metallicity, adopted as 0.0152 <cit.>. We calculated z=0.012± 0.003 for Rup-1 and z=0.010± 0.004 for Rup-171.
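As a worked check of the two relations above, the short snippet below (the function names are ours) reproduces the quoted [Fe/H] and z values from the measured δ_0.6 peaks:

import numpy as np

def feh_from_delta(delta_06):
    """Photometric metallicity from the normalized UV excess, using the calibration quoted above."""
    return -14.316 * delta_06**2 - 3.557 * delta_06 + 0.105

def z_from_feh(feh, z_sun=0.0152):
    """Convert [Fe/H] to the heavy-element mass fraction z used to select isochrones."""
    z_x = 10.0 ** (feh + np.log10(z_sun / (1.0 - 0.248 - 2.78 * z_sun)))
    return (z_x - 0.2485 * z_x) / (2.78 * z_x + 1.0)

for name, delta in [("Rup-1", 0.047), ("Rup-171", 0.067)]:
    feh = feh_from_delta(delta)
    print(f"{name}: [Fe/H] = {feh:+.2f} dex, z = {z_from_feh(feh):.3f}")
# Rup-1:   [Fe/H] ≈ -0.09 dex, z ≈ 0.012
# Rup-171: [Fe/H] ≈ -0.20 dex, z ≈ 0.010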
In the literature, the metallicity estimation of Rup-1 is based on the adoption of theoretical metal contents (see Table <ref>). The photometric metallicity calculated in the current study for Rup-1 matches well the value of <cit.>. <cit.> analyzed high-resolution hermes spectra of six red clump stars in Rup-171, measuring the metallicity of the cluster as [Fe/H]=-0.041±0.014 dex. <cit.> used the harps-n spectrograph <cit.> and acquired high-resolution (R ∼ 115,000) optical spectra for eight highly probable members of Rup-171, including two red giant branch (RGB) and six red clump (RC) stars. They applied two analysis methods: the first, Fast Automatic MOOG Analysis (fama), is based on the equivalent-width method, and the second is based on the analysis code rotfit <cit.>. Hence they reported two different metallicity values for each studied star. According to the fama and rotfit analyses, <cit.> found that the metallicities of the six clump stars are within -0.38≤[Fe/H]≤ 0.08 and -0.12≤[Fe/H]≤ 0.10 dex, respectively. They indicated that the RGB stars are more metal-poor than the RC stars and that there are residual differences between the metallicity values because of the physics of stellar evolution, such as atomic diffusion and mixing, or due to choices made during the spectroscopic analyses. Hence, they adopted the mean metallicity of the six RC stars derived from the rotfit analyses, [Fe/H]=0.09 ± 0.10 dex. Our metallicity estimate (-0.20 ± 0.20 dex) for Rup-171 is based on F-G type main-sequence stars, and it is more metal-poor than the literature studies. It is important to note that the increase in metallicities for giant stars is not due to a single factor but rather to a combination of various processes. Stellar nucleosynthesis, mass loss, and mixing processes with convective motion during the giant stage are more efficient compared to main-sequence stars, leading to a higher enrichment of metals in the outer layers <cit.>. For the reasons mentioned above, the use of metallicities calculated from main-sequence stars in OC age calculations may give more precise results.
§.§ Distance Moduli and Age Estimation
The distance moduli, distances, and ages of Rup-1 and Rup-171 were estimated by fitting the parsec isochrones of <cit.> to the UBV and Gaia based CMDs, as shown in Fig. <ref>. Selection of the parsec isochrones was made according to the mass fractions (z) derived above for the two clusters. A fitting procedure to the V× (U-B), V× (B-V), and G× (G_ BP-G_ RP) CMDs was applied by visual inspection, taking into account the most probable (P≥ 0.5) main-sequence, turn-off, and giant member stars present in the two studied clusters: the first step in the fitting process was to ensure that the isochrones best fitted the lower envelope of the most likely main-sequence stars; after this step, the isochrone ages that best represent the turn-off point of each cluster and the most likely stars in the giant region were determined. While determining the distance moduli and ages from the UBV data, we used the E(U-B) and E(B-V) color excesses obtained in this study. For the Gaia data, when we adopted the coefficient E(G_ BP-G_ RP)= 1.41× E(B-V) as given by <cit.>, we found that this value does not match the observations well enough to determine the distance moduli and ages for the two clusters. We achieved a better estimation of these two astrophysical parameters from the Gaia data using the coefficient E(G_ BP-G_ RP)= 1.29× E(B-V) as given by <cit.>. The errors for the distance moduli were calculated with the method of <cit.>. We estimated the uncertainty in the derived cluster ages by fitting two more isochrones that still fit the data sets acceptably but at the highest and lowest acceptable ages compared to the adopted mean age. The best-fit isochrones represent the estimated ages of the clusters, whereas the other two closely fitting isochrones, one younger and one older than the best-fit age, were used to estimate the uncertainties in the cluster ages. Thus, the errors for the ages contain visual inspection errors and do not contain the errors of the estimated distance moduli, color excesses, and metallicities.
The estimation of the distance modulus, distance, and age parameters for the two clusters are as follows:
* Rup-1: The best fit of the z=0.012± 0.003 scaled parsec isochrones with ages log(t)=8.68, 8.76, and 8.83 yr gave the apparent distance modulus and age of the cluster as μ_ V=11.346 ± 0.083 mag and t=580 ± 60 Myr, respectively. The best age and distance modulus solution in the UBV and Gaia photometry is shown in the upper panels of Fig. <ref>. By inserting the estimated distance modulus (μ_ V) and V-band absorption (A_ V) values into the distance modulus definition (μ_ V=5×log d-5 + A_ V), we calculated the distance of the cluster to be d_ iso=1469± 57 pc (a short numerical sketch of this conversion is given after this list). The age and distance determined in this study are in reasonable agreement with most of the results given by different researchers (see Table <ref>). The isochrone-fitting distance of the cluster agrees within the errors with the distance calculated above from the trigonometric parallaxes (d_ϖ=1541±64 pc, Sec. <ref>).
The best-fitting isochrone matches well the positions of the most probable members on the cluster's CMDs, except for the brightest star (upper panels of Fig. <ref>). According to the SIMBAD database, this star is classified as a double or multiple star named BD-14 1504. In our study, we determined the apparent V-band magnitude of BD-14 1504 as V=10.054 (corresponding to G=9.806 mag in Gaia DR3 data) with a membership probability of P=1. The star is located at a distance of 2^'.6 from the center of the cluster. Moreover, its Gaia DR3 proper-motion components (μ_αcosδ, μ_δ=-0.416 ± 0.027, -0.937 ± 0.030 mas yr^-1) and trigonometric parallax (ϖ=0.598±0.032 mas) are well matched with the mean values of these parameters for Rup-1. The astrometric evidence thus indicates that BD-14 1504 is a member of Rup-1. It is important to note that the apparent magnitude of a double or multiple-star system is influenced by the combined magnitudes of its components, the brightness ratio between the components, and the separation between them. These factors can result in variations of the observed magnitudes of the system over time <cit.>. These processes in double or multiple systems can potentially explain why the star BD-14 1504 does not lie on the age isochrones fitted to the cluster's CMDs in the current paper.
* Rup-171: The isochrones of log(t)=9.38, 9.43, and 9.48 yr with z=0.010± 0.004 were fitted on the UBV and Gaia based CMDs, as shown in the upper panels of Fig. <ref>. Based on this isochrone fitting, the distance modulus, distance, and age for Rup-171 are μ_ V=11.819 ± 0.098 mag, d_ iso=1509± 69 pc, and t=2700± 200 Myr, respectively. The age and distance values derived for the cluster are also in good agreement with most of the findings presented by earlier studies (see Table <ref>). The isochrone-based distance estimate also matches within error with the mean trigonometric parallax (d_ϖ=1585±106 pc, Sec. <ref>) calculated earlier in this study.
We investigated the Gaia-based CMD of the cluster and picked out the blue straggler stars (BSSs) by visual inspection. We identified four BSSs with probabilities over 0.6 within a radial distance of 5^' from Rup-171's center. The BSSs of the cluster are plotted inside the blue dot-dashed box shown in the lower right panel of Fig. <ref>. <cit.> investigated 1246 OCs with Gaia DR2 data and identified BSSs within these clusters. They classified seven stars as BSSs, four of them as possible BSS candidates. The G-band magnitudes of the four BSSs found in this study are within the 11<G<13 mag range, and they are in common with the stars confirmed as the cluster's BSSs in the study of <cit.>. When the four possible BSS candidates of <cit.> were examined with Gaia DR3 data, it was found that their magnitudes and color indices are within the ranges 14<G<14.5 and 0.8<(G_ BP-G_ RP)<1 mag, respectively, which places them at the most probable MS turn-off point, as can be seen in the lower right panel of Fig. <ref>. Hence, we concluded that these stars are not good BSS candidates.
§ KINEMATICS AND GALACTIC ORBIT PARAMETERS OF TWO OPEN CLUSTERS
We estimated the kinematical properties and Galactic orbital parameters of Rup-1 and Rup-171 using the MWPotential2014 potential model as implemented in galpy (the Galactic dynamics library) and described by <cit.>[See also https://galpy.readthedocs.io/en/v1.5.0/]. The MWPotential2014 model is a simplified representation of the Milky Way, assuming an axisymmetric and time-independent potential. It consists of a spherical bulge, a dark matter halo, and a Miyamoto-Nagai <cit.> disk potential. The spherical bulge represents the central mass distribution of the Milky Way and is defined by a power-law density profile with an exponential cut-off, as described by <cit.>:
ρ (r) = A ( r_ 1/r )^α exp[ -(r/r_ c)^2 ]
In this expression, r_ 1 represents the current reference radius, r_ c the cut-off radius, A the amplitude that is applied to the potential in mass density units, and α is the power-law index that determines the steepness of the density profile.
The disk potential describes the gravitational potential of a disk-like structure in Galactic dynamics as described by <cit.>, given as follows:
Φ_ disk (R_ gc, Z) = - G M_ d/√(R_ gc^2 + (a_ d + √(Z^2 + b_ d^2 ))^2)
where R_ gc describes the distance from the Galactic center, Z is the vertical distance from the Galactic plane, G is the gravitational constant, M_ d the mass of the Galactic disk, a_ d and b_ d are the scale height parameters of the disk.
The dark matter halo component is typically represented by Navarro-Frenk-White profile <cit.>, given as follows:
Φ _ halo (r) = - G M_ s/R_ gc ln(1+R_ gc/r_ s)
where M_ s presents the mass of the dark matter halo of the Milky Way and r_ s is its radius.
The input parameters needed to perform kinematic analyses and orbit integrations for the two clusters are the central equatorial coordinates (α, δ), mean proper-motion components (μ_αcosδ, μ_δ), and distances (d). The distances were taken from the isochrones fitting estimates made above in this study. Besides these input parameters, radial velocity data (V_ R) are also required for complete kinematic and orbit analyses. All the input parameters are listed in Table <ref> (on page tab:Final_table). The mean radial velocities for the two clusters were calculated using the most probable members as selected from the Gaia DR3 catalog, within the clusters' limiting radii. 13 stars in Rup-1 and 102 for Rup-171 had probabilities P≥ 0.5 and were considered in the mean radial velocity calculations. The estimation of mean radial velocities was based on the equations given by <cit.>. These use the weighted average of the data. We determined the mean radial velocities for Rup-1 and Rup-171 as V_ R= 10.37 ± 2.22 and V_ R= 5.32± 0.23 km s^-1, respectively. These results are within the error of the radial velocity results given by <cit.>, <cit.> and <cit.>. We adopted the galactocentric distance, circular velocity, and the distance from the Galactic plane of the Sun to be R_ gc=8 kpc, V_ rot=220 km s^-1 <cit.>, and 27± 4 pc <cit.>, respectively.
The orbits of Rup-1 and Rup-171 were integrated backward in time with 1 Myr steps, up to an age of 3 Gyr, from the clusters' present positions in the Galaxy. The output parameters estimated from the kinematic and orbit analyses are listed in Table <ref>, where R_ a and R_ p are the apogalactic and perigalactic distances, respectively, e is the eccentricity of the Galactic orbit, Z_ max is the maximum vertical distance from the Galactic plane, (U, V, W) are the space velocity components, and P_t is the orbital period. The space velocity components for Rup-1 were derived as (U, V, W) = (-3.48 ± 1.44, -10.02 ± 1.63, -6.21 ± 0.52) km s^-1, and for Rup-171 as (-6.39 ± 0.30, 31.75 ± 1.43, -45.71 ± 2.04) km s^-1. <cit.> considered Gaia DR2 astrometric data <cit.> and derived the space velocity components for Rup-1 as (U, V, W) = (-4.94 ± 2.63, -11.48 ± 2.53, -6.89 ± 0.63) km s^-1 and for Rup-171 as (U, V, W) = (-6.31 ± 0.22, 32.38 ± 0.17, -46.36 ± 0.22) km s^-1. These results are based on two cluster members in Rup-1 and 20 in Rup-171. Our findings for the space velocity components are compatible with the results of <cit.>.
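As an illustration of how such a backward orbit integration can be set up, the following minimal Python sketch uses galpy's MWPotential2014 and extracts the orbital parameters discussed above; all numerical inputs (coordinates, distance, proper motions, radial velocity, and the integration span) are placeholder values for illustration, not the adopted parameters of Rup-1 or Rup-171.

import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Placeholder observables: [RA (deg), Dec (deg), distance (kpc),
# pmRA*cos(Dec) (mas/yr), pmDec (mas/yr), radial velocity (km/s)].
obs = [100.0, -14.0, 1.47, -0.29, -0.90, 10.4]

# Orbit in the observational frame; R_gc = 8 kpc, V_rot = 220 km/s, z_Sun = 27 pc.
o = Orbit(obs, radec=True, ro=8.0, vo=220.0, zo=0.027)

# Integrate backward in time with 1 Myr steps over an assumed 580 Myr age.
ts = np.arange(0.0, -580.0, -1.0) * u.Myr
o.integrate(ts, MWPotential2014)

print("R_apo  [kpc]:", o.rap())      # apogalactic distance
print("R_peri [kpc]:", o.rperi())    # perigalactic distance
print("e           :", o.e())        # orbital eccentricity
print("Z_max  [kpc]:", o.zmax())     # maximum distance from the Galactic plane
print("R_birth[kpc]:", o.R(ts[-1]))  # galactocentric distance at the birth epoch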
In order to include a correction for the Local Standard of Rest (LSR) we used the space velocity components of <cit.>. These are (U, V, W) = (8.83 ± 0.24, 14.19 ± 0.34, 6.57 ± 0.21) km s^-1. Using these values we estimated the LSR corrected space velocity components ((U, V, W)_ LSR) as well as total space velocities (S_ LSR) for Rup-1 as S_ LSR=6.79± 2.28 and for Rup-171 as S_ LSR=60.40± 2.55 km s^-1 (see also Table <ref>). According to the study of <cit.>, it is emphasised that stars with a S_ LSR of less than 50 km s^-1 are members of the thin disk, while stars with a S_ LSR between 70 and 200 km s^-1 are members of the thick disk population. Accordingly, Rup-1 is a member of the young thin-disk population and Rup-171 is a member of the old thin-disk population.
We plotted the resulting orbits, as shown in Fig. <ref>. In the figure, the upper and lower left panels represent the side view of the orbits in the R_ gc× Z plane for Rup-1 and Rup-171, respectively <cit.>. The upper and lower right panels of Fig. <ref> show the distance from the Galactic center as a function of time in the R_ gc× t planes for each cluster. The birth and current locations of the two clusters are indicated by yellow-filled triangles and circles, respectively. Pink and green-dashed lines, as well as the relevant triangles, show the orbits and birth radii of the clusters for the upper and lower errors of the input parameters. The upper panels of Fig. <ref> show that Rup-1 formed outside the solar circle (R_ Birth=8.52± 0.09 kpc) and orbits entirely outside the solar circle. The lower panels of Fig. <ref> indicate that Rup-171 also formed outside the solar circle (R_ Birth=9.95± 0.05 kpc) but enters inside the solar circle during its orbital motion. Another factor affecting the birth position of OCs is the uncertainty in their ages. The uncertainties in the ages of the Rup-1 and Rup-171 OCs investigated in this study were determined as 60 and 200 Myr, respectively. Taking into account the uncertainties in the ages of the two OCs, the change in the birth positions of the clusters is shown by the grey regions in the right panels of Fig. <ref>. The dynamical orbital analyses show that, if the uncertainties in the cluster ages are considered, the birth positions can vary in the ranges 8.31≤ R_ gc≤ 10.42 kpc for Rup-1 and 6.53 ≤ R_ gc≤ 9.97 kpc for Rup-171. Considering the uncertainties in the cluster ages, it is determined that Rup-1 most likely formed outside the solar circle, whereas Rup-171 could have formed either inside or outside the solar circle.
In this study, the metal abundances of the two OCs and their distances from the Galactic centre, both today and at the time of their birth, are taken into account. For this purpose, we refer to <cit.>, who studied the metal abundances of OCs, calculated from the analysis of spectroscopic data of cluster member stars, as a function of the distances of these OCs from the Galactic centre. On the [Fe/H]× R_ gc diagram, the metal abundances of other OCs located at the same Galactocentric distances as the present-day positions of Rup-1 and Rup-171 lie in the metallicity intervals -0.25< [Fe/H] (dex)<0.13 and -0.09< [Fe/H] (dex)<0.30, respectively. Considering the metal abundances calculated for Rup-1 ([Fe/H]=-0.09±0.16 dex) and Rup-171 ([Fe/H]=-0.20±0.20 dex) in this study, Rup-1 is within the metallicity range of <cit.>, while Rup-171 is outside the expected metallicity range. Nevertheless, the agreement with the metallicity ranges of <cit.>, -0.21< [Fe/H] (dex)<0.18 and -0.31 < [Fe/H] (dex)< 0.09, is much better when the birth positions of the OCs Rup-1 (R_ Birth=8.52±0.09 kpc) and Rup-171 (R_ Birth=9.95±0.05 kpc) are considered.
§ LUMINOSITY AND PRESENT-DAY MASS FUNCTIONS
The luminosity function (LF) refers to the distribution of brightness of a group of stars. We used the Gaia DR3 photometric data to estimate the LF of each cluster. We selected the main-sequence stars with probabilities P>0.5 located within the limiting radii obtained in Section <ref>. The number of selected stars and their magnitude range are 72 and 11.3≤ G ≤ 20.5 mag for Rup-1; for Rup-171 the corresponding values are 533 stars and the 14.25≤ G ≤ 20.50 mag range. However, owing to possible binary star contamination of the cluster main sequences, it is not possible to detect all binary stars individually in the clusters. Hence, the stars used in the luminosity and present-day mass function analyses were treated as single stars. We derived absolute magnitudes M_ G from the apparent G magnitudes using the equation M_ G = G-5×log d +5-A_ G, where d is the distance derived in this study and A_G is the Gaia photometry-based extinction described by A_G=1.8626× E(G_ BP-G_ RP) <cit.> (where E(G_ BP-G_ RP) is the color excess obtained in Section <ref>). We plotted the LF distributions of the two clusters as shown in Fig. <ref>. The number of stars was calculated in 1.0 mag bins. It can be seen from Fig. <ref> that the absolute magnitude ranges lie within 0≤ M_ G≤ 9 mag for Rup-1 (panel a) and 2≤ M_ G≤ 9 mag for Rup-171 (panel b). From Fig. <ref>a it is concluded that Rup-1 retains its massive and low-mass stars because of its young age, whereas Fig. <ref>b shows that most of the massive stars of Rup-171 have evolved due to its old age.
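As a small illustration of this conversion and binning, the sketch below computes absolute magnitudes and an LF histogram for a hypothetical set of member stars; the apparent magnitudes, distance, and color excess used are placeholder values, not the measured ones.

import numpy as np

# Placeholder inputs: apparent G magnitudes of most probable members,
# cluster distance (pc), and Gaia color excess (mag).
G = np.random.uniform(11.0, 20.5, 200)
d_pc = 1500.0
E_BpRp = 0.21
A_G = 1.8626 * E_BpRp                        # Gaia-band extinction

# Extinction-corrected absolute magnitudes
M_G = G - 5.0 * np.log10(d_pc) + 5.0 - A_G

# Luminosity function: star counts in 1.0 mag bins
bins = np.arange(np.floor(M_G.min()), np.ceil(M_G.max()) + 1.0, 1.0)
counts, edges = np.histogram(M_G, bins=bins)
for lo, n in zip(edges[:-1], counts):
    print(f"{lo:+5.1f} <= M_G < {lo + 1.0:5.1f} : {n} stars")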
The present-day mass function (PDMF) provides information about the number density of stars per mass interval, and it is related to the LF. To derive the PDMFs, we considered the same stars selected in the LF analyses for each cluster. The LFs of Rup-1 and Rup-171 were converted into PDMFs with the aid of the parsec models <cit.> scaled to the mass fractions (z) and ages estimated in this study. Using these models, we expressed the absolute magnitude-mass relation as a high-degree polynomial between the M_G absolute magnitudes and masses of theoretical main-sequence stars. The derived relation was applied to the observed stars to transform their absolute M_ G magnitudes into masses. This resulted in the mass range of the main-sequence stars being estimated as 0.75≤ M/ M_⊙≤ 2.50 for Rup-1 and 0.75≤ M/ M_⊙≤ 1.50 for Rup-171. The stellar masses were sorted into 0.25 M_⊙ mass bins, and the logarithm of the number of stars within each bin was calculated for the two clusters. Then we estimated the slope of the mass function with a power law of the form given by <cit.>:
log(dN/dM)=-(1+Γ)×log(M)+ constant
Here dN symbolizes the number of stars in a mass bin dM, M represents the central mass of the relevant bin, and Γ is the slope of the function. The best-fit PDMFs are plotted in Fig. <ref>. The derived PDMF slopes are Γ = 1.26 ± 0.32 for Rup-1 and Γ = 1.53 ± 1.49 for Rup-171, which agree within the errors with the value of Γ=1.35 given by <cit.> and the value of Γ=1.30 provided by <cit.>. In addition, the total masses of the clusters (M_ tot) and the mean masses of the member stars (⟨ m ⟩) for Rup-1 and Rup-171 were calculated as 99 M_⊙ and 1.33 M_⊙, and as 623 M_⊙ and 1.05 M_⊙, respectively. Moreover, it is found that the uncertainties in the metal abundances of the two OCs can lead to a change of at most 0.05 M_⊙ in the stellar mass calculations. This has no direct impact on the determination of the mass functions of the two OCs analysed.
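To illustrate the slope estimation, the sketch below bins a hypothetical set of stellar masses into 0.25 M_⊙ intervals and fits the linear relation log(dN/dM) = -(1+Γ) log M + constant with numpy; the input masses are randomly generated placeholders, not the masses derived for Rup-1 or Rup-171.

import numpy as np

# Placeholder stellar masses (solar units) of the most probable main-sequence members
masses = np.random.uniform(0.75, 2.50, 300)

# 0.25 Msun mass bins and the differential counts dN/dM
bins = np.arange(0.75, 2.50 + 0.25, 0.25)
counts, edges = np.histogram(masses, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 0
dN_dM = counts[good] / np.diff(edges)[good]

# Linear fit of log(dN/dM) = -(1 + Gamma) * log(M) + const
coeffs, cov = np.polyfit(np.log10(centers[good]), np.log10(dN_dM), 1, cov=True)
gamma = -coeffs[0] - 1.0
gamma_err = np.sqrt(cov[0, 0])
print(f"PDMF slope Gamma = {gamma:.2f} +/- {gamma_err:.2f}")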
§ SUMMARY AND CONCLUSION
We present a comprehensive study of the two OCs Ruprecht 1 and Ruprecht 171 taking into account CCD UBV photometric as well as Gaia DR3 astrometric, photometric, and spectroscopic data. Analyses of fundamental astrophysical parameters were performed by using the UBV data, whereas the estimation of distances and ages, orbit integrations, and structural analyses were based on the Gaia DR3 data. The main results are listed in Table <ref> and summarized as follows:
* RDP analyses utilized the Gaia DR3 data gathered in 25 arcmin radii areas about the cluster centers. We fitted King profiles to the stellar densities, obtaining through visual inspection the limiting radius of Rup-1 as r_ lim=7' and for Rup-171 as r_ lim=10'. These values correspond to the limiting radii for Rup-1 and Rup-171 being 2.99 pc and 4.39 pc, respectively.
* The membership probability calculation was based on Gaia DR3 proper motion components, trigonometric parallaxes, and their uncertainties. We adopted as possible cluster members the stars with membership probabilities P≥ 0.5. To perform UBV data-based analyses, the membership probability values of the same stars in the Gaia DR3 and UBV catalog were cross-matched. We made a selection of the most probable member stars for these two catalogs separately:
* For the UBV data, we considered binary star contamination of the main-sequence stars that lie within the clusters' limiting radii. We fitted the intrinsic ZAMS to the V× (B-V) CMDs of the two clusters and shifted it by Δ V=0.75 mag towards brighter magnitudes. In addition to this criterion, for the UBV data we selected the stars with membership probabilities P≥ 0.5 that are brighter than the faint V-magnitude limit, and identified 36 and 115 most probable member stars for Rup-1 and Rup-171, respectively.
* For the Gaia DR3 data, we selected as the most probable members the stars located within the clusters' limiting radii that are brighter than the faint G-magnitude limit and have membership probabilities P≥ 0.5. Hence, for the Gaia DR3 data, we estimated the number of most probable member stars to be 74 and 596 for Rup-1 and Rup-171, respectively. Consequently, the UBV and Gaia data-based analyses were performed considering the member stars identified from the relevant catalog.
The number of most probable cluster stars differs between the UBV and Gaia samples. The limited field of view of the UBV photometric observations and/or the exposure times may influence the number of detected stars. To avoid the `loss' of stars that may be caused by these effects, and to improve parameter determinations such as the age, LF, and PDMF, we therefore also considered the Gaia data with its larger field of view.
* Mean proper-motion components for Rup-1 were calculated as (μ_αcosδ, μ_δ) = (-0.287 ± 0.003, -0.903 ± 0.003) mas yr^-1 and for Rup-171 as (μ_αcosδ, μ_δ) = (7.720 ± 0.002, 1.082 ± 0.002) mas yr^-1.
* The mean trigonometric parallax was derived for Rup-1 as ϖ_ Gaia= 0.649 ± 0.027 mas, and for Rup-171 as ϖ_ Gaia= 0.631 ± 0.042 mas. Using the linear equation of ϖ ( mas)=1000/d ( pc), we calculated trigonometric parallax-based distances (d_ϖ) for Rup-1 and Rup-171 as 1541 ± 64 pc and 1585 ± 106 pc, respectively.
* We identified the four most probable BSSs in Rup-171 within the 5 arcmin area from the cluster's center. Three of these stars were previously identified in the study of <cit.>.
* The color excesses and photometric metallicities of the two clusters were derived separately from (U-B)× (B-V) TCDs. The E(B-V) color excess and [Fe/H] photometric metallicity are 0.166 ± 0.022 mag and -0.09 ± 0.16 dex for Rup-1, respectively. These values correspond to 0.301 ± 0.027 mag and -0.20 ± 0.20 dex for Rup-171.
* The distance and age of the two clusters were estimated simultaneously on UBV and Gaia DR3 data-based CMDs. Keeping as constants the derived color excesses and metallicities, we estimated apparent distance moduli, distance, and age of Rup-1 as μ_ V=11.346± 0.083 mag, d=1469± 57 pc, and t=580± 60 Myr, respectively. Similarly μ_ V=11.819± 0.098 mag, d=1509±69 pc, and t=2700± 200 Myr were obtained for Rup-171. According to the Gaia DR3 data-based results, the best solution of E(G_ BP-G_ RP) was achieved when we consider the equation of E(G_ BP-G_ RP)= 1.29× E(B-V) of <cit.>.
* The results of the space velocities and Galactic orbital parameters indicated that Rup-1 belongs to the young thin-disk population, whereas Rup-171 is a member of the old thin-disk population. Also, we concluded that Rup-1 and Rup-171 formed outside the solar circle with birth radii of 8.52± 0.09 kpc and 9.95± 0.05 kpc, respectively, but only Rup-1 orbits entirely outside the solar circle. Considering the uncertainties in the cluster ages, it is determined that Rup-1 most likely formed outside the solar circle, whereas Rup-171 could have formed either inside or outside the solar circle.
* The PDMF slopes were found to be Γ=1.26± 0.32 and Γ=1.53 ± 1.49 for Rup-1 and Rup-171, respectively, which are in good agreement with the value of <cit.>. Also, the total masses of the clusters and the mean masses of the member stars for Rup-1 and Rup-171 were calculated as 99 M_⊙ and 1.33 M_⊙, and as 623 M_⊙ and 1.05 M_⊙, respectively.
The study of the OCs analysed in this paper with Gaia DR3 data and different filter sets minimised the degeneracy between the parameters by allowing the basic astrophysical parameters to be calculated with independent methods. Investigating a large number of OCs with the same method will contribute to the understanding of the Galactic structure and of the chemo-dynamic evolution of the Galactic disk.
§ ACKNOWLEDGMENTS
This study has been supported in part by the Scientific and Technological Research Council (TÜBİTAK) 122F109. The observations of this publication were made at the National Astronomical Observatory, San Pedro Mártir, Baja California, México, and the authors thank the staff of the Observatory for their assistance during these observations. The authors express their sincere gratitude to the anonymous referee for providing invaluable feedback and suggestions that have significantly enhanced the readability and overall quality of the paper. This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University, and also made use of NASA's Astrophysics Data System. The VizieR and Simbad databases at CDS, Strasbourg, France were invaluable for the project as were data from the European Space Agency (ESA) mission Gaia[https://www.cosmos.esa.int/gaia], processed by the Gaia Data Processing and Analysis Consortium (DPAC)[https://www.cosmos.esa.int/web/gaia/dpac/consortium]. Funding for DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. IRAF was distributed by the National Optical Astronomy Observatory, which was operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.
§.§ Author contributions
Conception/Design of study: Hikmet Çakmak, Selçuk Bilir, Talar Yontan, Timothy Banks;
Data Acquisition: Hikmet Çakmak, Seliz Koç, Hülya Erçay, Talar Yontan, Raúl Michel;
Data Analysis/Interpretation: Hikmet Çakmak, Selçuk Bilir, Talar Yontan, Timothy Banks, Raúl Michel, Seliz Koç, Hülya Erçay;
Drafting Manuscript: Selçuk Bilir, Talar Yontan, Hikmet Çakmak, Timothy Banks, Raúl Michel, Esin Soydugan;
Critical Revision of Manuscript: Hikmet Çakmak, Selçuk Bilir, Talar Yontan, Timothy Banks, Raúl Michel, Esin Soydugan;
Final Approval and Accountability: Selçuk Bilir, Talar Yontan, Hikmet Çakmak.
§.§ Financial disclosure
None reported.
§.§ Conflict of interest
The authors declare no potential conflict of interests.
|
http://arxiv.org/abs/2409.03182v1 | 20240905021629 | Cosmic ray north-south anisotropy: rigidity spectrum and solar cycle variations observed by ground-based muon detectors | [
"M. Kozai",
"Y. Hayashi",
"K. Fujii",
"K. Munakata",
"C. Kato",
"N. Miyashita",
"A. Kadokura",
"R. Kataoka",
"S. Miyake",
"M. L. Duldig",
"J. E. Humble",
"K. Iwai"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE",
"physics.space-ph"
] |
Corresponding author: Masayoshi Kozai, [email protected]
M. Kozai (ORCID 0000-0002-3948-3666): Polar Environment Data Science Center (PEDSC), Joint Support-Center for Data Science Research (ROIS-DS), Research Organization of Information and Systems, Tachikawa, Tokyo, Japan
Y. Hayashi (ORCID 0000-0002-0890-0607): Physics Department, Shinshu University, Matsumoto, Nagano, Japan
K. Fujii (ORCID 0000-0002-4315-6369): National Institute of Informatics (NII), Research Organization of Information and Systems, Chiyoda-ku, Tokyo, Japan
K. Munakata (ORCID 0000-0002-2131-4100): Physics Department, Shinshu University, Matsumoto, Nagano, Japan
C. Kato (ORCID 0000-0002-4913-8225): Physics Department, Shinshu University, Matsumoto, Nagano, Japan
N. Miyashita (ORCID 0009-0006-3569-7380): Physics Department, Shinshu University, Matsumoto, Nagano, Japan
A. Kadokura (ORCID 0000-0002-6105-9562): Polar Environment Data Science Center (PEDSC), Joint Support-Center for Data Science Research (ROIS-DS), Research Organization of Information and Systems, Tachikawa, Tokyo, Japan
R. Kataoka (ORCID 0000-0001-9400-1765): National Institute of Polar Research (NIPR), Research Organization of Information and Systems, Tachikawa, Tokyo, Japan
S. Miyake (ORCID 0000-0002-3067-655X): National Institute of Technology (KOSEN), Gifu College, Motosu, Gifu, Japan
M.L. Duldig (ORCID 0000-0001-7463-8267): School of Natural Sciences, University of Tasmania, Hobart, Tasmania, Australia
J.E. Humble (ORCID 0000-0002-4698-1671): School of Natural Sciences, University of Tasmania, Hobart, Tasmania, Australia
K. Iwai (ORCID 0000-0002-2464-5212): Institute for Space-Earth Environmental Research (ISEE), Nagoya University, Nagoya, Aichi, Japan
§ ABSTRACT
The north-south (NS) anisotropy of galactic cosmic rays (GCRs) is dominated by a diamagnetic drift flow of GCRs in the interplanetary magnetic field (IMF), allowing us to derive key parameters of cosmic-ray propagation, such as the density gradient and diffusion coefficient.
We propose a new method to analyze the rigidity spectrum of GCR anisotropy and reveal a solar cycle variation of the NS anisotropy's spectrum using ground-based muon detectors in Nagoya, Japan, and Hobart, Australia.
The physics-based correction method for the atmospheric temperature effect on muons is used to combine detectors at different sites, free from local atmospheric effects.
NS channel pairs in the multi-directional muon detectors are formed to enhance sensitivity to the NS anisotropy, and in this process, general graph matching in graph theory is introduced to survey optimized pairs.
Moreover, Bayesian estimation with the Gaussian process allows us to unfold the rigidity spectrum without supposing any analytical function for the spectral shape.
Thanks to these novel approaches, it has been discovered that the rigidity spectrum of the NS anisotropy is dynamically varying with solar activity every year.
It is attributed to a rigidity-dependent variation of the radial density gradient of GCRs based on the nature of the diamagnetic drift in the IMF.
The diffusion coefficient and mean-free-path length of GCRs as functions of the rigidity are also derived from the diffusion-convection flow balance.
This analysis expands the estimation limit of the mean-free-path length into the ≤200 GV rigidity region, from the <10 GV region achieved by solar energetic particle observations.
§ INTRODUCTION
The cosmic-ray intensity in the ∼0.1 to ∼100 GeV energy region shows a clear anti-correlation with the solar activity typically represented by the sunspot number, explicitly indicating that galactic cosmic rays (GCRs) are influenced by the heliospheric state, or solar modulation, through which they propagate before approaching Earth.
Measurements of GCR properties, including the intensity, energy spectrum, composition, anisotropy, and their temporal variations, on the ground and in space have been driving the elucidation of GCR propagation and the space environment.
The anisotropy, or momentum-space distribution of GCRs, is expected to be the next key to reveal solar modulation;
although it is theoretically proved to have a close relation with the interplanetary magnetic field (IMF) structure and GCR diffusion coefficients <cit.>, observed solar cycle variations of the anisotropy are nearly unexplored in quantitative reproductions by heliospheric simulations, unlike the GCR density variations.
Even in the broader areas of cosmic-ray physics, including galactic propagation, a consistent interpretation of the anisotropy with other properties has been an open question <cit.>.
Ground-based observations of cosmic rays measure secondary cosmic rays produced by GCR interactions with atmospheric nuclei and the subsequent reactions.
They feature long-term stability, improved statistics from large detection areas, and large angular acceptances by a worldwide network, all essential to study solar modulation phenomena consisting of diverse temporal variations from hourly (e.g., interplanetary shocks) to >10-year (solar cycles) scales.
Among secondary cosmic-ray species on the ground, muons have the highest flux (statistics) and a relatively accurate response of the incident angle to primary cosmic rays, because they directly originate from pion decays immediately following the atmospheric nuclear spallations by primary cosmic rays, typically around the upper edge of the troposphere.
These advantages make the ground-based muon detector a unique means for measuring the anisotropy whose magnitude is only ∼0.1% of GCR intensity.
However, anisotropy observation by muon detectors has suffered from an atmospheric temperature variation that perturbs muon counting rates.
Conventional methods <cit.> have corrected the temperature effect by subtracting muon counting rates of directional channels from each other to cancel out the effect.
The subtracted directional channels must be from the identical location where the temperature effect is expected to be similar.
Additionally, in some analyses, the statistics are degraded by this subtraction.
On the other hand, <cit.> proved the validity of a more physics-based approach in which the temperature effect is directly corrected in each directional channel by using meteorological reanalysis data.
Contrary to the conventional method, this correction method allows for a direct combination of observations from different sites free from the local temperature effect.
<cit.> applied this correction to derive the rigidity spectrum of the anisotropy in a solar eruption event, demonstrating the new temperature correction method for anisotropy studies.
However, there is still a problem in that power-law indices of the rigidity spectra of the derived anisotropy have unreasonably unstable fluctuations, especially in the case of a small anisotropy amplitude.
The power-law function assumed for reconstructing the rigidity spectrum in their study is likely too restrictive for the dynamically varying spectrum, causing this problem.
In this study, we propose a new method applying Bayesian estimation for deriving the rigidity spectrum.
It does not pre-suppose an analytical function such as the power-law function for the spectrum, while it uses the Gaussian process to confine the smoothness of the spectrum instead.
This high tolerance of varying spectral shapes enables us to trace the dynamic variation of the spectrum in solar modulation phenomena and provides a reliable analysis result.
Additionally, we introduce general graph matching <cit.> in graph theory to survey optimal channel combinations in the inter-station network, leveraging the advantage of the temperature correction method that allows us to combine multiple station data directly.
We demonstrate these analysis ideas by analyzing the rigidity spectrum of the north-south (NS) anisotropy observed by ground-based muon detectors in Nagoya, Japan, and Hobart, Australia.
The NS anisotropy, or NS flow of GCRs, shows a polarity reversal according to the local IMF sector assigned as Toward or Away each time <cit.>.
This phenomenon proves that the NS anisotropy is dominated by a diamagnetic drift flow of GCRs induced by a combination of the gyro-motion and radial (anti-sunward) density gradient of GCRs in the IMF.
Based on this mechanism, we derive rigidity-dependent variations of the density gradient and, subsequently, the radial diffusion coefficient of GCRs, which are essential parameters to elucidate solar modulation.
We also report a plan for the NS conjugate observation in the polar region, which will vastly improve the sensitivity of the muon detector network to the NS anisotropy.
§ METHOD
§.§ Preprocessing of muon counting rates
Table <ref> lists characteristics of the Nagoya and Hobart muon detectors used in this paper, compiled by <cit.>.
It summarizes the geographical latitude, longitude, and altitude of the observation sites (λ_D, ϕ_D, and alt.), the number of directional channels (ch-no.) each counting incident muons, the geomagnetic cutoff rigidity P_ cut, the hourly muon counting rate (cph), its statistical error σ (0.01%), the median rigidity P_ median of primary cosmic rays producing muons measured in each directional channel, and the asymptotic trajectory direction (λ_ asymp, ϕ_ asymp) of the primary cosmic rays with the median rigidity outside the magnetosphere.
Nagoya and Hobart stations started their observations in 1970 and 1992, respectively, and currently form a part of the Global Muon Detector Network (GMDN).
We analyze their muon counting rates from 1995 when both the muon detectors and the solar wind observation data used in this paper have sufficient duty cycles.
Archive data of the muon counting rates in GMDN are published in Shinshu University's institutional repository <cit.>.
Our processing of muon counting rates described below partially refers to the conventional analysis method of the NS anisotropy, known as Nagoya-GG <cit.>.
It uses muon counting rates I_i uncorrected for the temperature effect in Nagoya station and takes differences
GG = [ (I_ N2 - I_ S2) + (I_ N2 - I_ E2) ]/2
where the factor 2 is introduced in this paper for a quantitative comparison with our method.
The channel identifiers N2, S2, and E2 indicate secondary-inclined channels whose central viewing directions are north-, south-, and eastward, respectively, with a zenith angle of 49^∘.
The first NS differential term, I_ N2 - I_ S2 ensures a high response of Nagoya-GG to the NS anisotropy.
The second term I_ N2 - I_ E2 is introduced to cancel out the temperature effect remaining in the first term.
Nagoya-GG is proven to have sufficient sensitivity to the NS anisotropy and has contributed to revealing it <cit.>.
In our method, on the other hand, the atmospheric temperature effect on muons is corrected in each directional channel of Nagoya and Hobart detectors by a physics-based method using the meteorological reanalysis data, as described in Appendix <ref>.
The corrected muon counting rate I_i(t) for each directional channel i is coupled to the GCR anisotropy in space as <cit.>
I_i(t) ∼∫_P=0^∞[ ξ_0(t,P) c_0,i^0(P) + ξ_z(t,P) c_1,i^0(P) + ξ_x(t,P) { c_1,i^1(P) cosω t_ st - s_1,i^1(P) sinω t_ st} + ξ_y(t,P) { s_1,i^1(P) cosω t_ st + c_1,i^1(P) sinω t_ st} ] dP .
Here, the anisotropy is expanded into the zeroth and first-order spherical harmonics in momentum space with rigidity P.
The expansion coefficients ξ_0, ξ_z, ξ_x and ξ_y are defined in a geocentric (GEO) coordinate system whose z-axis points toward the geographic north pole and whose x-axis points toward midnight in the equatorial plane.
Indices n and m attached to coefficients c_n,i^m and s_n,i^m are degree and order of the spherical harmonics, respectively, representing components of the zeroth and first-order harmonics.
These coefficients are known as the differential coupling coefficients and are derived from numerical calculations of GCR propagation in the magnetosphere, atmosphere, and detector <cit.>.
The local time t_ st of each station to which the directional channel i belongs is related to universal time t as ω t_ st = ω t + ϕ_ st where ϕ_ st is the station's geographic longitude and ω=π/12.
The higher order (≥2nd) harmonics are omitted because they have negligible amplitudes compared to the zeroth and first-order anisotropy in the usual state.
This equation describes the coupling between the observed muon counting rate I_i and the space anisotropy (ξ_0, ξ_z, ξ_x and ξ_y) via the differential coupling coefficients at each rigidity P.
The zeroth order anisotropy ξ_0, or isotropic component, represents a variation of the GCR density from its Bartels rotation average because the counting rate I_i is converted into a deviation from its Bartels rotation average as shown in Appendix <ref>.
Equatorial components of the first-order anisotropy (ξ_x,ξ_y) represent a diurnal anisotropy or the GCR flow in the equatorial plane.
The remaining component ξ_z is the NS anisotropy representing the NS component of the GCR flow, which is the main target of this study.
The hourly counting rate I_i(t) is averaged for each day in which all counting rates and hourly IMF data mentioned below are available for at least 20 hours.
This daily average is expected to minimize a contribution from the diurnal anisotropy (ξ_x, ξ_y) in the counting rates.
Then the daily counting rates are sorted into days assigned to each IMF sector, Toward or Away, and averaged over each sector's period in each year, as I_i^T for the Toward sector or I_i^A for the Away sector.
In this process, only Bartels rotations each with ≥5 Toward, ≥5 Away, and total ≥15 available days are used to secure the sector reversal in each Baterls rotation.
Errors of I_i^T and I_i^A, σ_i^T and σ_i^A respectively, are derived as standard deviations of the daily counting rates.
The IMF data is obtained from OMNIWeb service <cit.> in the geocentric solar ecliptic (GSE) coordinate system whose x-axis points toward Sun and y-axis opposes Earth's orbital motion.
In the picture of the diamagnetic drift dominating the NS anisotropy <cit.>, the IMF parallel to the ecliptic plane is assumed, and its GSE-y component (B_y) is essential for determining the anisotropy.
Therefore, we omit days with |B_z|>|B_y| for the daily average IMF, and the IMF sector is identified daily as a Toward (Away) sector when B_y < 0 (B_y ≥ 0).
It is also noted that a definition of each year is slightly modified as described in Table <ref>, so that its start and end dates correspond to boundaries of the Bartels rotation periods.
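The daily sector assignment described above can be summarized by the following minimal Python sketch; the IMF values are randomly generated placeholders standing in for the daily-mean OMNIWeb data, and the column names are our own choice.

import numpy as np
import pandas as pd

# Hypothetical daily-mean IMF components in GSE coordinates (nT), indexed by date
days = pd.date_range("2009-01-01", "2009-12-31", freq="D")
imf = pd.DataFrame({"By": np.random.normal(0.0, 3.0, len(days)),
                    "Bz": np.random.normal(0.0, 2.0, len(days))}, index=days)

# Omit days dominated by the out-of-ecliptic component (|Bz| > |By|)
valid = imf[np.abs(imf["Bz"]) <= np.abs(imf["By"])].copy()

# Toward sector: By < 0; Away sector: By >= 0
valid["sector"] = np.where(valid["By"] < 0.0, "T", "A")

# Sector-averaged By per year, later used for the IMF magnitude B and sin(psi)
stats = valid.groupby([valid.index.year, "sector"])["By"].agg(["mean", "std", "count"])
print(stats)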
Expanding the concept of Nagoya-GG into our temperature-corrected dataset, we take a difference between counting rates of directional channels i and j averaged for Toward (Away) sector in each year as
η_ij^T(A) = I_i^T(A) - I_j^T(A).
Its standard error is derived as
(σ_ij^T(A))^2 = (σ_i^T(A))^2 + (σ_j^T(A))^2 .
The channel pair ij is defined to have sufficient sensitivity to the NS anisotropy in Section <ref>.
The sector reversal of the NS anisotropy is extracted by taking a difference between η_ij^T and η_ij^A, and from equation (<ref>), it is expressed as
η_ij^TA = ( η_ij^T - η_ij^A )/2 ∼∫_P=0^∞( ξ_z^TAc_ij^z + ϵ_0^TAc_ij^0 + ϵ_c^TAc_ij^d + ϵ_s^TAs_ij^d ) dP
where
c_ij^z = c_1,i^0 - c_1,j^0, c_ij^0 = c_0,i^0 - c_0,j^0, c_ij^d = c_1,i^1 - c_1,j^1, and s_ij^d = s_1,i^1 - s_1,j^1.
Rigidity P as a variable of ξ and c (s) is omitted in this equation for simplicity.
A standard error of η_ij^TA is derived as
(σ_ij^TA)^2 = [ (σ_ij^T)^2 + (σ_ij^A)^2 ]/4 .
The parameter ξ_z^TA represents a sector reversal of the NS anisotropy ξ_z, defined as ξ_z^TA = (ξ_z^T - ξ_z^A)/2 where ξ_z^T(A) denotes the Toward (Away) sector average of ξ_z in each year.
Hereafter, we call ξ_z^TA(P) the rigidity spectrum of the NS anisotropy.
The parameters ϵ_0^TA and (ϵ_c^TA,ϵ_s^TA) also correspond to Toward - Away sector differences of GCR density ξ_0 and diurnal anisotropy (ξ_x,ξ_y) respectively in each year.
Contrary to the NS anisotropy, these parameters have no physical reason to systematically depend on the IMF polarity in sufficiently long period averages.
Therefore, these are expected to act only as perturbations on η_ij^TA, while ξ_z^TA makes a substantial contribution to η_ij^TA, leading to an approximation of equation (<ref>) as
η_ij^TA∼∫_P=0^∞ξ_z^TA c_ij^z dP .
This equation indicates that the observed value η_ij^TA is a convolution of the NS anisotropy spectrum ξ_z^TA(P) with its differential response c_ij^z(P).
Unlike Nagoya-GG described above, the channel pair ij can be formed from any two of all channels in Nagoya and Hobart muon detectors, thanks to the physics-based temperature correction in each channel.
Nagoya and Hobart muon detectors have 17 and 13 directional channels, respectively;
the total 30 channels contain 30×29/2=435 kinds of channel pairs, and we need to select some pairs from them to analyze the NS anisotropy.
Simultaneously forming multiple pairs makes this problem more complex.
In the next section, we solve this problem using graph theory algorithm.
§.§ Optimization of channel pairing
Our optimization problem is defined as searching for a pattern of the channel pairing (ij's) which maximizes the sum of sensitivity
A = ∑_ij a_ij
where a_ij is a sensitivity of each channel pair ij to the NS anisotropy, derived in Appendix <ref>.
The set of the pairs has to meet a condition that each channel (i or j) does not duplicate in the set to ensure that derived η_ij^TA's are independent of each other.
This problem is equivalent to the maximum matching problem in a general graph, or general graph matching in graph theory, and is solved by the function of the Python library <cit.>.
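As a sketch of this optimization, the snippet below builds a weighted graph of channel pairs and applies maximum-weight matching with networkx, used here only as one example implementation of general graph matching (the library actually employed is the one cited above); the listed sensitivities are made-up placeholder values.

import networkx as nx

# Hypothetical sensitivities a_ij for a few candidate channel pairs (i, j)
sensitivity = {("NAG-V", "HOB-V"): 0.8,
               ("NAG-N", "HOB-S"): 1.2,
               ("NAG-N", "HOB-V"): 0.9,
               ("NAG-V", "HOB-S"): 0.7}

G = nx.Graph()
for (i, j), a_ij in sensitivity.items():
    G.add_edge(i, j, weight=a_ij)

# Maximum-weight matching in a general graph: each channel appears at most once
# and the total sensitivity A = sum(a_ij) over the selected pairs is maximized.
pairs = nx.max_weight_matching(G, maxcardinality=False, weight="weight")
A_total = sum(G[u][v]["weight"] for u, v in pairs)
print("optimal pairs:", pairs, " total sensitivity A =", A_total)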
Figure <ref> visualizes this problem and its solution.
Each node in this network corresponds to a directional channel (i or j) of each pair passing the threshold described in equation (<ref>), and each edge connecting the nodes represents each channel pair.
The node positions are determined by the Fruchterman-Reingold algorithm <cit.>.
The line width of each edge is proportional to the sensitivity a_ij of each pair, and the edges with red colors represent the optimized pairing pattern which maximizes the total sensitivity A.
Each node's code “NAG” or “HOB” represents Nagoya or Hobart station.
Each character string below the station code indicates each directional channel, where V, N, S, E, and W indicate the vertical channel and north-, south-, east-, and westward inclined channels, respectively.
The 1st inclined channels (N, S, E, W) have central viewing direction with a zenith angle of 30^∘, while the 2nd and 3rd inclined channels denoted by 2 and 3 in the channel name have the zenith angles of 49^∘ and 60^∘ respectively.
Table <ref> lists the derived channel pairs, ij's, along with their coupling coefficient c_ij^z for a flat spectrum and sensitivity a_ij each derived as described in Appendix <ref>.
All of the left-side channels i are in the Nagoya station in the northern hemisphere, while the channels j subtracted from them in equation (<ref>) are all in the Hobart station in the southern hemisphere.
This is reasonable as a result of the optimization maximizing the NS anisotropy sensitivity.
A solid line in Figure <ref> displays the differential response c_ij^z(P) for each channel pair ij in Table <ref>.
This differential response function has a positive value when the asymptotic direction of channel i at rigidity P points to a higher latitude (more northward) direction than channel j.
In other words, its negative value in the lower rigidity region in Figure <ref> indicates that the NS relation between channels i and j is swapped for low rigidity GCRs.
The low-rigidity GCRs passing through the mid- or low-latitude magnetosphere, such as those observed by Nagoya or Hobart detectors, are largely deflected from the original viewing angle of each directional channel by the geomagnetic field, causing this negative response.
This problem is solved by placing detectors in Arctic and Antarctic regions where the geomagnetic effect is minimized, and such an observation plan is mentioned in Section <ref>.
The difference of the differential response between channel pairs displayed in Figure <ref> allows us to derive the rigidity spectrum ξ_z^TA(P) of the NS anisotropy, as described in the next section.
Finally, the dashed line in Figure <ref> displays the differential response of Nagoya-GG for comparison.
The channel pairs ij's used in this paper (solid lines) generally have higher response peaks than Nagoya-GG, thanks to the optimization of the channel pairing.
§.§ Bayesian estimation of the rigidity spectrum
In conventional approaches, certain analytical functions are proposed for the rigidity spectrum of the anisotropy.
Most typically use a power-law function,
ξ_z^TA(P) = β P^γ,
and searches for parameters (β, γ) fitting to the observed counting rates <cit.>.
The power-law function is inapplicable for a spectrum that crosses zero or has an insignificant magnitude, and such cases can occur in the anisotropy components because the anisotropy is a vector quantity, unlike the absolute density spectrum.
<cit.> suggests that this is a cause of the unstable fluctuations of the power-law index γ in their results.
A single analytical function such as the power-law spectrum is probably too restrictive a premise to express all spectral shapes appearing in solar modulation phenomena.
On the other hand, this paper derives the NS anisotropy spectrum and its error range by the Bayesian approach as described below.
The Gaussian process is introduced as a prior probability distribution in Bayesian estimation, confining acceptable ranges of the smoothness and magnitude of the spectrum instead of the analytical function.
This approach makes the spectrum estimation tolerant of varying spectral shapes, allowing us to trace dynamic variations of the spectrum.
Hereafter, the GCR rigidity P [GV] is expressed by its logarithm q = log_10(P) in equations.
We approximate the rigidity spectrum by a step function with a finite number (N) of rigidity bins as
ξ_z^TA(q) = θ_k for q_k-1≤ q < q_k
where k=1,…,N indicates each rigidity bin and q_k = log_10 P_k represents an upper limit of the k-th rigidity bin.
The NS anisotropy θ_k in each rigidity bin k is approximated to be constant for the interval q_k-1≤ q < q_k.
The rigidity spectrum ξ_z^TA(P) is now expressed by an N-dimension vector parameter θ = ( θ_1,θ_2,…,θ_N)^⊤, and its mean and error can be expressed by a probability distribution in the N-dimension space in which each dimension corresponds to each rigidity bin k.
Based on Figure <ref>, our responsive rigidity range is from ∼10 to ∼400 GV, and we split this range into N=8 bins.
Table <ref> lists each rigidity bin with its index k and upper boundary q_k.
Median rigidity P_k^m [GV] of each rigidity bin satisfies log_10 P_k^m = (q_k-1 + q_k)/2.
The differential response c_ij^z(P) is also discretized and expressed by a matrix
C_lk = ∫_P=P_k-1^P_k c_ij^z(P) dP
where each channel pair ij is replaced with l = 1,2,…,8 in the index of Table <ref>.
From equation (<ref>), expected value of the observable η_l^TA = η_ij^TA from the parameter θ is derived as
η̃_l^TA = ∑_k=1^N C_lkθ_k = (Cθ)_l.
Therefore, the conditional probability of the observed values η_l^TA's for the parameter θ is modeled as
𝒫(η | θ) = ∏_l 𝒩(η_l^TA | η̃_l^TA, (σ_l^TA)^2) ∝∏_l exp[ - {η_l^TA - (Cθ)_l}^2/{2 (σ_l^TA)^2} ] = exp[ -1/2∑_l {η_l^TA - (Cθ)_l}^2/(σ_l^TA)^2 ]
The symbol 𝒩(x | μ,σ^2) expresses a normal distribution for a variable x with its mean μ and variance σ^2.
On the other hand, the Gaussian process is introduced by considering the probability distribution of the spectrum ξ_z^TA(q) as a multivariate normal distribution in which each dimension corresponds to each value of q; the distribution is defined in an infinite-dimensional space in principle.
In this practical case, which represents the spectrum by a finite number (N) of rigidity bins, the Gaussian process is expressed by a multivariate normal distribution for the parameter θ in the N-dimension space, as
𝒫(θ)
= 𝒩(θ | Θ_G, Σ_G)
where Θ_G and Σ_G are the mean vector and covariance matrix of the Gaussian process, respectively.
We adopt the radial basis function kernel for the covariance matrix as
Σ_G,kk' = σ_G^2 exp( -|q_k - q_k'|^2/b^2 ).
The diagonal component of Σ_G is provided as Σ_G,kk = σ_G^2, representing a variance of the parameter θ_k.
The non-diagonal component divided by the variance σ_G^2, i.e., exp(-|q_k - q_k'|^2 / b^2), represents a correlation coefficient between θ_k and θ_k', confining an acceptable range of the smoothness of the spectrum.
Detailed descriptions of the conditional probability 𝒫(η | θ) and Gaussian process 𝒫(θ) are provided in Appendix <ref> and <ref>.
Implementing the Gaussian process 𝒫(θ) as a prior probability distribution in Bayesian estimation, the posterior distribution of the parameter θ is derived as
𝒫( θ | η ) ∝𝒫( η | θ ) 𝒫(θ).
The Gaussian process 𝒫(θ) forms a conjugate prior distribution of the normal distribution 𝒫( η | θ ) in this Bayesian estimation.
Therefore, the posterior distribution is also a multivariate normal distribution written as
𝒫( θ | η ) =
𝒩( θ | Θ, Σ).
This equation can be analytically solved, providing another advantage of our method compared to the conventional approach, which generally requires a numerical fitting of parameters, such as the index γ for the power-law spectrum.
From Appendix <ref>, the mean vector and covariance matrix in equation (<ref>) are derived as
Θ = Σ_G ( Σ_L + Σ_G )^-1Θ_L + Σ_L ( Σ_L + Σ_G )^-1Θ_G and Σ = Σ_L ( Σ_L + Σ_G )^-1Σ_G .
The vector Θ_L and matrix Σ_L are parameters of the conditional probability 𝒫(η | θ) and derived as
Θ_L = Σ_L p and Σ_L = Q^-1, where p = C^⊤Wη and Q = C^⊤WC.
The weight matrix W is a diagonal matrix with its diagonal elements W_ll = 1/(σ_l^TA)^2.
In the hyperparameters of the Gaussian process, the mean vector Θ_G is determined every year, while parameters for the covariance matrix Σ_G are set as common values σ_G = 0.1% and b = 0.4 [log_10(GV)] for all years, as described in Appendix <ref>.
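The analytic posterior above can be evaluated directly with numpy, as in the following sketch; the response matrix, observed values, and prior mean used here are hypothetical placeholders, while the hyperparameters follow the adopted σ_G = 0.1% and b = 0.4.

import numpy as np

def posterior_spectrum(eta, sigma_eta, C, q_med, theta_G, sigma_G=0.1, b=0.4):
    # Likelihood part: Sigma_L = (C^T W C)^-1 and Theta_L = Sigma_L C^T W eta
    W = np.diag(1.0 / sigma_eta**2)
    Sigma_L = np.linalg.inv(C.T @ W @ C)
    Theta_L = Sigma_L @ (C.T @ W @ eta)
    # Gaussian-process prior: RBF covariance in log10(rigidity)
    dq = q_med[:, None] - q_med[None, :]
    Sigma_G = sigma_G**2 * np.exp(-dq**2 / b**2)
    # Analytic posterior mean and covariance
    M = np.linalg.inv(Sigma_L + Sigma_G)
    Theta = Sigma_G @ M @ Theta_L + Sigma_L @ M @ theta_G
    Sigma = Sigma_L @ M @ Sigma_G
    return Theta, np.sqrt(np.diag(Sigma))

# Hypothetical example: 8 channel pairs, 8 rigidity bins (medians at log10 P = 1.1 ... 2.5)
rng = np.random.default_rng(1)
C = rng.uniform(0.0, 0.3, size=(8, 8))
eta = C @ np.full(8, 0.05) + rng.normal(0.0, 0.01, 8)
Theta, err = posterior_spectrum(eta, np.full(8, 0.01), C,
                                q_med=np.linspace(1.1, 2.5, 8),
                                theta_G=np.full(8, 0.05))
print(Theta, err)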
The marginal distribution of 𝒫(θ | η) in each rigidity bin k is derived as
𝒫(θ_k | η) = ∫𝒫(θ | η) dθ_(k)' = ∫𝒩(θ | Θ, Σ) dθ_(k)' = 𝒩(θ_k | Θ_k, Σ_kk)
where θ_(k)' is a subset of the vector θ from which an element θ_k is removed.
Therefore the 1-σ error σ_k attached to the mean spectrum Θ_k is derived from Σ as
σ_k^2 = Σ_kk.
The next section presents the results of the mean spectrum Θ and its error σ_k derived for each year.
§ RESULTS
Figure <ref> represents derived rigidity spectra of the NS anisotropy in (a) 2009 on left panels and (b) 2015 on right panels, as sample years around solar activity minimum and maximum, respectively.
A solid black line in each upper panel is the mean spectrum Θ derived by equation (<ref>), and dashed lines above and below the mean spectrum show its ±1σ error range derived by equation (<ref>) from observed values η_l^TA's in each year.
A horizontal blue line is the mean vector Θ_G of the prior probability distribution.
The mean spectrum represented by the solid black curve seems biased to the prior distribution in the outer rigidity bins (∼10 GV and ∼300 GV), resulting in some kinks of the mean spectrum near these rigidities.
This is due to the smaller responses at these rigidities, as shown in Figure <ref>, but it negligibly affects the scientific discussion because it is covered by the large errors in these rigidity bins.
Solid circles in each lower panel in Figure <ref> show a scatter plot between the observable η_l^TA's of all channel pairs (vertical axis) and their expected values η̃_l^TA's from the mean spectrum (horizontal axis) in each year.
The expected value η̃_l^TA is calculated by replacing θ in equation (<ref>) with the mean spectrum Θ.
They are consistent with observed values within error bars in each year, ensuring that our new method successfully reconstructs the observed rigidity spectrum of the NS anisotropy with muon counting rates.
An open circle in each lower panel represents the observed and expected values of Nagoya-GG.
The Nagoya-GG is not used to reconstruct the NS anisotropy spectrum in this paper, but its expected value from the derived spectrum shows agreement with its observed value in a comparable range with solid circles.
This result demonstrates that our new analysis method is consistent with the conventional method while providing advanced insights, such as the yearly variation of the anisotropy spectrum.
It is also found that the standard error of Nagoya-GG is smaller than that of the solid circles by a factor of less than ∼1/2.
One of the causes is that Nagoya-GG double-counts channel pairs in equation (<ref>), reducing the statistical error by a factor of ∼1/√(2).
The remaining factor is likely a local effect that differs from station to station and remains uncorrected <cit.>, representing the limit of our method of directly combining multiple stations.
On the other hand, the solid circles show larger magnitudes than the open circle in each panel, enlarging their significance, thanks to a higher response of each channel pair than Nagoya-GG as shown in Figure <ref>.
The harder slope of the mean spectrum in the solar maximum (2015) than the minimum (2009) in Figure <ref> represents a common feature of the solar cycle variation of the NS anisotropy, as demonstrated by Figure <ref>.
It displays the mean spectrum every year from 1997 to 2020, split into the solar-activity ascending and descending phases in the solar cycle 23 or 24 in each panel.
The edge rigidity bins around ∼10 GV and ∼300 GV are truncated in this figure, considering their relative unreliability mentioned above.
Each year is denoted by a legend below each panel, and the gradation of line colors indicates the solar activity, where the red and blue colors correspond to years around the activity maximum and minimum, respectively.
Overall, the blue lines have a softer slope than the red lines in each panel, indicating that the NS anisotropy exhibits a softening of its rigidity spectrum in the solar activity minimum.
<cit.> also suggested a variation of the NS anisotropy's rigidity spectrum according to the solar activity, and since then, quantitative estimation of the spectrum has been rarely reported.
<cit.> provided only two average spectra around the maximum and declining solar activity phases, each in 1969-1970 and 1971-1973, and concluded an insignificant difference of the spectrum between the two periods.
In that analysis, multiple types of observations, including underground muon detectors, were required, and the power-law rigidity spectrum of the anisotropy was presumed.
These limitations probably prevented the study from revealing the dynamic variation of the anisotropy with a better temporal resolution.
On the other hand, our approach successfully reconstructs the spectrum every year, as shown in Figure <ref>, thanks to the better tolerance of varying spectral shapes.
A physical origin of the revealed variation of the spectrum is discussed in Section <ref>.
§ DISCUSSIONS
§.§ Modulation parameters
The sector reversal of the NS anisotropy is expected as a consequence of the diamagnetic drift of GCRs with the radial density gradient g(P) in the IMF, expressed as <cit.>
ξ_z^TA(P)/cosδ_E∼ R_L g(P) sinψ = P/(cB) g(P) sinψ.
The radial density gradient g(P) is related to the GCR density as
g(P) = [ 1/U(r_E,P) ] ∂ U(r,P)/∂ r|_r=r_E
where U(r) is a GCR density at the distance r from Sun and r_E is the position of Earth.
In equation (<ref>), it is approximated that the density gradient of GCRs in the ecliptic plane is dominated by its radial component, especially in the Bartels rotation mean, which averages the azimuthal (GSE-y) component out to zero.
The gyro-radius R_L of GCRs is derived as R_L = P/(cB) from the GCR rigidity P, IMF magnitude B and light speed c.
Derivation of the NS anisotropy in Section <ref> is based on the GEO coordinate system, and ξ_z^TA is expected to be a projection of the sector reversal of the GSE-z component of the anisotropy onto the GEO-z axis.
Therefore, the NS anisotropy is divided by the factor cosδ_E, where δ_E is the inclination angle of Earth's rotation axis from GSE-z axis, in equation (<ref>) to convert the NS anisotropy ξ_z^TA from the GEO to GSE coordinate system.
Referring to the derivation procedure of η_ij^TA in Section <ref>, GSE-x and y components of the IMF vector are calculated from OMNIWeb data as
B_x(y)^TA = ( B_x(y)^T - B_x(y)^A )/2
where B_x(y)^T and B_x(y)^A are averages of the GSE-x(y) component of the IMF in the Toward and Away sectors respectively in each year.
The IMF magnitude and spiral angle ψ in equation (<ref>) are consequently derived as
B = √((B_x^TA)^2 + (B_y^TA)^2) and sinψ = - B_y^TA/B.
Replacing ξ_z^TA(P) in equation (<ref>) with the mean spectrum Θ_k derived in Section <ref>, the most probable value of the radial density gradient in each rigidity bin k is derived as
g_k ∼ - [ c/(P_k^m cosδ_E) ] ( B^2/B_y^TA ) Θ_k
where the rigidity P is represented by its median P_k^m in each rigidity bin.
A square of its error is estimated as
σ^2(g_k) = ∂ g_k/∂ B_x^TAσ(B_x^TA) ^2 +
∂ g_k/∂ B_y^TAσ(B_y^TA) ^2 +
( ∂ g_k/∂Θ_kσ_k )^2
= ( c/P_k^m cosδ_E B_y^TA )^2
[ 2 B_x^TAΘ_k σ(B_x^TA) ^2
+ ( B_y^TA - (B_x^TA)^2/B_y^TA ) Θ_k σ(B_y^TA) ^2
+ ( B^2 σ_k )^2 ]
where σ(B_x^TA) and σ(B_y^TA) are errors of B_x^TA and B_y^TA respectively.
Similarly to the error of η_ij^TA in equation (<ref>), these errors are derived as
σ^2(B_x(y)^TA) = [ σ^2(B_x(y)^T) + σ^2(B_x(y)^A) ]/4
where σ(B_x(y)^T) and σ(B_x(y)^A) are derived from standard deviations of the GSE-x(y) component of the daily IMF in the Toward and Away sectors respectively.
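The conversion from the NS anisotropy to the radial density gradient and its error can be sketched as below; the inputs are placeholder values rather than measured results, the obliquity 23.44 deg is assumed for δ_E, and the error propagation is done numerically, which is equivalent to the first-order analytic expression above.

import numpy as np

AU_M = 1.495978707e11                # astronomical unit in meters
C_LIGHT = 2.99792458e8               # speed of light in m/s
COS_DE = np.cos(np.radians(23.44))   # assumed inclination of Earth's rotation axis

def g_radial(theta, Bx, By, P_GV):
    """Radial density gradient (%/AU) from xi/cos(delta_E) = R_L * g * sin(psi)."""
    B = np.hypot(Bx, By)                                  # IMF magnitude in nT
    R_L_AU = P_GV * 1e9 / (C_LIGHT * B * 1e-9) / AU_M     # gyro-radius P/(cB) in AU
    sin_psi = -By / B
    return theta / (COS_DE * R_L_AU * sin_psi)

def g_with_error(theta, sig_theta, Bx, By, sBx, sBy, P_GV, eps=1e-6):
    """First-order error propagation through g_radial via finite differences."""
    g0 = g_radial(theta, Bx, By, P_GV)
    dg_dt = (g_radial(theta + eps, Bx, By, P_GV) - g0) / eps
    dg_dx = (g_radial(theta, Bx + eps, By, P_GV) - g0) / eps
    dg_dy = (g_radial(theta, Bx, By + eps, P_GV) - g0) / eps
    var = (dg_dt * sig_theta)**2 + (dg_dx * sBx)**2 + (dg_dy * sBy)**2
    return g0, np.sqrt(var)

# Placeholder inputs: NS anisotropy 0.05 +/- 0.02 % at 126 GV, Bx = 3 nT, By = -3 nT
g, sg = g_with_error(0.05, 0.02, 3.0, -3.0, 0.3, 0.3, 126.0)
print(f"g ~ {g:.2f} +/- {sg:.2f} %/AU")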
To reveal a solar cycle variation of the radial density gradient along with its rigidity dependence, we extract g_k's in k=3 and k=6 bins corresponding to the rigidity of P∼30 GV and P∼130 GV, respectively.
Figure <ref>c displays their temporal variations in units of %/AU, with a logarithmic vertical axis, as a function of year.
The solar activity and interplanetary states are represented by Figures <ref>a and <ref>b;
panel (a) displays yearly averages of the IMF magnitude B (blue line) and spiral angle ψ (red line) derived in equation (<ref>), the red line in panel (b) shows yearly averages of the solar wind speed V, and the black points in panel (b) show the weekly occurrence rate of X-class solar flares.
In the same manner as the IMF, the yearly average of the solar wind speed is derived as V = (V^T + V^A)/2 where V^T and V^A are averages of the solar wind speed in the OMNIWeb for the Toward and Away sectors, respectively, every year.
The IMF spiral angle ψ and solar wind speed V show an anti-correlation with each other, following the Parker spiral picture.
The X-class flare occurrence rate is derived from the Konus-WIND flare catalog <cit.>.
We can identify a solar cycle variation of the radial density gradient at P∼130 GV (solid circles in Figure <ref>c), where the density gradient is suppressed in the solar activity minima around 1997, 2009, and 2020.
The density gradient at P∼30 GV (open circles in Figure <ref>c) also shows minima at least around 2009 and 2020, but its relative variation expressed on the logarithmic scale is smaller than that at P∼130 GV.
We can conclude that the solar cycle variation of the NS anisotropy's rigidity spectrum as seen in Figure <ref> is attributed to this rigidity-dependent solar cycle variation of the density gradient.
Despite the overall hard spectrum of the NS anisotropy in Figure <ref>, the radial density gradient at P∼30 GV is generally larger than that at P∼130 GV in Figure <ref>c, indicating a soft spectrum of the density gradient.
This soft spectrum is reasonable, since higher-rigidity GCRs undergo weaker solar modulation.
From a technical viewpoint, it is caused by the factor 1/P_k^m multiplied in equation (<ref>).
While a quantitative reproduction of the observed anisotropy has been rarely performed by heliospheric simulations except for a small number of studies <cit.>, the radial density gradient, or radial distribution of GCR density, is predicted by some simulation works for a realistic heliosphere model including the termination shock and heliosheath <cit.>.
Our result on the radial density gradient will provide an observational validation of such heliospheric models.
It is also worth focusing on some years deviating from the general picture of the density gradient with a soft spectrum, as seen in 1999-2001, 2003, 2007, and 2013 in Figure <ref>c.
While insignificant because of the large errors, these years show a rapid suppression of the density gradient only for ∼30 GV GCRs relative to the previous years, resulting in comparable density gradients between ∼30 GV and ∼130 GV GCRs.
Such a hard spectrum of the density gradient causes a noticeably hard spectrum of the NS anisotropy in these years, as displayed in Figure <ref>.
Most of these years, 1999-2001, 2003, and 2013, correspond to the years when the area or number of low-latitude coronal holes was enhanced on the Sun, as seen in the results of automatic identification of the coronal holes <cit.>.
The remaining year, 2007, is just after the coronal mass ejection (CME) event on December 13th, 2006, which was the biggest halo CME since the “Halloween storm” in 2003 as of the study by <cit.>.
From black points in Figure <ref>b, it is found that a few X-class flare events, including the CME on December 13th, 2006, successively occurred after a relatively calm period without X-class flares in 2005-2006.
A similar situation was found in the last half of 2017, where three successive X-class flares were detected after over ∼2 years with no X-class flares.
After this event, the density gradient shows a spectral hardening in 2017-2018, similar to 2006-2007, as displayed by the open and solid circles getting closer to each other in these years in Figure <ref>c.
These results imply that the post-storm interplanetary state after discrete solar eruptions or the low-latitude coronal holes mentioned above possibly caused unusual modulation effects suppressing the density gradient in lower rigidity GCRs, although more detailed analyses are required.
Assuming a radial balance between the diffusion flow and solar wind convection of GCRs further provides an estimation of their diffusion coefficient, as
κ_rr(P) g(P) ∼ [(2 + Γ)/3] V
where Γ∼ 2.7 is a power-law index of the energy spectrum of the GCR density.
The radial diffusion coefficient κ_rr of GCRs at the position of Earth is expressed as
κ_rr = κ_∥cos^2 ψ + κ_⊥sin^2 ψ
where κ_∥ and κ_⊥ are diffusion coefficients parallel and perpendicular to the IMF.
Therefore we can estimate the radial diffusion coefficient κ_rr,k in each rigidity bin k as
κ_rr,k ∼ [(2 + Γ)/3]·V/g_k.
The error range of κ_rr,k can no longer be approximated by a symmetric normal distribution because it is dominated by the error of g_k, which is inversely proportional to κ_rr,k.
We approximate the lower and upper limits of a 1σ confidence interval of κ_rr,k by
κ_rr,k^- = [(2 + Γ)/3]·V/[g_k + σ(g_k)] and κ_rr,k^+ = [(2 + Γ)/3]·V/[g_k - σ(g_k)]
respectively.
A standard deviation of the solar wind speed V is ignored because it is negligible compared to σ(g_k).
Figure <ref>d displays temporal variations of the radial diffusion coefficient κ_rr for ∼30 GV and ∼130 GV GCRs.
It demonstrates a solar cycle variation of the diffusion coefficient at both rigidities, as for the radial density gradient, but negatively correlated with solar activity.
We have to note that the diffusion and convection balance in equation (<ref>), known as the force-field approximation <cit.>, is based on the diurnal anisotropy's phase observed to be around 18:00 local solar time <cit.>.
However, <cit.> and <cit.> showed a 22-year cycle excursion of the phase to earlier local time, suggesting that the left side of equation (<ref>) can be ∼4/5 of the right side in the case of such an excursion.
Therefore, this estimation of the diffusion coefficient κ_rr can have a systematic error of ∼20% in addition to equation (<ref>).
Based on a comparable sin^2ψ with cos^2ψ in the average IMF and κ_⊥/κ_∥≪ 1 supported by previous studies <cit.>, we obtain the parallel diffusion coefficient as
κ_∥,k∼κ_rr,k/cos^2ψ
from equation (<ref>) for the rigidity bin k.
Therefore, the parallel mean-free-path length of GCRs and its error range are estimated as
λ_∥,k = 3κ_∥,k/v ∼ 3κ_rr,k/(v cos^2ψ), λ_∥,k^- ∼ 3κ_rr,k^-/(v cos^2ψ), and λ_∥,k^+ ∼ 3κ_rr,k^+/(v cos^2ψ)
where v ∼ c is the particle velocity, approximated by the speed of light c for relativistic GCRs with P > 10 GV.
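A minimal sketch of this force-field-style conversion from the density gradient to the radial diffusion coefficient, the parallel diffusion coefficient, and the parallel mean-free-path length, including the asymmetric error bounds, is given below; the names and default constants are illustrative only, and consistent units for V and g_k are assumed to be handled by the caller.

```python
GAMMA = 2.7        # power-law index of the GCR energy spectrum
V_LIGHT = 2.998e5  # particle speed approximated by c [km/s] for P > 10 GV

def diffusion_and_mfp(V_sw, g_k, sig_g_k, cos2_psi):
    """kappa_rr with asymmetric 1-sigma bounds, kappa_parallel and lambda_parallel."""
    fac = (2.0 + GAMMA) / 3.0
    kap_rr = fac * V_sw / g_k
    kap_lo = fac * V_sw / (g_k + sig_g_k)   # lower bound of the confidence interval
    kap_hi = fac * V_sw / (g_k - sig_g_k)   # upper bound
    kap_par = kap_rr / cos2_psi             # kappa_perp / kappa_par << 1 assumed
    lam_par = 3.0 * kap_par / V_LIGHT
    return kap_rr, (kap_lo, kap_hi), kap_par, lam_par
```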
Blue and red curves in Figure <ref> display rigidity spectra of the parallel mean-free-path length in 2009 and 2015, respectively, as sample years in the solar activity minimum and maximum.
The outer rigidity bins with substantial errors are truncated in this figure.
The gray shaded area is the “Palmer consensus” region, which was proposed by <cit.> and has been validated mainly by solar energetic particle (SEP) observations and numerical simulations <cit.>.
Our result extends the mean-free-path length estimation into the higher-rigidity region up to ∼200 GV, and from Figure <ref> it is expected to connect smoothly to the consensus region when the rigidity dependence is extrapolated.
Analyses of the GCR density and its rigidity spectrum also provide estimations of the diffusion coefficient or mean-free-path length in >10 GV region <cit.> by surveying optimal heliospheric parameters fitting to observations.
Our result of the mean-free-path length in Figure <ref> is consistent with their results in the order of magnitude, while our result features the unprecedented temporal resolution on a yearly basis and the wide rigidity range of over one order of magnitude.
It is also noted that their approach using the GCR density requires the assumption on the whole heliospheric structure, such as a spatial distribution of the diffusion coefficient, because the GCR density at Earth reflects an integration of the modulation effects from the outer edge of the heliosphere.
On the other hand, the NS anisotropy can provide an estimation of the diffusion coefficient free from such an assumption as well as the SEP observations in <10 GV region.
This is one of the advantages of the anisotropy observation, leading to a relatively reliable estimation of the diffusion coefficient.
§.§ Future perspective of the NS conjugate observation
Recently, we started a muon observation at Syowa Station, Antarctica, in 2018 in collaboration with the National Institute of Polar Research (NIPR), Japan.
NIPR also has a collaborative research framework with Iceland, located around an Arctic geomagnetic conjugate point with Syowa Station.
The polar conjugate observation in Iceland and Syowa Station will be an expanded concept of the current Nagoya-Hobart observation, and we provide a result of its quantitative simulation in this section.
The black bold line in Figure <ref> displays the differential response to the NS anisotropy for vertical channels of muon detectors at Husafell, Iceland, and Syowa Station.
The virtual detector at Husafell is set to have an equivalent geometry with the Syowa detector having a 1×2 m^2 detection area.
Other lines in Figure <ref> are the responses of channel pairs used in this study, i.e., repeats of those in Figure <ref>.
Notably, only the Husafell-Syowa pair has a substantial response in the lower rigidity (<20 GV) region, without a polarity reversal into a negative response seen in other channel pairs.
The north- and southward inclined channels in Nagoya and Hobart detectors respectively view high-latitude directions, but the geomagnetic field around the mid-latitude stations deflects cosmic-ray trajectories into lower-latitude directions as briefly discussed in Section <ref>.
This effect is more substantial for lower-rigidity cosmic rays and prevents us from gaining a significant response to the NS anisotropy in lower-rigidity regions only by mid-latitude stations.
Arctic and Antarctic regions, such as Iceland and Syowa Station, are relatively free from this effect and are predicted to allow us to observe the NS anisotropy in lower-rigidity GCRs accurately.
While having a coarser angular resolution than muon detectors, neutron monitors in Arctic and Antarctic regions are also expected to be responsive to the NS anisotropy in an even lower rigidity (∼1 GV) region.
Therefore, the concept of Iceland-Syowa conjugate muon observation can be a bridge to connect the muon and neutron detector networks.
§ CONCLUSION
A new analysis method to derive the rigidity spectrum of GCR anisotropy from ground-based observations has been developed and demonstrated by revealing a yearly variation of the NS anisotropy's spectrum.
In this method, atmospheric temperature effects on cosmic-ray muons are directly corrected by the meteorological reanalysis data, allowing for combining multiple muon detectors in a network observation free from individual local effects.
General graph matching in graph theory is adopted to survey optimal combinations of directional channels, which ensures high sensitivity to the NS anisotropy.
The highlight of our analysis method is Bayesian estimation with the Gaussian process, which has the potential to be applied to any unfolding problem of the ground-based observations to derive GCR properties in space.
The Gaussian process only confines the acceptable ranges of the spectrum value and its smoothness without supposing any analytical function, providing a sufficient tolerance of varying spectral shapes to trace dynamic variations of the GCR spectrum in solar modulation phenomena.
Previous works deriving rigidity spectra of the anisotropy use simultaneous observation data by multiple detector types, i.e., ground-based muon detectors, underground muon detectors, and neutron monitors <cit.>.
On the other hand, this paper succeeded in deriving the NS anisotropy's spectrum only by ground-based muon detectors in Nagoya and Hobart.
This relaxed requirement for observational data will make the anisotropy analysis a more common approach in cosmic-ray research.
The minimized number of observations also ensures a uniform dataset for an extended period, allowing this study to reveal the solar cycle variation of the NS anisotropy's spectrum.
Softening of the spectrum in the solar activity minima was discovered, and it is attributed to the rigidity-dependent variation of the radial density gradient of GCRs.
The diffusion coefficient or mean-free-path length of GCRs is subsequently derived based on the force-field approximation, and it is demonstrated that our analysis expands the mean-free-path length estimation into ≤200 GV region from <10 GV region achieved by SEP observations.
The rigidity-dependent diffusion coefficient has been a critical problem in elucidating the GCR propagation <cit.>, emphasizing the importance of the anisotropy observation and our result.
In addition to expanding the muon detector network into polar regions as described in Section <ref>, applying our analysis scheme to a broader range of cosmic-ray studies is desired.
Reconstruction of the three-dimensional anisotropy or density spectrum of GCRs is one such scientific target, and it can, in principle, be achieved by generalizing the Gaussian process used in this paper.
Short-term disturbance phenomena, including CME events, are also crucial topics along with the solar cycle variation analyzed in this paper, and an application of our analysis method to such an event will be presented in the future.
This research was supported by the “Strategic Research Projects” grant from ROIS (Research Organization of Information and Systems) and by JSPS KAKENHI Grant Number JP22KK0049.
The GMDN project is partially supported by ROIS-DS-JOINT (030RP2023), JARE AJ1007 program, and JSPS KAKENHI Grant Number JP24K07068.
The observations with Nagoya and Hobart muon detectors are supported by Nagoya University and Australian Antarctic Division, respectively.
§ DATASET PREPARATION
In this appendix, we describe the preparation procedure of muon counting-rate data performed before the analysis procedure in the main text, mainly the correction for atmospheric temperature effect on muons.
The temperature correction method by <cit.> defines a mass-weighted temperature for each station as
H_ st = ∑_h=1^h_max-1 w_h (T_h + T_h+1)/2
where T_h is the atmospheric temperature at altitude h above the location of each station and h_max indicates the top of the atmosphere.
The weight w_h is derived as w_h = (z_h - z_h+1)/z_0 where z_h is the atmospheric depth at the altitude h and z_0 = z_1 - z_h_max.
Muon counting rates N_i(t) corrected for atmospheric pressure effects on each time t are published by <cit.>.
The atmospheric temperature effect Δ N_i(t) on N_i(t) is expressed as
Δ N_i(t) = α_i [H_ st(t) - H_ st^0]
for the directional channel i belonging to each station.
The constant H_ st^0 is an average level of the mass-weighted temperature and set at 253 or 250 [K] for Nagoya or Hobart station, respectively.
The coefficient α_i for each directional channel is derived by <cit.> and ranges from -0.28 to -0.24 [%/K] in Nagoya station's channels and from -0.24 to -0.22 [%/K] in Hobart station's channels.
The temperature-corrected counting rate is derived as
N_i^ corr(t) = N_i(t) - Δ N_i(t).
The altitude profile of the atmospheric temperature is provided as meteorological reanalysis data by GDAS (Global Data Assimilation System) from 2005 and NCEP/NCAR (National Centers for Environmental Prediction and National Center for Atmospheric Research) until 2004, both published in NOAA's Air Resources Laboratory website <cit.>.
It provides the altitude and temperature T_h at each isobaric surface from 20 hPa to 1000 hPa at a designated location.
We set the ground-level altitude, h=1, at the isobaric surface with the lowest altitude above 500 m.
The top of the atmosphere, h_max, is approximated by the highest isobaric surface in the data, corresponding to the pressure of 20 hPa.
In calculating the weight w_h, the atmospheric pressure of each isobaric surface is used in place of the atmospheric depth z_h.
Temporal resolutions of the meteorological reanalysis data are 3 hours in GDAS data and 6 hours in NCEP/NCAR data, respectively, which are insufficient for hourly muon counting-rate data.
We first calculate the mass-weighted temperature H_ st(t) on a 3-hour or 6-hour basis using the provided data.
Then, it is interpolated by a linear interpolation between timestamps, and hourly H_ st(t) for the temperature correction is derived.
The corrected muon counting rate is converted into a deviation from its Bartels rotation average,
I_i(t) = N_i^ corr(t)/N_i^ BR
where N_i^BR is N_i^ corr(t) averaged over each Bartels rotation.
Hourly counting rates >2% below or above the Bartels rotation average N_i^ BR are omitted to eliminate disturbance events such as the Forbush decrease.
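The preparation steps in this appendix — the mass-weighted temperature, the temperature correction, and the Bartels-rotation normalization with the ±2% screening — can be summarized by the following sketch. It follows the formulas literally; in particular, the quoted α_i in %/K has to be rescaled to the units of the counting rate before the subtraction, and all names are ours rather than part of any published analysis code.

```python
import numpy as np

def mass_weighted_temperature(T, p):
    """H_st from temperatures T[h] and pressures p[h] (hPa) at the isobaric levels,
    ground level first and the 20 hPa top last; pressure replaces atmospheric depth."""
    z0 = p[0] - p[-1]
    w = (p[:-1] - p[1:]) / z0
    return np.sum(w * 0.5 * (T[:-1] + T[1:]))

def temperature_corrected_rate(N, H_st, alpha, H_ref):
    """Literal form of the correction: Delta N = alpha*(H_st - H_ref), N_corr = N - Delta N.
    alpha is quoted in %/K, so it must be rescaled to the units of N beforehand."""
    return N - alpha * (H_st - H_ref)

def bartels_relative_rate(N_corr, bartels_id):
    """I(t) = N_corr / <N_corr>_BR, with hours deviating by more than 2% screened out."""
    I = np.empty_like(N_corr, dtype=float)
    for b in np.unique(bartels_id):
        sel = bartels_id == b
        I[sel] = N_corr[sel] / N_corr[sel].mean()
    I[np.abs(I - 1.0) > 0.02] = np.nan
    return I
```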
§ SENSITIVITY OF EACH CHANNEL PAIR TO THE NS ANISOTROPY
Optimization of the channel pairing described in Section <ref> is based on the sensitivity a_ij defined for each channel pair ij.
In this appendix, we describe its derivation procedure.
From the discussion with equation (<ref>) in Section <ref>, the perturbation on η_ij^TA from anisotropy components other than the NS anisotropy is estimated as
Δη_ij^TA = ∫_P=0^∞ [ ϵ_0^TA(P) c_ij^0(P) + ϵ_c^TA(P) c_ij^d(P) + ϵ_s^TA(P) s_ij^d(P) ] dP.
Previous studies <cit.> report nearly flat spectra for the anisotropy components and a soft spectrum proportional to ∼ P^-1 for the short-term density variations.
Based on these suggestions, we temporarily adopt an ad hoc simplification on the rigidity spectra only for the channel pairing as
ξ_z^TA(P) = ξ_z^TA, ϵ_0^TA(P) = ϵ_0^TA(P/60[ GV])^-1, ϵ_c^TA(P) = ϵ_c^TA, and ϵ_s^TA(P) = ϵ_s^TA.
The constant 60 GV for the density variation ϵ_0^TA(P) is introduced as a representative rigidity of muon detectors.
Equations (<ref>) and (<ref>) are simplified as
η_ij^TA ∼ ξ_z^TA c_ij^z and Δη_ij^TA ∼ ϵ_0^TA c_ij^0 + ϵ_c^TA c_ij^d + ϵ_s^TA s_ij^d
where c_ij^z, c_ij^0, c_ij^d, and s_ij^d are coupling coefficients for the assumed spectra, defined as
c_ij^z = ∫_P=0^∞ c_ij^z(P) dP,
c_ij^0 = ∫_P=0^∞ (P/60[ GV])^-1 c_ij^0(P) dP,
c_ij^d = ∫_P=0^∞ c_ij^d(P) dP, and
s_ij^d = ∫_P=0^∞ s_ij^d(P) dP.
From equation (<ref>), the optimized pairs can be defined as those maximizing the response ∂η_ij^TA/∂ξ_z^TA∼ c_ij^z to the NS anisotropy while minimizing the response to other anisotropy components which is estimated as
(∂Δη_ij^TA/∂ϵ_0^TA)^2 + (∂Δη_ij^TA/∂ϵ_c^TA)^2 + (∂Δη_ij^TA/∂ϵ_s^TA)^2 ∼ (c_ij^0)^2 + (c_ij^d)^2 + (s_ij^d)^2.
Therefore, we define the sensitivity of each channel pair ij to the NS anisotropy as
a_ij = c_ij^z/√((c_ij^0)^2 + (c_ij^d)^2 + (s_ij^d)^2) for c_ij^z > 0.7 and a_ij > 1.6; a_ij = 0 for c_ij^z ≤ 0.7 or a_ij ≤ 1.6.
The thresholds, c_ij^z > 0.7 and a_ij > 1.6, are introduced to truncate channel pairs with worse sensitivities than Nagoya-GG, whose response and sensitivity are c_ij^z=0.63 and a_ij=1.51 respectively.
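As a concrete illustration of the channel pairing used in the main text, the sensitivity a_ij can be evaluated for every candidate pair and the optimal combinations selected by a maximum-weight matching. The sketch below assumes that the integrated coupling coefficients are precomputed and uses the general-purpose matching routine of the networkx package rather than the specific implementation used in this work.

```python
import numpy as np
import networkx as nx

def pair_sensitivity(cz, c0, cd, sd, cz_min=0.7, a_min=1.6):
    """a_ij of the equation above; pairs failing either threshold get zero weight."""
    a = cz / np.sqrt(c0**2 + cd**2 + sd**2)
    return a if (cz > cz_min and a > a_min) else 0.0

def optimal_channel_pairs(coeffs):
    """Maximum-weight matching over candidate channel pairs.
    coeffs: dict mapping (channel_i, channel_j) -> (cz, c0, cd, sd),
    assumed to be precomputed from the coupling coefficients."""
    g = nx.Graph()
    for (i, j), (cz, c0, cd, sd) in coeffs.items():
        a = pair_sensitivity(cz, c0, cd, sd)
        if a > 0.0:
            g.add_edge(i, j, weight=a)
    return nx.max_weight_matching(g)
```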
§ LIKELIHOOD FUNCTION AND GAUSSIAN PROCESS FOR THE SPECTRUM PARAMETERS
In equation (<ref>), the conditional probability distribution 𝒫(η | θ) of observed values η_l^TA's for the parameter θ is equivalent to a likelihood function for θ, which is generally used in the maximum likelihood estimation.
The parameters Θ_L and Σ_L in equations (<ref>) and (<ref>) are respectively identical to the mean vector and covariance matrix of the likelihood function expressed in a multivariate normal distribution.
In this appendix, we prove these equations and briefly inspect the profile of the likelihood function.
The effect of the Gaussian process is also described by visualizing the distributions of the likelihood and posterior probability of Bayesian estimation.
The exponent in equation (<ref>) is transformed as
-1/2 ∑_l [η_l^TA - (Cθ)_l]^2/(σ_l^TA)^2
= -1/2 ∑_l 1/(σ_l^TA)^2 [ ∑_k=1^N ∑_k'=1^N C_lk C_lk' θ_k θ_k' - 2 ∑_k=1^N η_l^TA C_lk θ_k + (η_l^TA)^2 ]
= -1/2 [ ∑_k=1^N ∑_k'=1^N θ_k ( ∑_l C_lk C_lk'/(σ_l^TA)^2 ) θ_k' - 2 ∑_k=1^N θ_k ∑_l C_lk η_l^TA/(σ_l^TA)^2 + ∑_l (η_l^TA/σ_l^TA)^2 ]
= -1/2 ( θ^⊤Qθ - 2 θ^⊤p + u )
where u = ∑_l (η_l^TA / σ_l^TA)^2 and Q and p are defined in equation (<ref>).
On the other hand, a square-completed quadratic form for θ is defined and expanded as
(θ - Θ_L)^⊤Q(θ - Θ_L) + u'
= θ^⊤Qθ - θ^⊤QΘ_L - Θ_L^⊤Qθ + Θ_L^⊤QΘ_L + u'
= θ^⊤Qθ - θ^⊤( QΘ_L + Q^⊤Θ_L ) + Θ_L^⊤QΘ_L + u'
= θ^⊤Qθ - 2 θ^⊤QΘ_L + Θ_L^⊤QΘ_L + u'
where Q^⊤ = Q is used.
Comparing the parentheses in equation (<ref>) and equation (<ref>) for θ, we obtain p = QΘ_L, which is also identical to the definition of Θ_L in equation (<ref>).
The parameter u' is also defined from the comparison as u' = u - Θ_L^⊤QΘ_L.
Replacing the parentheses in equation (<ref>) with the square-completed expression in equation (<ref>) and revisiting the conditional probability distribution 𝒫(η | θ) in equation (<ref>), we obtain
ℒ(θ) ∝ exp{ -1/2 [ (θ - Θ_L)^⊤Σ_L^-1(θ - Θ_L) + u' ] } ∝ 𝒩(θ | Θ_L, Σ_L)
where 𝒫(η | θ) is replaced by a symbol ℒ(θ) to express the likelihood function for θ.
The definition Σ_L = Q^-1 in equation (<ref>) is also used.
Therefore, it is proven that the likelihood function is proportional to a multivariate normal distribution for θ where parameters Θ_L and Σ_L in equation (<ref>) are identical to the mean vector and covariance matrix of the distribution, respectively.
The posterior distribution in equation (<ref>) is equivalent to a product of multivariate normal distributions ℒ(θ) ∝𝒩(θ | Θ_L, Σ_L) and 𝒫(θ) ∝𝒩(θ | Θ_G, Σ_G).
From a formula for a product of multivariate normal distributions, the mean vector Θ and covariance matrix Σ of the posterior distribution are derived from Θ_L, Σ_L, Θ_G, and Σ_G, as described in equation (<ref>).
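Because both factors are Gaussian in θ, the posterior is available in closed form. A compact sketch of the estimation, assuming the response matrix C, the observed η_l^TA with their errors, and the Gaussian-process mean and covariance are already at hand (variable names are ours), reads as follows.

```python
import numpy as np

def posterior_spectrum(C, eta, sigma_eta, theta_G, Sigma_G):
    """Posterior mean and covariance of the spectrum for a Gaussian likelihood
    (eta ~ C @ theta with independent errors sigma_eta) and a Gaussian-process prior."""
    W = np.diag(1.0 / sigma_eta**2)
    Q = C.T @ W @ C                       # Sigma_L^{-1}
    p = C.T @ W @ eta                     # Q @ Theta_L
    Sigma = np.linalg.inv(Q + np.linalg.inv(Sigma_G))
    Theta = Sigma @ (p + np.linalg.solve(Sigma_G, theta_G))
    return Theta, Sigma
```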
Figure <ref>a shows a conditional distribution of the likelihood ℒ(θ) in the θ_3 - θ_4 parameter space for the sample year 2015, where other parameters, θ_k for k 3 or 4, are fixed at those in the mean spectrum Θ derived by equation (<ref>).
The distribution is normalized so that its integration in the θ_3 - θ_4 space equals 1.
The likelihood distribution is widely extended along a θ_3 + θ_4 = const. line, indicating that these parameters cancel out each other in the likelihood.
This demonstrates the difficulty of uniquely determining these parameters, i.e., the spectrum values in adjacent rigidity bins, by using the likelihood function alone.
Figure <ref>b demonstrates how the Gaussian process works to overcome this problem.
It is a θ_3 - θ_4 parameter space distribution of the posterior probability 𝒫( θ | η ) in 2015, derived by multiplying the Gaussian process 𝒫(θ) to the conditional probability 𝒫( η | θ ), or likelihood function ℒ(θ), in equation (<ref>).
The high-confidence region is shrunk in Figure <ref>b compared to Figure <ref>a, within a reasonable region where the adjacent rigidity bins θ_3 and θ_4 have comparable values with each other.
This limitation by the Gaussian process prevents a discontinuous spectrum, enabling us to determine the spectrum while keeping a sufficient agreement between the derived spectrum and observed data.
On the other hand, how the likelihood works is visualized by the more significant likelihood in the θ_4>θ_3 region than θ_4<θ_3 region in Figure <ref>a.
It suggests that the spectrum value θ_k increases with the rigidity or its index k, resulting in the hard spectrum in 2015 as displayed in Figure <ref>b.
<cit.> also approximated a rigidity spectrum of the semi-diurnal anisotropy amplitude by a step function, similarly to our formulation for the NS anisotropy in equation (<ref>).
Then, they derived the spectrum by a least-square method which is equivalent to the maximum likelihood estimation.
In their analysis, the spectrum was split into only seven rigidity bins in a range from ∼10 GV to ∼2000 GV, and not only ground-based muon detectors but also underground detectors were used.
This rough rigidity resolution and expanded dataset likely allowed them to derive the spectrum only by the likelihood function ℒ(θ) without the Gaussian process used in this study.
However, this approach trades off some accuracy, such as the rigidity-bin and temporal resolutions.
They derive only a several-year average of the spectrum, unlike our analysis deriving a solar cycle variation of the NS anisotropy spectrum on a yearly basis.
§ HYPERPARAMETERS OF THE GAUSSIAN PROCESS
In this appendix, we visualize hyperparameter dependences of the Gaussian process and describe a hyperparameter tuning performed for this study.
As described in Section <ref>, the Gaussian process expresses the probability density function of the rigidity spectrum θ by a multivariate normal distribution 𝒩(θ | Θ_G, Σ_G).
The covariance matrix Σ_G defines the correlation coefficients of the spectrum values across individual rigidity bins, confining the acceptable range of the smoothness of the spectrum.
The correlation coefficient is determined by the hyperparameter b in equation (<ref>).
The coefficient is reduced along with the rigidity difference between rigidity bins k and k', |q_k - q_k'| in equation (<ref>), and reaches 1/e ∼ 0.37 where |q_k - q_k'| = b [log_10( GV)].
The upper panel in Figure <ref>a displays the θ_3 - θ_4 parameter space distribution of the Gaussian process 𝒩(θ | Θ_G, Σ_G), where the hyperparameters σ_G and b in equation (<ref>) are set at those used in Section <ref>, σ_G=0.1% and b=0.4 [log_10( GV)].
Components of the mean vector Θ_G are set at a constant value θ^c = 0.15%.
From Table <ref>, the rigidity difference between rigidity bins k=3 and k'=4 is |q_k - q_k'| = 0.2 [log_10( GV)].
This results in the correlation coefficient exp(-|q_k - q_k'|^2/b^2) = 0.78, visualized by the probability density distribution along the x=y line in the upper panel of Figure <ref>a.
The lower panel in Figure <ref>a displays 10 random spectra θ's following the probability distribution of the Gaussian process.
Each vertical solid line denotes the median rigidity P_k^m [GV] in the rigidity bin k=3 or 4.
An average level of the spectra is determined by Θ_G or θ^c at 0.15%, while the deviation of spectrum values is limited from ∼0.0% to ∼0.3% by the hyperparameter σ_G.
In each line, discontinuity of the spectrum values in adjacent rigidity bins, such as θ_3 and θ_4, is confined by the correlation coefficient determined by b.
Figure <ref>b is the same as Figure <ref>a but only the hyperparameter b is set at a different value b=1.0 [log_10( GV)].
The correlation coefficients between spectrum values across rigidity bins become more significant than the case of b=0.4 in Figure <ref>a, where exp(-|q_k - q_k'|^2/b^2) = 0.96 between θ_3 and θ_4.
This high coefficient leads to smoother rigidity spectra than the case of b=0.4, or nearly linear spectra, as demonstrated by the lower panel of Figure <ref>b.
In this manner, we can confine the smoothness of the spectrum by introducing the Gaussian process as a prior distribution in Bayesian estimation as performed in Section <ref>, without assuming any analytical function.
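A short sketch of how such prior samples can be generated, with the hyperparameters used in this study as defaults, is given below; the bin coordinates q and the random seed are placeholders.

```python
import numpy as np

def gp_prior_samples(q, theta_c=0.15, sigma_G=0.1, b=0.4, n_draw=10, seed=1):
    """Random spectra from N(theta | Theta_G, Sigma_G) with the correlation
    exp(-|q_k - q_k'|^2 / b^2); q holds the log10-rigidity coordinates of the bins."""
    rng = np.random.default_rng(seed)
    dq = q[:, None] - q[None, :]
    Sigma_G = sigma_G**2 * np.exp(-dq**2 / b**2)
    Theta_G = np.full(q.size, theta_c)
    return rng.multivariate_normal(Theta_G, Sigma_G, size=n_draw)
```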
Hyperparameters of the Gaussian process have to be predefined when we perform Bayesian estimation of the rigidity spectrum in equation (<ref>).
The mean vector Θ_G is determined every year as follows.
In the same way as Appendix <ref> and equation (<ref>), assuming a flat spectrum, θ_1 ∼θ_2 ∼…∼θ_N ∼θ^c = const., simplifies equation (<ref>) as
η̃_l^TA∼ c_l^z θ^c
where c_l^z = ∑_k=1^N C_lk.
In this case, the NS anisotropy θ^c is estimated as a weighted mean of the observed value η_l^TA divided by c_l^z, as
θ^c ∼ [∑_l w_l (η_l^TA / c_l^z)] / [∑_l w_l]
where w_l = (c_l^z / σ_l^TA)^2.
We adopt Θ_G,1 = Θ_G,2 = … = Θ_G,N = θ^c for Θ_G using the observed value η_l^TA in each year, as a reference to the average level of the spectrum.
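The weighted mean in the equation above amounts to a one-line estimate; a sketch, with illustrative names, is:

```python
import numpy as np

def flat_spectrum_estimate(eta, sigma_eta, cz):
    """theta^c: weighted mean of eta_l / c_l^z with weights w_l = (c_l^z / sigma_l)^2."""
    w = (cz / sigma_eta)**2
    return np.sum(w * eta / cz) / np.sum(w)
```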
On the other hand, we use common values of σ_G and b for all years to ensure consistency of the precondition in this study.
Too loose constraints of the Gaussian process, corresponding to larger σ_G and smaller b, cause an over-fitting of the spectrum to observed data.
On the other hand, too strict constraints cause an under-fitting, and the reproducibility of the observed values by the derived spectrum is degraded.
From the mean spectrum Θ of the posterior distribution derived in equation (<ref>), a coefficient of determination is defined as
R^2 = 1 - [∑_l (η_l^TA - η̃_l^TA)^2] / [∑_l (η_l^TA - <η_l^TA>)^2]
where η̃_l^TA is an expected value of the observable η_l^TA derived by inserting the mean spectrum Θ as θ in equation (<ref>).
An average of η_l^TA for all channel pairs l's is expressed by <η_l^TA>.
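The coefficient of determination can be evaluated directly from the observed η_l^TA and the values reconstructed from the mean spectrum; a minimal sketch (names are ours) is:

```python
import numpy as np

def coefficient_of_determination(eta, eta_model):
    """R^2 between the observed eta_l^TA and the values reconstructed from the
    posterior-mean spectrum (eta_model = C @ Theta)."""
    resid = np.sum((eta - eta_model)**2)
    total = np.sum((eta - np.mean(eta))**2)
    return 1.0 - resid / total
```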
Now we focus on observed data η_l^TA's in 2015 as a sample year.
For this year's data, the posterior distribution is calculated by equation (<ref>) for all combinations of σ_G and b in a sufficiently wide range of these parameters.
Then, we derived R^2 for each combination of the hyperparameters.
Figure <ref> displays the result, R^2, as a function of b for each σ_G value.
An increase of R^2 with increasing σ_G saturates around σ_G ∼ 0.1%, indicating that a constraint as strict as σ_G ∼ 0.1% can be imposed without losing R^2.
In all lines in Figure <ref> for σ_G ≤ 0.1%, R^2 is maximized around b = 0.4 [log_10( GV)].
Based on these inspections, we adopt σ_G=0.1% and b = 0.4 [log_10( GV)] as optimal hyperparameters of the Gaussian process for our dataset.
|
http://arxiv.org/abs/2409.02518v1 | 20240904082614 | AirFogSim: A Light-Weight and Modular Simulator for UAV-Integrated Vehicular Fog Computing | [
"Zhiwei Wei",
"Chenran Huang",
"Bing Li",
"Yiting Zhao",
"Xiang Cheng",
"Liuqing Yang",
"Rongqing Zhang"
] | cs.NI | [
"cs.NI",
"cs.SE"
] |
A background-estimation technique for the detection of extended gamma-ray structures with IACTs
T. Wach 1 A. Mitchell 1 L. Mohrmann 2
September 9, 2024
===============================================================================================
§ ABSTRACT
Vehicular Fog Computing (VFC) is significantly enhancing the efficiency, safety, and computational capabilities of Intelligent Transportation Systems (ITS), and the integration of Unmanned Aerial Vehicles (UAVs) further elevates these advantages by incorporating flexible and auxiliary services. This evolving UAV-integrated VFC paradigm opens new doors while presenting unique complexities within the cooperative computation framework. Foremost among the challenges, modeling the intricate dynamics of aerial-ground interactive computing networks is a significant endeavor, and the absence of a comprehensive and flexible simulation platform may impede the exploration of this field. Inspired by the pressing need for a versatile tool, this paper provides a lightweight and modular aerial-ground collaborative simulation platform, termed AirFogSim. We present the design and implementation of AirFogSim, and demonstrate its versatility with five key missions in the domain of UAV-integrated VFC. A multifaceted use case is carried out to validate AirFogSim's effectiveness, encompassing several integral aspects of the proposed AirFogSim, including UAV trajectory, task offloading, resource allocation, and blockchain. In general, AirFogSim is envisioned to set a new precedent in the UAV-integrated VFC simulation, bridge the gap between theoretical design and practical validation, and pave the way for future intelligent transportation domains. Our code will be available at <https://github.com/ZhiweiWei-NAMI/AirFogSim>.
§ INTRODUCTION
The advent of Intelligent Transportation Systems (ITS) represents a monumental shift in the landscape of urban mobility, driven by a growing need for safer and more convenient transportation modes. Central to this transformation is the emergence of Connected and Autonomous Vehicles (CAVs), which epitomize the integration of cutting-edge technology with traditional vehicular networks<cit.>. The main distinguishing features of CAVs are the availability of various onboard sensors (e.g., cameras, LiDAR, radar, etc.) that generate massive amounts of data for perception and decision-making, as well as the enhancement of Vehicle-to-Everything (V2X) communications to interact with other entities. Nonetheless, this powerful combination of computation and communication capabilities rests on a massive volume of data generation, exchange, and processing (3 to 40 Gbit/s per CAV according to Tuxera<cit.>). As we embrace the concept of the metaverse in vehicular networks <cit.>, more computation-intensive technologies such as VR, AR, and Mixed Reality (MR) are beginning to intersect with autonomous driving.
Therefore, the next-generation ITS is witnessing a tangible development that demands sophisticated computation, high-bandwidth communication, and seamless collaboration among various network entities.
In response to these technological demands, Vehicular Fog Computing (VFC) has emerged as a crucial enabler within ITS. By decentralizing data processing and bringing computational resources closer to the edge of the network, VFC significantly reduces the latency associated with cloud-based processing and enhances the overall responsiveness of the system. VFC also proposes a fascinating incentive mechanism, where intelligent vehicles (including both moving and parked vehicles) with idle resources are motivated to serve as vehicular fog nodes<cit.>. Hereby, given the many-to-many matching dynamics between tasks and resources, the crux of VFC lies in adeptly managing the time-varying and distributed nature of computational tasks, focusing on the crucial but complicated computation offloading, which has garnered considerable attention recently<cit.>.
Though the VFC paradigm greatly alleviates the burden at the static roadside units (RSUs), pervasive communication and computing needs are still outpacing the capabilities of terrestrial vehicular networks. In the coming era, terrestrial vehicular networks and aerial infrastructures are expected to be integrated to provide more ubiquitous wireless connectivity and computing services. For now, divergent modules are mounted on Unmanned Aerial Vehicles (UAVs) to offer more efficient and flexible edge computing, and many researchers <cit.> have begun to explore the potential of UAV-integrated vehicular fog computing (as shown in Fig. <ref>). On the one hand, UAVs can be deployed as aerial base stations to provide ubiquitous coverage and seamless connectivity, especially in remote areas where terrestrial RSUs are not available. On the other hand, UAVs can be leveraged as mobile fog nodes to offload computation tasks from vehicles, reducing the burden on the ground fog nodes and improving the overall system performance.
However, the integration of UAVs into vehicular networks is accompanied by a host of challenges, including the prevention of control signal hijacking<cit.>, limited energy, and real-time trajectory planning.
While research in UAV-integrated VFC is burgeoning, a fundamental challenge persists: how to effectively and accurately model this intricate system and reliably evaluate the algorithm performance? Real-world simulations are often impractical due to the large scale of vehicles and the prohibitive costs associated with traffic disruption, so researchers resort to using simulators with synthetic or real-world data to validate their propositions. Nonetheless, the current landscape of simulation tools <cit.> reveals a critical gap. Many of these tools are either overly specialized, failing to cover the extensive exploration of research interests in the UAV-integrated ITS, or are bogged down by complexity that undermines their practical use in research. There lacks a comprehensive and user-friendly simulation platform, which represents a significant barrier restricting the full exploration for computation offloading in the UAV-integrated VFC.
This paper aims to address this gap. According to our comprehensive survey on the realm of UAV-integrated VFC, current simulators are either not standardized, limiting their applicability and interoperability, or are excessively cumbersome, posing challenges for efficient deployment and development. In response, our work proposes a light-weight and modular UAV-integrated aerial-ground collaborative vehicular fog computing simulation platform, named as AirFogSim. The constructed platform is unique in its modular design, enabling it to effectively simulate a variety of missions essential to the aerial-ground computation ecosystem. Moreover, the platform's modularity allows for easy extension to additional missions, making it a developing tool for a wide range of research applications in ITS. By providing this comprehensive platform, we aim to empower researchers and engineers to explore new frontiers in the UAV-integrated aerial-ground collaborative VFC, facilitating the development of innovative solutions in intelligent transportation systems. The main contributions of this paper are summarized as follows:
* In this work, we construct the AirFogSim, a versatile, lightweight, and modular simulation platform crafted for computation offloading in UAV-integrated aerial-ground collaborative VFC. This platform is meticulously aligned with contemporary research directions and delivers a comprehensive simulation environment adept at representing the complexities of the aerial-ground interactions. The design of AirFogSim incorporates a selection of current established standards<cit.>, thereby enhancing the accuracy and practicability of the platform.
* The AirFogSim supports five key missions in the UAV-integrated VFC, which are the RSU/Aerial base station (ABS) deployment, UAV trajectory planning, V2X task offloading, security and privacy, and dynamic resource allocation. By applying various network scheduler modules, the platform simulates the intricate communicating and computing interactions among vehicles, UAVs, and RSUs, and thus enables further research and development by exposing APIs.
* To demonstrate the practical utility and effectiveness of AirFogSim, we introduce a detailed use case of the platform. This use case involves the simulation of a UAV-integrated reliable V2X task offloading framework using the blockchain technology. The simulation results verify the platform's validity and demonstrate its capability to accurately model and analyze these complex interactions in UAV-integrated VFC scenarios.
The rest of this paper is organized as follows: Section <ref> illustrates the background and related work. Section <ref> introduces the system architecture of AirFogSim. Section <ref> presents five key missions supported by AirFogSim. Section <ref> describes the implementation and modeling of different functionalities. Section <ref> presents a practical use case. Section <ref> concludes this paper and proposes future research directions.
§ BACKGROUND AND RELATED WORK
In this section, we first introduce the architecture of UAV-integrated VFC. Then, we summarize the existing research in UAV-integrated VFC and the current simulators.
§.§ Background: UAV-Integrated VFC Architecture
Suppose vehicles, UAVs, RSUs, and cloud servers are deployed in a VFC environment. Figure <ref> represents the layered architecture and communication pathways of the UAV-integrated VFC paradigm.
Cloud Layer:
At the top of the paradigm lies the cloud layer, depicted with multiple cloud symbols indicating the expansive and powerful computational resources available through remote data centers. This layer represents the upper echelon of processing capability, suited for tasks requiring significant computational power and not time-sensitive.
UAV-Integrated Cloudlet Layer:
Below the cloud, there is the UAV-integrated cloudlet layer. This layer serves as an intermediary between the cloud and ground-level fog computing. Cloudlets are small-scale data centers that offer localized processing power, reducing latency for nearby devices. In this layer, UAVs are shown to facilitate the extension of cloudlet capabilities, implying that UAVs can carry small-scale computing infrastructure or act as communication relays to enhance the network. Both the static and mobile infrastructures are responsible for providing computation, storage, and networking services within a specific region, termed service zone in this paper.
UAV-Integrated Fog Layer:
On the ground level, there's the UAV-integrated fog layer, where most of the dynamic interactions occur. Here, UAVs are significantly involved in the direct processing and relaying of information. This layer showcases a rich network of connections among entities:
* Vehicles: Act as both data producers and consumers, equipped with sensors and communication capabilities.
* UAVs: Serve as mobile fog nodes that provide computation, storage, and networking services to nearby vehicles.
* RSUs: Fixed infrastructures that support communication and computation capabilities within their regions.
* Edge Servers: Localized servers providing processing and storage capabilities, wired to the RSUs.
§.§ UAV-Integrated Vehicular Fog Computing Missions
The UAV-integrated VFC paradigm enables a host of missions, including the task offloading, RSU/ABS deployment, UAV trajectory planning, security and privacy, and resource allocation. These missions call for diverse functionalities and operations in the simulation platform.
The joint task assignment and computation allocation to fog nodes is studied as a multi-objective minimization problem (concerning latency, energy, pricing cost, etc.), and solved via centralized or distributed methods including heuristic methods <cit.>, contract theory<cit.>, matching theory<cit.>, game theory <cit.>, reinforcement learning (RL) approaches <cit.>. UAVs are also considered flexible auxiliary nodes for computation offloading in post-disaster rescue <cit.>. In <cit.>, Liu et al. studied the UAV-assisted mobile edge computing with joint communication and computation resource allocation for vehicles.
These studies require simulation of the computation, communication, and energy models for performance validation.
Considering the varying computational capabilities, dynamic channel state information, and reliability of moving vehicles, the task offloading missions in UAV-integrated VFC is not merely an optimization problem but intertwines with multiple dimensions of vehicular networks. Reference <cit.> focused on traffic loads in heterogeneous VFC scenarios and executed computation offloading regarding the predicted network conditions. Besides prediction-based proactive schemes, reactive methods such as redundant resource allocation and service migration<cit.> can also be optimized to alleviate the uncertainty in vehicular networks. The dynamics of vehicular networks and the uncertainty of computation offloading are based on models including communication channel attributes, computation queues, road topologies, and mobility. Therefore, the simulation of both fog node network and traffic flow is a prerequisite for the task offloading mission.
As for RSU/ABS deployment issues, drones are leveraged to augment network coverage in underserved areas<cit.> and guarantee real-time safety of vehicles on highways<cit.>. This mission remains within the scope of computation and communication dynamics, as the RSU/ABS deployment is closely related to the traffic conditions and the network topology. In the realm of security and attacks, research is intensifying on addressing unique cybersecurity challenges, including data privacy and secure communication<cit.>. These mechanisms can be implemented through security operations such as authentication, encryption, and blockchain.
In <cit.>, Gupta et al. presented a blockchain-based secure scheme to prevent controller hijacking and man-in-the-middle attacks. Furthermore, the allocation of computational and communication resources among large-scale regions<cit.> and the exploration of economic models for resource sharing and trading through incentive CPU trading are gaining traction<cit.>, thereby fostering a collaborative and efficient vehicular network ecosystem on the basis of incentive mechanisms.
Overall, these research efforts are jointly devoted to a secure, sustainable, collaborative, and efficient computation framework in the UAV-integrated VFC, and requires supportive functionalities to validate the propositions.
§.§ Existing Simulation Platforms
By delving into the current research in Section <ref>, the required operations for a simulation platform to manage can be summarized as follows: Communication, Computation, Energy, Security, Mobility, Traffic, and Scalability. The scalability of the simulator is both in terms of the size of the simulation and the development of new modules. We survey the representative simulators relevant to these requirements and compare them with our proposed AirFogSim in Table <ref>.
General fog and edge computing simulators like IFogSim <cit.>, IFogSim2 <cit.>, and EdgeCloudSim<cit.> concentrate on computation and energy dynamics, yet they fall short in simulating critical aspects such as mobility and road topologies. Vehicular network-focused simulators, namely FogNetSim++<cit.> and Veins<cit.>, address more specialized requirements, while the former integrates computing features for complex network simulations, the latter excels in vehicular network and traffic simulations, albeit without comprehensive computation and energy components. VFogSim<cit.> represents a category of simulators specifically tailored for VFC, with robust communication and energy modeling capabilities. However, its lack of security features is a significant gap, given the increasing cybersecurity concerns in VFC environments. In the domain of UAVs, MARSIM<cit.> and Skywalker<cit.> emerge as specialized tools. MARSIM's focus on LiDAR-based UAV applications marks its niche in UAV-centric simulations, whereas Skywalker extends its utility to UAV-assisted federated computing, proving invaluable for smart city applications. However, both simulators lack the ability to simulate the urban road topology and traffic dynamics.
While existing simulators provide valuable insights into various aspects of VFC, they exhibit notable limitations in the context of computation offloading in UAV-integrated VFC. Addressing these limitations, our proposed AirFogSim offers functionalities in Table <ref>, as a modular, lightweight, and easily adaptable platform, making it a practical and efficient tool for evolving research requirements in this dynamic field.
§ SYSTEM ARCHITECTURE AND MODULE
This section provides an overview of the proposed simulation platform, AirFogSim.
As shown in Fig. <ref>, the proposed system architecture is stratified into four core parts: (1) Traffic Front-End, (2) Fog Network Simulation, (3) Environment Scheduler, and (4) Algorithm Application.
§.§ Traffic Front-End
The “Traffic Front-End” serves as the foundation of the simulation environment. It utilizes SUMO (Simulation of Urban Mobility) to generate synthetic vehicular mobility patterns or utilize real-world traffic data. The integration with Python allows for the visualization of traffic scenarios, vehicular flows, UAVs, and network topologies.
* Traffic Manager Module: Configures the traffic environment through a programmatic interface to SUMO. It provides the ability to generate synthetic traffic patterns or import real-world traffic data.
* Python-based Visualization: Interfaced with Python, this module provides an immersive visualization of traffic conditions, vehicular flows, and UAV movements. It offers both graphical and tabular representations of the simulation environment.
§.§ Fog Node Network
“Fog Node Network” epitomizes the computation, communication, and energy simulation in the environment.
* Communication Module: This module adheres to the 3GPP standards for channel propagation models. Researchers can adjust parameters online or offline to fit the dynamics of road traffic, vehicle flow, and physical obstructions that affect channel states. It offers a more lightweight and efficient solution compared with other simulators like OMNeT++<cit.> and WinProp<cit.>. Wired links are also supported to simulate the communication between RSUs and cloud/edge servers via M/M/1 queues.
* Computation Module: This module orchestrates computational tasks across diverse fog entities. It allows for the designation of different computational sequences and CPU allocation strategies. Tasks are stored in queues and processed according to the scheduling algorithms.
* Energy Module: The energy module is responsible for managing the energy consumed during transmission and computation of fog entities, especially for the UAVs. This module aims to optimize energy usage across the VFC ecosystem, ensuring sustainable operation without compromising performance.
* Synchronization Module: This module is responsible for time synchronization. Two time scales are supported: the simulation time scale determined by SUMO and the transmission time interval (TTI) for slot-wise computation and communication.
It ensures that all entities are operating on the same time scale, facilitating the coordination of tasks and resources.
§.§ Environment Scheduler
The “Environment Scheduler” is responsible for orchestrating the simulation environment, including the operations of security, tasks, and UAVs.
* Security Module: This module leverages blockchain and authentication technologies to validate the integrity and authenticity of computation services. It incorporates verification stages to ensure the results are trustworthy. Additionally, it utilizes reputation systems to evaluate and maintain the credibility of participating entities.
* Task Module: This module employs an incentive mechanism to encourage fog entities to participate in task computation and offloading processes actively. Vehicles may generate tasks and allocate resources to specific tasks according to this module.
* UAV Module: This module is dedicated to optimizing the flight paths and destinations. The optimization of 3-D trajectories takes into account factors such as UAV energy consumption, computation demands, and collisions.
§.§ Algorithm Application
The uppermost “Algorithm Application” is the bedrock for experimentation and development. It provides a flexible framework for researchers to test and evaluate algorithms.
* Objective Function Module: This module formulates an objective function that integrates trustworthiness metrics from the “Environment Scheduler” to serve as the foundation for optimization. It takes a multi-criteria approach to formulate the objective of each operation.
* Operator Module: This module is responsible for the implementation of the optimization algorithms, including offloading, resource allocation, block mining, etc.
* Data Analyses Module: This module is responsible for analyzing the data to guide the operations and collect the information as results.
§ SUPPORTED MISSIONS IN AIRFOGSIM
AirFogSim can support different missions thanks to the light-weight modules and multifunctional operations. In this section, we introduce the key missions in Fig. <ref>.
§.§ RSU/ABS Deployment
The RSU/ABS deployment directly influences the latency, coverage, and quality of service of the network. The objective is to optimize placement and operational parameters in line with the dynamic demands of vehicular networks and urban layouts. This mission in AirFogSim allows users to define parameters such as the number of RSUs/drones, their communication range, and processing capabilities. The simulation then proceeds to position the RSUs/drones within the virtual environment and evaluate the network performance under various traffic conditions.
§.§ UAV Trajectory Planning
The core objective of this mission is to develop and evaluate UAV flight strategies that minimize energy consumption and latency while maximizing the QoS and coverage. AirFogSim enables the simulation of various trajectory planning algorithms, including those based on predictive models that consider vehicular traffic patterns, urban topologies, and user demand forecasts.
§.§ V2X Task Offloading
V2X task offloading is a pivotal functionality where computational tasks are transferred from vehicles to edge computing devices or cloud servers. The primary objective is to optimize the distribution of computational tasks among vehicles, UAVs, RSUs, and cloud servers to enhance operational efficiency, reduce latency, and conserve vehicular computational resources.
§.§ Security and Privacy
The integration of authentication and blockchain technologies in vehicular networks introduces secure and privacy-preserving approaches to ensuring data integrity and trust among participants. The blockchain module in AirFogSim aims to simulate the process of verifying and adding transaction records to the blockchain, maintaining the ledger's reliability and security within the VFC paradigm. The authentication module, on the other hand, is responsible for verifying the identity of network entities and ensuring that only authorized users can access the system.
§.§ Resource Allocation
In computation offloading, efficient allocation of communication and computation resources is crucial for the performance of a UAV-integrated VFC system. This module is dedicated to optimizing the distribution of these resources among vehicles, UAVs, and RSUs to improve overall network throughput, reduce latency, and ensure fairness.
§.§ Other Missions
Researchers can further extend AirFogSim's capabilities to explore other aspects of UAV-integrated VFC systems. For example, electric vehicle (EV) charging can be merged into VFC<cit.>, which enables the simulation of EV charging stations and the allocation of charging resources to EVs at the expense of their CPU resources. This mission can be easily implemented by adding “battery” attributes to vehicles and developing a corresponding scheduler class.
§ DESIGN AND IMPLEMENTATION
In this section, we introduce the platform design and implementation, including visualization, propagation modeling, computation and transmission modeling, blockchain modeling, and attack modeling.
§.§ Visualization based on SUMO Traffic
Traffic flow generation serves as the foundation for simulating vehicular networks within our platform.
We utilize the SUMO tool and its Python interface to interactively handle large road networks and traffic flows.
A traffic manager object orchestrates the generation of vehicular traffic within the simulation. It manages the introduction of individual vehicles into the traffic flow, stipulating their points of origin and intended destinations. While synthetic data is adequate for a broad range of simulation missions, the integration of real-world traffic data can provide additional verisimilitude in traffic patterns.
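Assuming SUMO's TraCI Python API is the interface in question, a minimal sketch of driving a scenario step by step and reading back vehicle positions for the fog network could look as follows; the configuration, route, and vehicle identifiers are placeholders rather than files shipped with AirFogSim.

```python
import traci  # SUMO's TraCI Python API, assumed to be the interface in question

# Placeholder scenario, route, and vehicle identifiers.
traci.start(["sumo", "-c", "scenario.sumocfg"])
traci.vehicle.add("veh_0", routeID="route_0")       # inject one synthetic vehicle
for step in range(3600):                            # one hour of 1-s simulation steps
    traci.simulationStep()
    if "veh_0" in traci.vehicle.getIDList():
        x, y = traci.vehicle.getPosition("veh_0")   # positions feed the fog-node network
traci.close()
```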
§.§ Propagation Modeling
As discussed in 3GPP Release 15 <cit.> for cellular V2X enhancement, channel gain coefficients encompass the effects of frequency-independent large-scale fading (path loss, shadowing) and frequency-dependent small-scale fading (fast fading) in AirFogSim.
§.§.§ Path Loss Model
The path loss model for the WINNER scenarios<cit.> is carried out as:
PL = A log_10(d) + B + C log_10(f_c/D)
where d is the 3D distance between transmitter and receiver, and f_c is the carrier frequency. A, B, C, D are the fitting parameters, where A includes the path loss exponent, B is the intercept, C describes the path loss frequency dependence, and D is the scaling factor. The fitting parameters are environment and channel-specific. For example, the WINNER+ B1 urban scenario is adopted in 3GPP TR 36.885 <cit.> for V2V channels, whereby the fitting parameters are given as A=22.7, B=41.0, C=20, D=5.0. The path loss model can be easily changed according to practical requirements.
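A hedged sketch of the path loss evaluation with the WINNER+ B1 parameters quoted above; the helper name, the default carrier frequency, and the near-field guard are our choices, not the platform's actual API.

```python
import numpy as np

def path_loss_db(d, f_c=2.0, A=22.7, B=41.0, C=20.0, D=5.0):
    """WINNER+ B1 urban path loss with the parameters quoted above;
    d in metres, f_c in GHz (defaults are illustrative)."""
    d = np.maximum(d, 3.0)  # guard against the near-field singularity
    return A * np.log10(d) + B + C * np.log10(f_c / D)
```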
§.§.§ Shadow Fading Model
The shadow fading is assumed to initially follow the log-normal distribution with a fixed standard deviation<cit.>. The shadow fading is affected by the relative movements between entities (vehicles and UAVs) based on the distances moved in the last time step. This is done using an autoregressive model where the new shadow fading is a weighted combination of the previous shadow fading and a new shadowing term. The weights depend on the relative movements between entities and the decorrelation distance. The update process can be mathematically represented as:
S_i(t+1) = 10 ·log_10[ exp(-Δ d_i/d_corr) · 10^S_i(t)/10 + √(1-exp(-2 ·Δ d_i/d_corr)) · 10^N(0, σ_S_i)/10 ]
where S_i(t+1) represents the shadow fading of entity i at time step t+1, Δ d_i is the relative movement of entities for the i-th channel during the last time step, d_corr is the decorrelation distance, S_i(t) is the shadow fading at time step t, and N(0, σ_S_i) represents a normally distributed random variable with mean 0 and standard deviation σ_S_i. This formula allows for a dynamic update of the shadow fading as the relative positions of the entities change, thus improving the realism of the wireless signal strength simulations.
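The autoregressive update can be implemented directly from the equation above; the following sketch operates in dB, and the default decorrelation distance and shadowing standard deviation are illustrative assumptions rather than AirFogSim's configured values.

```python
import numpy as np

def update_shadowing_db(S_prev_db, delta_d, d_corr=10.0, sigma_db=3.0, rng=None):
    """Autoregressive shadow-fading update of the equation above (values in dB);
    delta_d is the relative movement of the link endpoints since the last step."""
    rng = rng if rng is not None else np.random.default_rng()
    innov_db = rng.normal(0.0, sigma_db, size=np.shape(S_prev_db))
    lin = (np.exp(-delta_d / d_corr) * 10.0**(np.asarray(S_prev_db) / 10.0)
           + np.sqrt(1.0 - np.exp(-2.0 * delta_d / d_corr)) * 10.0**(innov_db / 10.0))
    return 10.0 * np.log10(lin)
```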
§.§.§ Fast Fading Model
The fast fading is modeled as Rayleigh fading and assumed to be exponentially distributed with unit mean<cit.>.
Hereafter, the channel power gain of the i-th channel can be concluded as:
g_i^mode = (S_i / PL_i) · h_i^mode
where mode denotes the mode of different channel frequencies, S_i,PL_i,h_i are the shadow fading, path loss, and fast fading of the i-th channel, respectively.
Finally, we discuss the channel capacity of different link modes. Suppose that the i-th V2V channel is established between vehicle V_i (transmitter) and V_j (receiver); the transmission rate of the i-th channel can then be given by:
C^V2V_i = x^V2V_i,j B^V2V_ilog_2(1+γ ^V2V_i).
In Eq. (<ref>), x^V2V_i,j is the indicator variable showing whether V_i transmits to V_j, B^V2V_i is the allocated bandwidth (i.e., resource blocks, RBs), and γ^V2V_i is the signal-to-interference-plus-noise ratio (SINR) of the V2V communication. If the allocated RBs are shared among multiple V2V channels simultaneously, the SINR of the i-th V2V link is expressed as:
γ ^V2V_i=p_i^V2Vg^V2V_i/N_0+∑_V_m, V_n∈𝐕𝐞𝐡, m≠ ix^V2V_m,np_m^V2Vg_m^V2V
where N_0 is the power of complex Gaussian white noise and g^V2V_i denotes the channel gain of the i-th V2V links. If the RB is occupied by only one channel, the interference disappears and the SINR γ^V2V_i degenerates into SNR.
Similarly, the transmission capacities of the V2I, U2V, U2I, U2U, I2I, etc., channels can be derived from Eqs. (<ref>) and (<ref>).
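The rate computation of Eqs. (<ref>) and (<ref>) reduces to a few lines; the sketch below uses linear (non-dB) units and illustrative argument names of our own choosing:

```python
import numpy as np

def v2v_rate_bps(bandwidth_hz: float, tx_power_w: float, chan_gain: float,
                 interferer_powers_w, interferer_gains, noise_w: float) -> float:
    """Shannon rate of one V2V link with co-channel interference on shared RBs."""
    interference_w = float(np.sum(np.asarray(interferer_powers_w)
                                  * np.asarray(interferer_gains)))
    sinr = tx_power_w * chan_gain / (noise_w + interference_w)
    return bandwidth_hz * np.log2(1.0 + sinr)
```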
§.§ Computation and Transmission Modeling
This subsection elucidates the computational queuing model and the transmission scheme underpinning task offloading and execution.
§.§.§ Task Queue Model
For the computation model, we symbolize the task queue at any fog node X_j as ℐ_X_j. The state of the queue at any time t can be described by the tuple (I_1^X_j, I_2^X_j, …, I_n^X_j), where I_i^X_j represents the i-th task in the queue. Each task is further characterized by its own tuple {X_j, up_i^X_j, req_i^X_j, τ_i^X_j}, specifying the upload size, required compute cycles, and delay tolerance, respectively.
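For concreteness, the task tuple and per-node queue can be represented as plain data structures; the field names below are descriptive choices of ours, not the simulator's classes:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    node: str            # fog node X_j whose queue holds this task
    upload_size: float   # up_i^{X_j}, bits to transmit
    req_cycles: float    # req_i^{X_j}, CPU cycles to execute
    deadline_s: float    # tau_i^{X_j}, delay tolerance

@dataclass
class FogNode:
    node_id: str
    cpu_freq_hz: float
    queue: deque = field(default_factory=deque)  # FIFO task queue I^{X_j}
```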
§.§.§ CPU Resource Allocation
In each TTI, the CPU allocation strategy is determined by the fog node's scheduling algorithm. This strategy can be modeled by adjusting the allocation of CPU resources ϵ_j,k in the computation delay:
T^comp_X_j,k = req_k / (ϵ_j,k F_j)
The CPU resource allocation ϵ_j,k reflects the portion of the computing frequency F_j that is allocated to task k by device X_j. This allocation can be dynamic and governed by various scheduling algorithms that consider factors like task urgency, resource availability, and overall system optimization goals.
§.§.§ Transmission Model
The spectrum is divided into many closely spaced subcarriers, which are assigned to users in a dynamic manner. The transmission delay T^tran_i,k for the i-th sub-channel, tasked with transmitting the data for vehicle V_k, is inversely proportional to the sub-channel's capacity C_i^mode:
T^tran_i,k=up_k/C_i^mode
Here, C_i^mode encapsulates the effects of all sub-channel bandwidth allocation, modulation scheme, and the characteristics defined by the propagation modeling.
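Both delay terms are simple ratios; a minimal sketch (our helper names, not platform functions):

```python
def computation_delay_s(req_cycles: float, cpu_share: float, freq_hz: float) -> float:
    """T^comp = req_k / (eps_{j,k} * F_j), with cpu_share = eps_{j,k} in (0, 1]."""
    return req_cycles / (cpu_share * freq_hz)

def transmission_delay_s(upload_bits: float, rate_bps: float) -> float:
    """T^tran = up_k / C_i for the allocated sub-channel capacity."""
    return upload_bits / rate_bps
```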
§.§ Blockchain Modeling
Blockchain technology plays a pivotal role in ensuring the integrity and security of transaction data within a network. In each time slot, transactions are collected and added to a transaction pool. The blockchain modeling process can be summarized as follows:
§.§.§ Block Generation and Mining Process
A miner is selected in accordance with the consensus algorithm and employed by the blockchain system. This miner is responsible for generating a new block, which involves collating transactions from the pool, validating them, and then broadcasting the newly created block to the network.
§.§.§ Block Verification
Upon receipt of the new block, other nodes in the network undertake the verification process. This is a crucial step to ascertain the block's validity and to maintain the blockchain's overall consistency and reliability. Once verified, the block is appended to the blockchain, thus updating the ledger.
§.§.§ Reward Mechanism
The miner who successfully generates a block is rewarded for their contribution to the network. This reward typically comprises two components: the transaction fees and the block reward. Transaction fees are collected from the transactions included in the block, serving as an incentive for miners to prioritize transactions with higher fees. The block reward, usually a set number of cryptocurrency units, is granted as an additional incentive for participating in the block generation process.
Currently, the consensus algorithm supported in AirFogSim is Proof-of-Stake (PoS), owing to the limited onboard resources assumed for vehicles. Support for other consensus algorithms, such as Proof-of-Work (PoW) and Proof-of-Authority (PoA), is planned.
§.§ Attack Modeling
Similar to previous works<cit.>, three typical attacks are considered in the computation offloading of fog vehicles:
§.§.§ Identity Spoofing Attack
In the computing ecosystem, fog nodes are rewarded for their computation. An attacker may therefore disguise itself as a legitimate vehicle to collect the fees owed to other fog nodes. This attack can be prevented by the fog nodes' authentication mechanism.
§.§.§ Always-On Attack
In this attack, the attacker always returns false results to the offloaded tasks to obtain the computing fees without any computation costs.
§.§.§ On-Off Attack
Malicious fog vehicles obtain computing fees by returning correct results for a while and then returning false results so that the reputation can be maintained at a certain level.
These three attack models can be prevented by the well-defined reputation mechanism based on the blockchain technology in the AirFogSim platform. Additional attacks and prevention methods (cipher attack, Sybil attack, etc.) will be considered in future work.
§ CASE STUDY: A UAV-INTEGRATED RELIABLE V2X TASK OFFLOADING FRAMEWORK
In this section, we introduce a use case conducted by the AirFogSim platform.
§.§ Problem Formulation
The communication and mobility features of vehicles and UAVs are viewed as unchanged in each time slot (i.e., TTI). We use a task set ℐ[t]={I_t,1,I_t,2⋯} to denote the tasks generated by the task vehicles (TVs) in each time slot t, where I_t,k={V_k,up_t,k,req_t,k,τ_t,k}. A set of fog nodes is denoted as 𝒳={X_j} with computing frequencies F^X_j, including serving vehicles (SVs), UAVs, and RSUs.
The resource-constrained V2X task assignment problem coupled with UAV trajectories is then given by:
min_vars ∑_t,k[T^compE_t,k+ punish(1-∑_jμ_t,k,j)]
s.t. C1:x_t',k[t], y_t',k[t], μ_t',k,j∈{0,1}, ∀ t', k, t
C2: ∑ _jμ_t',k,j≤ 1, ∀ t',k
C3: x_t',k[t]+ y_t',k[t]≤∑ _jμ_t',k,j, ∀ t',k,t
C4:T_t',k^compE≥ t· y_t',k[t], ∀ t',k,t:t≥ t'
C5.1:T_t',k^compE≥ (T-t)· y_t',k[t], ∀ t',k,t:t≥ t'
C5.2:T_t',k^compS≤ T-T_t',k^compE, ∀ t',k,t:t≥ t'
C6:T_t',k^tranE≥ t· x_t',k[t], ∀ t',k,t:t≥ t'
C7:T_t',k^tranS≤ t· x_t',k[t], ∀ t',k,t:t≥ t'
C8:T_t',k^tranE≤ T_t',k^compS, ∀ t',k
C9:T^compE_t',k-t'≤τ_t',k/dt, ∀ t',k
C10:B[t]·∑_t',kx_t',k[t]≤ B, ∀ t
C11: F^X_j[t]·∑_t',kμ_t',k,j· y_t',k[t]≤F^X_j , ∀ j, t
C12: B[t]≤ B, ∀ t
C13: F^X_j[t]≤ F^X_j, ∀ t,j
C14: ∑_t=t':Tx_t',k[t]C_t',k,j[t]dt≥ up_t',k∑_jμ_t',k,j, ∀ t',k,j
C15: ∑_t=t':Ty_t',k[t]F^X_j[t]dt≥ req_t',k∑_jμ_t',k,j , ∀ t',k,j
C16: ||δ^uav[t]-δ^uav[t-1]||_2≤ v_max· dt, ∀ t
The decision variables are:
vars= {x_t',k[t], y_t',k[t], μ_t',k,j, T^compS_t',k,
T^compE_t',k, T^tranS_t',k, T^tranE_t',k, B[t], F^X_j[t], C_t',k[t], δ^uav[t]}
where x_t',k[t], y_t',k[t] and μ_t',k,j are binary decision variables indicating whether task I_t',k is being transmitted or computed at TTI t and whether it is offloaded to fog node X_j, respectively. T^compS_t',k, T^compE_t',k, T^tranS_t',k, and T^tranE_t',k represent the start and end times for computation and transmission of task I_t',k. B[t] is the bandwidth allocated at time slot t, and F^X_j[t] represents the computation resources allocated by fog device X_j at time slot t. C_t',k[t] represents the data transmission rate at time slot t for task I_t',k. δ^uav[t] is the position vector of the UAVs at time slot t. punish is the penalty term incurred when a task fails to be offloaded, and dt is the length of a time slot.
The constraints in Eq. (<ref>) are explained as follows:
C1 is the binary constraint on the decision variables, C2 is the offloading constraint, C3 couples transmission and computation with offloading, C4 & C5 are the computation time constraints, C6 & C7 are the transmission time constraints, C8 enforces that transmission finishes before computation starts, C9 is the delay constraint, C10∼ C13 are the resource allocation constraints, C14 is the transmission constraint, C15 is the computation constraint, and C16 is the UAV speed constraint.
Due to the complexity of the original problem, we decompose it into three sub-problems: the UAV trajectory planning, V2X task assignment, and resource allocation problem.
§.§ Settings and Visualization
Methodology: We choose a square region in Berlin, Germany, spanning 2 km × 2 km, as our simulation area because of its complex road topology. The environment is equipped with two RSUs and populated by four UAVs. The RSUs are strategically positioned at the coordinates (500,500) and (1500,1500) within the simulation area, while the UAVs are initially randomly dispersed. The UAVs are modeled to maintain a constant altitude of 100 m.
Detailed simulation parameters are shown in Table <ref>.
Simulation Results: The visualization of our simulation environment is depicted in Fig. <ref>.
Changed Modules: Map and road topologies are extracted by the OpenStreetMap and the traffic is generated and controlled by the object. The routes of vehicles are stipulated by randomly selecting the start/end intersections.
§.§ K-Means for UAV Trajectory
Methodology:
K-Means clustering algorithm is used to divide the vehicles into several clusters. The UAVs are controlled to fly to the center of the clusters to collect the data from vehicles. Then, based on the positions of UAVs and RSUs, each vehicle decides the belonging service zone by selecting the nearest zone manager. Each vehicle can only offload its tasks to the fog nodes within its service zone.
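A minimal sketch of this clustering step (plain Lloyd's iterations on vehicle positions; the function and variable names are illustrative, and the platform may equally use a library implementation):

```python
import numpy as np

def plan_uav_waypoints(vehicle_xy: np.ndarray, n_uavs: int,
                       n_iter: int = 20, seed: int = 0) -> np.ndarray:
    """K-means on vehicle positions; each centroid is the next waypoint of one UAV."""
    rng = np.random.default_rng(seed)
    centers = vehicle_xy[rng.choice(len(vehicle_xy), size=n_uavs,
                                    replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every vehicle to its nearest UAV centroid
        dists = np.linalg.norm(vehicle_xy[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_uavs):
            if np.any(labels == k):
                centers[k] = vehicle_xy[labels == k].mean(axis=0)
    return centers
```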
Simulation Results:
In Fig. <ref>, we show the trace of the four UAVs in the system. The UAVs are initially randomly dispersed in the simulation area.
Changed Modules: The function in Fig. <ref> is changed to the K-Means algorithm.
§.§ Window-Based Hungarian for V2X Task Offloading
Methodology: The Hungarian algorithm, also known as the Kuhn-Munkres algorithm, is a combinatorial optimization algorithm that solves the assignment problem in polynomial time. For our scenario, the problem is to find the assignment of tasks to devices that minimizes the total cost within a time window ws (set to 10 TTIs in this part), where the cost can be interpreted as the resource consumption or delay incurred by assigning a specific task to a particular device. When a task is offloaded to a device at time slot t, the task must be completed by time slot t+ws.
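The per-window assignment can be prototyped with an off-the-shelf solver; the sketch below assumes a precomputed cost matrix (estimated completion delay of task i on node j within the window, np.inf if infeasible) and is not the platform's actual scheduler interface:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks_in_window(cost: np.ndarray, fail_cost: float) -> dict:
    """One window of task-to-fog-node assignment via the Hungarian algorithm."""
    safe = np.where(np.isfinite(cost), cost, fail_cost)   # mask infeasible pairs
    rows, cols = linear_sum_assignment(safe)
    return {int(i): int(j) for i, j in zip(rows, cols) if safe[i, j] < fail_cost}
```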
Simulation Results: The comparison of latency, complexity, and successful ratio is shown in Table <ref> and Fig. <ref>. Detailed analyses of the results are presented in Section <ref>.
Changed Modules: In the algorithm application module, the function is changed to the window-based Hungarian algorithm.
§.§ Alternating Optimization for Resource Allocation
Methodology:
Given a fixed task assignment relationship, the time slot allocation problem for the joint communication and computation resources is formulated as a mixed-integer linear programming (MILP) problem, which is NP-hard. However, we find that the discrete time-slot variables can be relaxed to continuous variables separately in the transmission slot allocation problem and the computing slot allocation problem. This is mainly because the optimal solution of the MILP problem equals that of its LP relaxation whenever the time-sharing condition holds<cit.>. Therefore, it leaves room for iterative optimization of the two sub-problems.
In this part, we propose an approach that relies on the principles of Alternating Optimization (AO) to mitigate this complexity. Our AO-based two-step strategy breaks the original problem down into manageable sub-problems, each focusing on either the transmission time or the computational resource allocation. The AO-based strategy is shown in Algorithm <ref>; its result is the optimal x and y that determine the time slot allocations. The AO-based strategy is guaranteed to converge to a global optimum since, given the fixed task assignment matrix, the two slot allocation problems are convex.
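The alternation itself can be written as a small driver around the two convex sub-problem solvers; the solver callables below are placeholders for whatever LP/convex routine is used, and the stopping rule is an assumption of ours:

```python
def alternating_optimization(solve_tx_slots, solve_cpu_slots,
                             x_init, y_init, max_iter: int = 20, tol: float = 1e-4):
    """Alternate between the transmission-slot and computing-slot sub-problems.

    solve_tx_slots(y)  -> (x, objective): optimal transmission slots given y
    solve_cpu_slots(x) -> (y, objective): optimal computing slots given x
    """
    x, y, prev_obj = x_init, y_init, float("inf")
    for _ in range(max_iter):
        x, _ = solve_tx_slots(y)
        y, obj = solve_cpu_slots(x)
        if abs(prev_obj - obj) < tol:   # objective has stopped improving
            break
        prev_obj = obj
    return x, y
```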
Simulation Results:
As shown in Table <ref>, we compare WHO (the joint window-based Hungarian and AO method), the Gurobi solver (a commercial solver whose solutions are deemed ground truth), and greedy optimization in a small-scale area. From Table <ref>, we can observe that the WHO method achieves performance similar to that of the Gurobi solver while having much lower computational complexity. When the numbers of SVs and TVs reach 10, the task offloading problem can no longer be solved exactly within 500 seconds, which is impractical in real-world scenarios. The WHO method thus strikes a good trade-off between performance and complexity.
In Fig. <ref>, the number of TVs is fixed at 50, and the varying number of SVs reflects different serving situations. When the number of UAVs is 4 and the total number of vehicles is less than 140, the latency of WHO drops sharply compared with the greedy algorithm in Fig. <ref>, which indicates better utilization of the fog nodes' computation capabilities. When the number of vehicles becomes larger, the average latency is then constrained by the communication costs, with a successful ratio of 68% in Fig. <ref>.
However, when the number of UAVs is increased to 6, the aerial-to-ground communication costs are reduced, and the average latency drops accordingly. The successful ratio of WHO grows steadily to 80% as the number of SVs increases in Fig. <ref>, while that of the greedy algorithm is 72%.
Changed Modules: In the algorithm application module, the two corresponding functions are changed. In detail, the solution is derived by the AO-based strategy in Algorithm <ref>, and the resource allocation results are stored as properties in the algorithm module and returned directly when these two functions are called.
§.§ Proof-of-Stake for Blockchain Mining
Methodology:
We propose a blockchain-enabled framework that records every offloading transaction, thus ensuring an immutable and transparent task management process.
Consensus mechanisms play a vital role in the integrity of the blockchain. Given the computational intensity and energy inefficiency of PoW, our approach adopts the PoS consensus algorithm. This method offers a more sustainable alternative, significantly reducing the energy footprint by selecting validators (namely RSUs) based on the number of tokens they hold and are willing to stake. Each recorded transaction within the blockchain encapsulates the crucial details of the offloading event: the identifier of the paying vehicle, the receiving fog nodes, the monetary amount, and the associated task profiles, including computational requirements and expected execution time frames. The block generation policy in our framework is twofold: blocks are produced at fixed time intervals of one second, and a new block is initiated once the transaction count reaches a threshold of one hundred, thereby balancing timeliness with transactional throughput.
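The stake-weighted validator selection and the twofold block generation policy can be sketched as follows (the helper names and stake bookkeeping are illustrative assumptions, not the platform's classes):

```python
import random

def select_pos_validator(stakes: dict, rng: random.Random) -> str:
    """Pick one RSU as validator, with probability proportional to its stake."""
    rsu_ids, weights = zip(*stakes.items())
    return rng.choices(rsu_ids, weights=weights, k=1)[0]

def should_seal_block(pending_tx: int, elapsed_s: float,
                      tx_threshold: int = 100, interval_s: float = 1.0) -> bool:
    """Twofold policy: seal on a fixed 1 s interval or once 100 transactions queue up."""
    return pending_tx >= tx_threshold or elapsed_s >= interval_s
```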
Simulation Results:
As shown in Fig. <ref>, the number of certified transactions per second on the blockchain fluctuates around 110, which is the expected value given the task completion ratio of 58.9% in Fig. <ref> when the numbers of serving vehicles and task vehicles are both 50. The transaction rate is stable and reliable, which is the basis of the blockchain-enabled task offloading system.
Changed Modules: In the algorithm application module, the corresponding function is changed to the PoS consensus algorithm. The transaction-generation step is then changed to generate the blockchain transactions according to the computation results in each time slot.
§ CONCLUSION AND FUTURE WORK
In this paper, we presented AirFogSim, a simulation platform that contributes to addressing the challenges of computation offloading in UAV-integrated VFC. Compared with current simulators, the proposed AirFogSim offers a more comprehensive and realistic simulation environment, focusing on the unique characteristics of UAVs and VFC in multiple layers, and providing several key missions in this field.
We also demonstrated the capabilities of AirFogSim through a case study of computation offloading in VFC. The results show that AirFogSim can effectively simulate the complex interactions between UAVs and vehicles.
Future work includes enriching AirFogSim with more diverse missions and robust security models and applying the platform to a broader range of applications in ITS. Our aim is to continuously refine AirFogSim, making it an increasingly effective tool for the research community and contributing to the evolution of intelligent transportation systems.
§ REFERENCES
wireless_era_X_ChengX. Cheng, R. Zhang, and L. Yang, “Wireless Toward the Era of Intelligent Vehicles,” IEEE Internet of Things Journal, vol. 6, no. 1, pp. 188-202, Feb. 2019.
blog_cavdata Tuxera, "Autonomous cars – the data storage challenge," Tuxera Blog, [Online]. Available: https://www.tuxera.com/blog/autonomous-cars-300-tb-of-data-per-year/. [Accessed: Nov. 28, 2023].
AIGC_vehicular_metaverse_XuXu, Minrui, et al. "Generative AI-Empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses." arXiv preprint arXiv:2302.08418 (2023).
foggy_MuhamM. A. U. Rehman, M. Salah ud din, S. Mastorakis, and B. -S. Kim, “FoggyEdge: An Information-Centric Computation Offloading and Management Framework for Edge-Based Vehicular Fog Computing,” IEEE Intelligent Transportation Systems Magazine, vol. 15, no. 5, pp. 78-90, Sept.-Oct. 2023.
folo_zhu C. Zhu et al., “Folo: Latency and Quality Optimized Task Allocation in Vehicular Fog Computing,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4150-4161, Jun. 2019.
contract_matching_zhouZ. Zhou et al., “Computation Resource Allocation and Task Assignment Optimization in Vehicular Fog Computing: A Contract-Matching Approach,” IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3113-3125, Apr. 2019.
ocvcZ. Wei, B. Li, R. Zhang, X. Cheng, and L. Yang, “OCVC: An Overlapping-Enabled Cooperative Vehicular Fog Computing Protocol,” IEEE Transactions on Mobile Computing, vol. 22, no. 12, pp. 7406-7419, Dec. 2023.
vfc_priority_Jinming_ShiJ. Shi, J. Du, J. Wang, J. Wang, and J. Yuan, “Priority-Aware Task Offloading in Vehicular Fog Computing Based on Deep Reinforcement Learning,” IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 16067-16081, Dec. 2020.
se_vfcX. Liu, W. Chen, Y. Xia, and C. Yang, “SE-VFC: Secure and Efficient Outsourcing Computing in Vehicular Fog Computing,” IEEE Transactions on Network and Service Management, vol. 18, no. 3, pp. 3389-3399, Sept. 2021.
large_scale_vfcY. Hou, Z. Wei, R. Zhang, X. Cheng, and L. Yang, “Hierarchical Task Offloading for Vehicular Fog Computing Based on Multi-Agent Deep Reinforcement Learning,” IEEE Transactions on Wireless Communications.
madrlZ. Wei, B. Li, R. Zhang, X. Cheng, and L. Yang, “Many-to-Many Task Offloading in Vehicular Fog Computing: A Multi-Agent Deep Reinforcement Learning Approach,” IEEE Transactions on Mobile Computing.
trust_compS. Xu, C. Guo, R. Q. Hu, and Y. Qian, “Blockchain-Inspired Secure Computation Offloading in a Vehicular Cloud Network,” IEEE Internet of Things Journal, vol. 9, no. 16, pp. 14723-14740, 15 Aug.15, 2022.
traffic_load_vecA. Bozorgchenani, S. Maghsudi, D. Tarchi, and E. Hossain, “Computation Offloading in Heterogeneous Vehicular Edge Networks: On-Line and Off-Policy Bandit Solutions,” IEEE Transactions on Mobile Computing, vol. 21, no. 12, pp. 4233-4248, 1 Dec. 2022.
redundant_resourceA. S. Shafigh, B. Lorenzo, S. Glisic, and Y. Fang, “Low-Latency Robust Computing Vehicular Networks,” IEEE Transactions on Vehicular Technology, vol. 72, no. 2, pp. 2130-2144, Feb. 2023.
coverage_uavM. Samir, D. Ebrahimi, C. Assi, S. Sharafeddine, and A. Ghrayeb, “Leveraging UAVs for Coverage in Cell-Free Vehicular Networks: A Deep Reinforcement Learning Approach,” IEEE Transactions on Mobile Computing, vol. 20, no. 9, pp. 2835-2847, 1 Sept. 2021.
highway_uav J. Li, X. Cao, D. Guo, J. Xie, and H. Chen, “Task Scheduling With UAV-Assisted Vehicular Cloud for Road Detection in Highway Scenario,” IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7702-7713, Aug. 2020.
disaster_Y_WangY. Wang et al., “Task Offloading for Post-Disaster Rescue in Unmanned Aerial Vehicles Networks,” IEEE/ACM Transactions on Networking, vol. 30, no. 4, pp. 1525-1539, Aug. 2022.
joint_Y_LiuY. Liu et al., “Joint Communication and Computation Resource Scheduling of a UAV-Assisted Mobile Edge Computing System for Platooning Vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 8435-8450, Jul. 2022.
blockchain_evastropR. Gupta, M. M. Patel, S. Tanwar, N. Kumar, and S. Zeadally, “Blockchain-Based Data Dissemination Scheme for 5G-Enabled Softwarized UAV Networks,” IEEE Transactions on Green Communications and Networking, vol. 5, no. 4, pp. 1712-1721, Dec. 2021.
ifogsimGupta, Harshit, et al. “iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, Edge and Fog Computing Environments.” Software: Practice and Experience 47.9 (2017): 1275-1296.
ifogsim2Mahmud, Redowan, et al. “iFogSim2: An Extended iFogSim Simulator for Mobility, Clustering, and Microservice Management in Edge and Fog Computing Environments.” Journal of Systems and Software 190 (2022): 111351.
edgecloudsimSonmez, Cagatay, Atay Ozgovde, and Cem Ersoy. “Edgecloudsim: An Environment for Performance Evaluation of Edge Computing Systems,” Transactions on Emerging Telecommunications Technologies 29.11 (2018): e3493.
fognetsim++T. Qayyum, A. W. Malik, M. A. Khan Khattak, O. Khalid, and S. U. Khan, “FogNetSim++: A Toolkit for Modeling and Simulation of Distributed Fog Environment,” IEEE Access, vol. 6, pp. 63570-63583, 2018.
veins C. Sommer, R. German, and F. Dressler, “Bidirectionally Coupled Network and Road Traffic Simulation for Improved IVC Analysis,” IEEE Transactions on Mobile Computing, vol. 10, no. 1, pp. 3-15, Jan. 2011.
vfogsimÖ. U. Akgül, W. Mao, B. Cho, and Y. Xiao, “VFogSim: A Data-Driven Platform for Simulating Vehicular Fog Computing Environment,” IEEE Systems Journal, vol. 17, no. 3, pp. 5002-5013, Sept. 2023.
marsimF. Kong et al., “MARSIM: A Light-Weight Point-Realistic Simulator for LiDAR-Based UAVs,” IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2954-2961, May. 2023.
skywalkerK. Hayawi, Z. Anwar, A. W. Malik, and Z. Trabelsi, “Airborne Computing: A Toolkit for UAV-Assisted Federated Computing for Sustainable Smart Cities,” IEEE Internet of Things Journal, vol. 10, no. 21, pp. 18941-18950, 1 Nov.1, 2023.
3gpp_r15 3GPP, “Technical Specification Group Radio Access Network; Study Enhancement 3GPP Support for 5G V2X Services; (Release 15),” Technical Specification (TS) TR 22.886, 3rd Generation Partnership Project (3GPP), Mar. 2017. Version 15.3.0.
winner2_model P. Kysti, J. Meinil, L. Hentil, X. Zhao, and T. Rautiainen, “IST-4-027756 Winner II D1.1.2 v1.2 Winner II Channel Models,” 2008.
3gpp_36885 3GPP, “Technical Specification Group Radio Access Network; Study LTE-Based V2X Services; (Release 14),” Technical Specification (TS) 36.885, 3rd Generation Partnership Project (3GPP), Jun. 2016. Version 14.0.0.
3gpp_36777 3GPP, “Technical Specification Group Radio Access Network; Study on Enhanced LTE Support for Aerial Vehicles; (Release 15),” Technical Specification (TS) 36.777, 3rd Generation Partnership Project (3GPP), Dec. 2017. Version 15.0.0.
cloudsimCalheiros, Rodrigo N., et al. “CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms.” Software: Practice and experience 41.1 (2011): 23-50.
omenet++A. Varga and R. Hornig, “An Overview of the OMNeT++ Simulation Environment,” in Proc. 1st Int. Conf. Simul. Tools Techn. Commun., Netw. Syst. Workshops, 2008, pp. 1–10.
sumo_2012D. Krajzewicz, J. Erdmann, M. Behrisch, and L. Bieker, “Recent Development and Applications of SUMO - Simulation of Urban MObility,” International Journal on Advances in Systems & Measurements, 2012.
winpropR. Hoppe, G. Wölfle, and U. Jakobus, “Wave Propagation and Radio Network Planning Software WinProp Added to the Electromagnetic Solver Package FEKO,” in Proc. Int. Appl. Comput. Electromagn. Soc. Symp., 2017, pp. 1–2.
evfogZ. Wei, B. Li, R. Zhang, and X. Cheng, “Contract-Based Charging Protocol for Electric Vehicles With Vehicular Fog Computing: An Integrated Charging and Computing Perspective,” IEEE Internet of Things Journal, vol. 10, no. 9, pp. 7667-7680, 1 May., 2023.
spectrum_vehicle_jsac_2019 L. Liang, H. Ye, and G. Y. Li, “Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2282–2292, 2019.
time_sharing_WC_2006 W. Yu and R. Lui, “Dual Methods for Nonconvex Spectrum Optimization of Multicarrier Systems,” IEEE Transactions on Communications, vol. 54, no. 7, pp. 1310–1322, 2006.
|
http://arxiv.org/abs/2409.03738v1 | 20240905175223 | Horizontal norm compatibility of cohomology classes for $\mathrm{GSp}_{6}$ | [
"Syed Waqar Ali Shah"
] | math.NT | [
"math.NT",
"math.RT",
"11R23, 11F70 (Primary) 20E42, 20G25, 22D99 (Secondary)"
] |
§ ABSTRACT We establish abstract horizontal
norm relations involving the unramified Hecke-Frobenius polynomials that correspond under the Satake isomorphism to the degree eight spinor L-factors of GSp_6. These relations apply to classes in the degree seven motivic cohomology of the Siegel modular sixfold obtained
via Gysin pushforwards of Beilinson's Eisenstein symbol
pulled back on one copy in a triple product of modular curves. The proof is based on a novel approach that circumvents the failure of the so-called multiplicity one hypothesis in our setting, which precludes the applicability of an existing technique. In a sequel, we combine our result with the previously established vertical norm relations for these classes to obtain new Euler systems for the eight dimensional Galois representations associated with certain non-endoscopic cohomological cuspidal automorphic representations of GSp_6.
§ INTRODUCTION
Ever since the pioneering work of Kolyvagin, the machinery of Euler systems has become a standard tool for probing the structure of Selmer groups of global Galois representations and for establishing specific instances of Bloch-Kato and Iwasawa main conjectures. Recently, there has been an interest in constructing Euler systems for Galois representations found in the cohomology of Siegel modular varieties.
In <cit.>, the authors constructed an Euler system for certain four dimensional Galois representations found in the middle degree cohomology of the GSp_4 Siegel modular variety. They also introduced a new technique of using local zeta integrals that has been applied with great success in many other settings (<cit.>, <cit.>, <cit.>, <cit.>).
The natural successor of GSp_4 in Euler system based investigations is the Siegel modular variety attached to GSp_6. This is a sixfold
whose middle degree cohomology realizes the composition of the spin representation with the GSpin_7-valued Galois representation associated under Langlands correspondence with certain
cohomological
cuspidal automorphic representations of GSp_6 <cit.>, <cit.>.
A standard paradigm for constructing Euler systems for such geometric Galois representations is via pushforwards of a special family of motivic cohomology classes known as Eisenstein symbols.
A natural candidate class
in the GSp_6 setting is the pushforward of the Eisenstein symbol pulled
back on one copy in a triple product of modular curves. Besides having the correct numerology, this particular choice of pushforward is motivated by a period integral of Pollack and Shah <cit.>, who showed that integrating certain cusp forms of GSp_6 against an Eisenstein series on one copy in a triple product of _2 retrieves the degree eight (partial) spinor L-function for that cusp form. In <cit.>, the authors use this period integral to relate the regulator of our candidate class in Deligne-Beilinson cohomology to non-critical special values of the spinor L-function, thereby providing
evidence that it sits at the bottom of a non-trivial
Euler system whose behaviour can be explicitly tied to special L-values.
To construct an Euler system above this class, one needs to produce classes going up the abelian tower over that satisfy among themselves two kinds of norm
relations. One of these is the vertical relations that see variation along the _p-extension and are Iwasawa theoretic in nature. These have already been verified in <cit.> using a general method later axiomatized in <cit.>.
The other and typically more challenging kind is the horizontal relations that see variation along ray class extensions and involve local L-factors of the Galois representation. These present an even greater challenge in the GSp_6 case, since one is dealing with a non-spherical pair of groups and the so-called multiplicity one hypothesis on a local space of linear functionals fails to hold. In particular, the technique of local zeta integrals of <cit.> and its variants cannot be applied in this situation to establish horizontal norm compatibility.
The purpose of this article is to establish the ideal version of this compatibility using a fairly general method developed by us in a companion article <cit.>, thereby completing the Euler system construction envisioned in <cit.>.
For convenience and to free up notations that play no role outside the proof of our norm relations, we have chosen to cast our result in the framework of abstract cohomological Mackey (CoMack) functors[the more relaxed notion of “Mackey functor" is referred to as a “cohomology functor" in <cit.>].
The application to p-adic étale cohomology and the actual Euler system construction is recorded in a sequel <cit.>. In future, we also expect to establish an explicit reciprocity law relating this Euler system to special values of the spinor L-function by means of a p-adic L-function, thereby making progress on the Bloch-Kato and Iwasawa main conjectures in this setting.
§.§ Main result
Let 𝐆 = GSp_6, 𝐆̃ = 𝐆×𝔾_m and 𝐇 = GL_2×_𝔾_mGL_2×_𝔾_mGL_2 where the products in 𝐇 are fibered over the determinant map. There is a natural embedding ι : 𝐇↪𝐆 and if sim : 𝐆→𝔾_m denotes the similitude map, then post composing ι with 1_𝐆×sim : 𝐆→𝐆̃ gives us an embedding
ι̃ : 𝐇↪𝐆̃
via which we view 𝐇 as a subgroup of 𝐆̃. For ℓ a rational prime, let G_ℓ denote the group of ℚ_ℓ-points of 𝐆 and
let ℋ_R denote the spherical Hecke algebra of G_ℓ with coefficients in a ring R. For c an integer, let ℌ_ℓ,c(X) ∈ℋ_ℤ[ℓ^-1][X]
denote the unique polynomial in X such that for any (irreducible) unramified representation π_ℓ of G_ℓ and any spherical vector φ_ℓ∈π_ℓ,
ℌ_ℓ,c(ℓ^-s) ·φ_ℓ = L(s+c, π_ℓ , Spin) ^-1·φ_ℓ
for all s ∈ℂ. Here L(s, π_ℓ, Spin) denotes the spinor L-factor of π_ℓ normalized as in <cit.>.
Fix any finite set S of rational primes and let G, G̃, H denote the group of ℚ_S·𝔸_f^S-points of 𝐆, 𝐆̃, 𝐇
respectively.
Fix also a neat compact open subgroup K ⊂ G such that K is unramified at primes away from S. Let 𝒩 denote the set of all square free products of primes outside S (where the empty product means 1) and for n ∈𝒩, denote
K[n] = K ×∏_ℓ∤ nℤ_ℓ^×∏_ℓ| n ( 1+ ℓℤ_ℓ) ⊂G̃ .
Let 𝒪 be a characteristic zero integral domain such that ℓ∈𝒪^× for all ℓ∉ S. Denote by 𝒮 = 𝒮_𝒪 the 𝒪-module of all locally constant compactly supported functions χ :Mat_2 × 1 (_f) ∖{ 0 }→𝒪 such that χ =
f_S⊗χ ^S where f_S is a fixed function on Mat_2 × 1 (_S) that is invariant under (_S ) under the natural left action of H on such functions. We view the association V ↦𝒮(V) that sends a compact open subgroup V of H to the V-invariants of 𝒮 as a CoMack functor for H.
Let U = H ∩ K[1] and let
ϕ =
f_S⊗(^S) ∈𝒮_(U)
where ^S = ∏_ℓ∉ S _ℓ denotes integral adeles away from S. Finally, let Frob_ℓ denote (ℓ_ℓ^×).
[Theorem <ref>] For any 𝒪-Mod valued cohomological Mackey functor M_G̃ for G̃,
any Mackey pushforward ι̃_* : 𝒮→ M_G̃ and any integer c, there exists a collection of classes y_n∈ M_G̃(K[n]) indexed by n ∈𝒩 such that y_1 = ι̃ _U, K[1],*(ϕ) and
[ℌ_ℓ,c(Frob_ℓ)]_*( y_n ) = pr_K[nℓ], K [n],*(y_nℓ)
for all n , ℓ∈𝒩 such that ℓ is a prime and ℓ∤ n.
Here for a locally constant compactly supported function f : G̃→𝒪, [ f ]_* denotes the covariant action of f and _* denotes the trace
map of the functor M_G̃. For sufficiently negative c, the Hecke polynomial ℌ_ℓ, c(X) has coefficients in ℋ_. For such c, the condition on invertibility of primes outside S in 𝒪 can be dropped.
In the intended application, the functor 𝒮 over parametrizes weight-k Eisenstein classes in the first motivic cohomology of the modular curve. Its composition with the étale regulator admits a _p-valued version by <cit.>, which ensures integrality of classes in Galois cohomology corresponding to all choices of integral Schwartz functions.
The set S corresponds to the set of “bad primes" where the behaviour of Eisenstein classes is pathological and the function f_S is therefore not perturbed for Euler system purposes. The functor for G̃ is the degree seven absolute étale cohomology on which (ℓ_ℓ^× ) acts covariantly as
arithmetic Frobenius. Moreover the
pushforward ι̃ _* is obtained via the Gysin triangle in Ekedahl's “derived" category of lisse étale p-adic sheaves along with certain branching laws of coefficient sheaves on the underlying Shimura varieties. The abstract formalism of functors used above applies to this cohomology theory by various results established in
<cit.>.
The bottom class y_1 in our Euler system is meant to be a geometric incarnation of the Rankin-Selberg period integral of Pollack-Shah <cit.>[This integral is denoted by I(ϕ, s) in loc. cit.] and is expected to be related to certain special values of the degree eight spinor L-function via this period. See also
<cit.>.
§.§ Our approach
While Theorem <ref> is the key relation required for an Euler system, its proof relies on a far more fundamental and purely local relation that lies at the heart of our approach. In a nutshell, our approach posits that if the convolutions of all ‘twisted’ restrictions to H_ℓ = 𝐇( ℚ_ℓ ) of the Hecke-Frobenius polynomial with the unramified Schwartz function ϕ_ℓ = ( [ ℤ_ℓ; ℤ_ℓ ] ) fall in the image of certain trace maps, then Theorem <ref> follows. This local relation is also exactly what is needed in <cit.>, as it allows us to synthesize the results of <cit.> with our own.
We state this relation precisely. In analogy with the global situation, let 𝒮_ℓ denote the set of all 𝒪-valued locally constant compactly supported functions on Mat_2 × 1 ( ℚ_ℓ ). Again, this is a smooth H_ℓ-representation which we view as a CoMack functor for H_ℓ. Denote G̃_ℓ = 𝐆̃(ℚ_ℓ) and K̃_ℓ = 𝐆̃(ℤ_ℓ). For a compactly supported function ℌ̃ : G̃_ℓ→𝒪 and g ∈G̃_ℓ,
the
(H_ℓ,g)-restriction of ℌ̃ is the function
𝔥_g : H_ℓ→𝒪 h ↦ℌ̃(hg) .
If ℌ̃ is K̃_ℓ-biinvariant, then 𝔥_g is left invariant under U_ℓ = H_ℓ∩K̃_ℓ and right invariant under H_ℓ, g = H_ℓ∩ g K̃_ℓ
g^-1. It therefore induces
an 𝒪-linear map 𝔥_g,* : 𝒮_ℓ (U_ℓ) →𝒮_ℓ
(H_ℓ , g ).
Let V_ℓ , g denote the subgroup of all elements in H_ℓ, g whose similitude lies in 1 + ℓ_ℓ.
[Theorem <ref>] Suppose in the notation above, ℌ̃ = ℌ_ℓ,c(Frob_ℓ) where c is any integer. Then 𝔥_g,*(ϕ_ℓ)
lies in the image of the trace map _* : 𝒮_ℓ (V_ℓ,g) →𝒮_ℓ (H_ℓ, g) for every g ∈G̃_ℓ.
Results analogous to Theorem <ref> were obtained in <cit.>, which strengthen the norm relations of <cit.> and <cit.> to their ideal (motivic) versions. The machinery of <cit.> takes Theorem <ref> as input and gives Theorem <ref> as output, and can also easily incorporate vertical norm compatibility once a local result has been established, say, in the style of <cit.>.
Our approach
has also been successfully applied in forthcoming works to obtain new Euler systems for certain exterior square motives in the cohomology of GU_2,2 Shimura varieties <cit.> and for certain rank seven motives of type G_2 <cit.>. All these results taken together point towards an intrinsic “trace-imbuing" property of Hecke polynomials attached to Langlands L-factors that seems to be preserved under twisted restrictions on suitable reductive
subgroups. We
hope to explain this property more conceptually at a future point.
§.§ Outline
We prove Theorem <ref> by explicitly computing the convolutions of twisted restrictions of ℌ̃ = ℌ_ℓ,c(Frob_ℓ) with ϕ_ℓ. As this is rather involved, we have divided the article into two parts, the first containing mainly statements and the second their proofs. Below we provide an outline of the key steps.
Note first of all that if 𝔥_g,*(ϕ_ℓ ) lies in the image of the trace map, so does 𝔥_η g γ,*(ϕ_ℓ ) for any η∈ H_ℓ and γ∈K̃_ℓ. Thus it suffices to compute 𝔥_g,*(ϕ_ℓ ) for g running over a choice of representatives for
H_ℓ\ H_ℓ·(ℌ̃ ) / K̃_ℓ.
Since multiplies of ℓ - 1 obviously lie in the images of trace maps that concern us, it also suffices to compute these functions modulo ℓ - 1. This allows us to completely bypass the computation of ℌ_ℓ,c(X) by a property of Kazhdan-Lusztig polynomials. It is also straightforward to restrict attention to ℌ : = ℌ_ℓ,c(1) ℓ - 1 by first restricting ℌ̃ to G_ℓ. The problem is then reduced to computing U_ℓ-orbits on certain double coset spaces K_ℓ g K_ℓ/ K_ℓ where K_ℓ = (_ℓ ) and (K_ℓ g K_ℓ) is a Hecke operator in ℌ.
The key technique that allows us to compute these orbits is a recipe of decomposing parahoric double cosets proved in <cit.>. It is originally due to Lansky <cit.> in the setting of Chevalley groups.
However even with the full force of this recipe, directly computing the U_ℓ-orbits on all the relevant double coset spaces is a rather formidable task, particularly because the pair ( , ) is not spherical. See also Remark <ref>.
What makes this computation much more tractable is the introduction of an intermediate group that allows us to compute the twisted restrictions in two steps. In the first step, we compute the restrictions of ℌ with respect to the group H' _ℓ = '(_ℓ) where ' = _2× __mGSp_4 .
The pair (', ) is spherical, and a relatively straightforward
computation shows that there are three H'_ℓ-restrictions corresponding to the representative elements
τ_0 = ([ 1 ; 1 ; 1 ; 1; 1; 1 ] ),
τ_1 = ([ ℓ 1; ℓ 1; ℓ ; 1; 1; 1 ] ),
τ_2 = ([ ℓ ℓ^-1; ℓ ℓ^-1; 1 ; ℓ^-1; ℓ^-1; 1 ])
in G_ℓ. This is expected since a general “Schröder type" decomposition holds for the quotient H'_ℓ\ G_ℓ / K_ℓ by a result of Weissauer <cit.>. We denote the (H'_ℓ, τ_i)-restrictions of ℌ by 𝔥_i. This step is recorded in <ref> and justifications are provided in <ref>.
The second step is to compute the H_ℓ-restrictions of 𝔥_i for i = 0,1,2. This essentially turns out to be a study of _2×__m_2-orbits on GSp_4-double cosets. Since (_2× __m , _2 , GSp_4 ) is also a spherical pair, this is again straightforward for i = 0 and even for i =1 as the projection of H'_ℓ∩τ_1 K_ℓτ_1^-1 to the GSp_4(_ℓ)-component turns out to be a non-special maximal compact open subgroup of GSp_4(_ℓ). The more challenging case of i = 2 is handled by comparing the double cosets with a subgroup of GSp_4(_ℓ) deeper than the Iwahori subgroup that sits in the projection of the twisted intersection. For 𝔥_0 (resp., 𝔥_1), there turn out to be three (resp., four) restrictions indexed again by certain “Schröder type" representatives.
For 𝔥_2 however, there turn out to be ℓ + 3 restrictions.
We use the symbols ϱ, ς, ϑ for the set of distinct representatives of H_ℓ\ H_ℓ·(ℌ) /K_ℓ which correspond to the H_ℓ-restrictions of 𝔥_0, 𝔥_1, 𝔥_2 respectively. The diagram below organizes these restrictions in a tree.
ℌ [dd] [rrrd] [lld]
𝔥_0 [rd] [d] [ld] 𝔥_2 [lld] [ld] [d] [rd] [rrd]
𝔥_ϱ_0 𝔥_ϱ_1 𝔥_ϱ_2 𝔥_1 [lld] [ld] [d] [rd] 𝔥_ϑ_0 𝔥_ϑ_1 𝔥_ϑ_2 𝔥_ϑ_3 𝔥_ϑ̃_k
𝔥_ς_0 𝔥_ς_1 𝔥_ς_2 𝔥_ς_3
Here the branch indexed by ϑ̃_k actually designates ℓ - 1 branches, one for each value of k ∈{0,1, 2,3,…, ℓ-2}. Thus H_ℓ\ H_ℓ·(ℌ) / K_ℓ consists of 3 + 4 + (4 + ℓ -1 ) = ℓ + 10 elements. The corresponding ℓ + 10 restrictions are recorded in <ref> and proofs of various claims are provided in <ref>. Once these restrictions are obtained, the final step is to compute their covariant convolution with ϕ_ℓ. We show in <ref> that all resulting convolutions vanish modulo ℓ - 1 except for 𝔥_ϑ_3,*(ϕ_ℓ).
A necessary and sufficient criteria established in <cit.> allows us to easily determine that 𝔥_ϑ_3,*(ϕ_ℓ) lies in the image of the appropriate trace map and thus deduce the truth of Theorem <ref>.
For comparison, the GSp_4 setting studied in <cit.> involved only 2 restrictions, which explains why the test vector of <cit.> only required two terms to produce the L-factor.
The mysterious vanishing of all but one of the convolutions modulo ℓ - 1 and
the simplicity of 𝔥_ϑ_3,*(ϕ_ℓ) strongly suggest that a more conceptual proof of our result is possible.
§.§ Acknowledgements
I would like to express my gratitude to Antonio Cauchi and Joaquín Rodrigues Jacinto, whose work on Beilinson conjectures and vertical norm relations in the GSp_6 setting served as the inspiration for this article. I am especially indebted to Antonio Cauchi for his careful explanation of the unfeasibility of a related construction and for his unwavering support
throughout the course of this project. In addition, I thank Aaron Pollack, Andrew Graham, Christophe Cornut, Barry Mazur, Daniel Disegni, David Loeffler and Wei Zhang for several valuable conversations in relation to the broader aspects of this work. I am also grateful to Francesc Castella, Naomi Sweeting and Raúl Alonso Rodríguez for some useful comments and suggestions. At various stages, the software MATLAB® was used for performing and organizing symbolic matrix manipulations, which proved
invaluable in composing many of the proofs.
PART:
Statements of results
§ GENERAL NOTATION
The notations introduced here are used throughout this article except for <ref>. For aesthetic reasons, we work with an arbitrary local field of characteristic zero, though we only need the results over _ℓ.
Let F denote a local field of characteristic zero, _F its ring of integers, ϖ a uniformizer, = _F / ϖ_F its residue field and q = | |. For a ≥ 0 an integer, we let [_a] ⊂_F denote a fixed set of representatives for _a = _F / ϖ^a_F and we omit the subscript a when a = 1. We let 0,1,-1 ∈ [] denote the elements that represent 0,1,-1 ∈ respectively. For n an integer, let 1_n denote the n × n identity matrix and J_2n = ( [ 1_n; - 1_n ] ) denote the standard 2n × 2n
symplectic matrix. We define GSp_2n to be the group scheme over whose R-points for a ring R are given by
GSp_2n(R) = { (g ,c )∈_2n(R) × R^× | g^t J_2n g = c J_2n} .
Note that GSp_2 is the general linear group _2. We let sim : GSp_2n→_m, (g,c) ↦ c denote the similitude map and refer to an element (g, c) ∈GSp_2n(R) simply by g. The following group schemes will be used throughout:
* 𝐇 = GL_2×_𝔾_mGL_2×_𝔾_mGL_2,
* 𝐇_1 = GL_2,
* 𝐇_2 = GL_2×_𝔾_mGL_2,
* 𝐇' = GL_2×_𝔾_mGSp_4,
* 𝐇'_2 = GSp_4,
* 𝐆 = GSp_6
where all the products are fibered over similitude maps. We define H, H_1, H_2, H ', H_2', G to be respectively the group of F-points of the algebraic groups above and U, U_1, U_2, U', U_2 ', K to be the group of _F-points.
We define projections
_1: ⟶_1 _2 : ⟶_2 _1': ' ⟶_1 _2': ' ⟶'_2
(h_1,h_2,h_3)
⟼h_1 (h_1, h_2, h_3 ) ⟼(h_2,h_3) (h_1, h_2 )
⟼h_1
(h_1, h_2) ⟼h_2
and embeddings
_2 : _2 ⟶'_2 : ⟶' ι' : ' ⟶
( (
[ a b; c d ] ) ,
(
[ a' b'; c' d' ] ) ) ⟼( [ a b; a ' b'; c d; c' d ' ] ) (h_1, h_2, h_3) ⟼(h_1, _2(h_2, h_3) ) ( [ a b; c d ] , [ A B; C D ] ) ⟼[ a b; A B; c d; C D ]
via which we consider U_2, H_2, U, H, U', H' to be subgroups of U_2 ', H_2', U', H', K, G respectively. We let
ι : →
denote the composition ι '
∘ via which we view U, H as subgroups of K, G respectively. If R is a commutative ring with identity and L_1, L_2 are compact open subgroups of G, we write 𝒞_R(L_1\ G / L_2) for the set of R-valued compactly supported functions f : G → R that are left L_1-invariant and right L_2-invariant. Similar notations will be used for functions on H and H '.
Given a function 𝔉 : G → R and an element g ∈ G, we define the (H', g)-restriction of 𝔉 to be the function 𝔣_g : H' → R given by 𝔣_g(h) = 𝔉(hg) for all h ∈ H '. We similarly define (H,g)-restriction of 𝔉 and (H,η)-restrictions of functions on H' and η∈ H'.
It is easy to see that if 𝔉∈𝒞_R(K\ G /K), then 𝔣_g∈𝒞_R(U' \ H' / H_g' ) where H'_g = H '∩ g K g^-1. If η∈ H', then the (H,η)-restriction of 𝔣_g coincides with the (H, η g)-restriction of 𝔉 and lies in 𝒞_R(U\ G / H_η g) where H_η g = H ∩η H_g ' η^-1 = H ∩η g K g^-1η^-1.
§ SPINOR HECKE POLYNOMIAL
§.§ Root datum of
Let 𝐀 = _m^4 and dis : 𝐀→𝐆 to be the embedding given by
(u_0, u_1,u_2, u_3 ) ↦diag( u_1 , u_2 , u_3 , u_0 u_1 ^-1, u_0 u _2 ^-1 , u_0 u _3 ^-1 ) .
Then dis
identifies 𝐀 with a maximal (split) torus in 𝐆. We let A, A^∘ = A ∩ K denote respectively the group of F, _F-points of 𝐀. Let e_i : 𝐀→_m be the projection onto the i-th component, f_i : _m→𝐀 be the cocharacter inserting u into the i-th component with 1 in the remaining components. We will let
Λ = f_0⊕⋯⊕ f_3
denote the cocharacter lattice. An element a_0 f_0 + … + a_3 f_3∈Λ will also be denoted by (a_0, …, a_3 ). The set Φ⊂ X^*(𝐀 ) of roots of 𝐆 are
* ± ( e_i - e_j ) for 1 ≤ i < j ≤ 3,
* ± ( e_i + e_j - e_0 ) for 1 ≤ i < j ≤ 3
* ± ( 2 e_i - e_0 ) for i = 1 , 2 , 3
which makes an irreducible root system of type C_3. We choose
α_1 = e_1 - e_2, α_2 = e_2 - e_3, α_3 = 2 e_3 - e_0
as our simple roots and let Δ = {α_1 , α_2 , α_3}. This determines a subset Φ ^ + ⊂Φ of positive roots. The
resulting half sum of positive roots is
δ = -3 e_0 + 3e_1 + 2 e_2 + e_3∈ X^*( 𝐀)
and the highest root is α_0 = 2 e_1 - e_0.
The simple coroots corresponding to α_i for i = 0, 1, 2, 3 are
α_0 ^ ∨ = f_1, α_1^∨ = f_1 - f_2, α_2^∨ = f_2 - f_3, α_3^∨ = f_3
and their span in Λ is denoted by Q ^∨. The set Δ determines a dominance order on Λ. Explicitly, an element λ = (a_0 , …, a_3 ) ∈Λ is dominant iff
a_1≥ a_2≥ a_3 and 2 a_3 - a_0≥ 0 .
It is anti-dominant if all these inequalities hold in reverse. We denote the set of dominant cocharacters by Λ^+.
Let W denote the Weyl group of (, 𝐀) and s_i be the reflection associated with α_i, i = 0, …, 3. The action of s_i on Λ is given as follows:
* s_i acts by switching f_i↔ f_i+1 for i = 1,2,
* s_3 acts by sending f_0↦ f_0 + f_3, f_3↦ - f_3,
* s_0 = s_1 s_2 s_3 s_2 s_1 acts by sending f_0↦ f_0 + f_1, f_1↦ - f_1.
We have W = ⟨ s_1, s_2, s_3⟩≃ ( /2 )^3⋊ S_3 where S_3 denotes the group of permutations of three elements that acts on ( / 2 ) ^3 in the obvious manner.
§.§ Iwahori Weyl group
Let I denote the Iwahori subgroup of G corresponding to (the alcove determined by) the simple affine roots Δ_aff = {α_1, α_2, - α_0 + 1 }. Explicitly, I is the compact open subgroup of K whose reduction modulo ϖ is the Borel subgroup of 𝐆() determined by Δ. Let W_aff and W_I denote respectively the affine Weyl and Iwahori Weyl groups of the pair (𝐆, 𝐀). We view W_aff as a subgroup of the group of affine transformations of Λ⊗. Given λ∈Λ, we let t(λ) denote translation by λ map on Λ⊗ and write ϖ^λ for the element λ(ϖ) ∈ A. Let v : A / A ^∘→Λ be the inverse of the isomorphism Λ→ A / A ^∘ given by λ↦ϖ^-λ A^∘. Then
* W_aff = t(Q^∨ ) ⋊ W
* W_I = N_G(A) / A^∘ = A/A^∘⋊ W v≃Λ⋊ W,
where N_G(A) denotes the normalizer of A in G. The set S_aff = { s_1 , s_2 ,s_3 , t ( α _ 0 ^ ∨ ) s_0} is a generating set for W_aff and the pair (W_aff , S_aff) forms a Coxeter system of type C̃_3. Identifying W_I with Λ⋊ W as above, we can consider W_aff a subgroup of W_I via W_aff = t(Q^∨) ⋊ W ↪ t(Λ) ⋊ W. The quotient
Ω : = W_I / W_aff
is then an infinite cyclic group and we have a canonical isomorphism W_I≅ W_aff⋊Ω.
We let
ℓ : W_I→
denote the induced length function with respect S_aff. Given λ∈Λ, the minimal length of elements in t ( λ ) W is achieved by a unique element. This length is given by
ℓ_min ( t(λ) ) : = ∑_α∈Φ_λ | ⟨λ , α⟩ | + ∑ _ α∈Φ^λ ( ⟨λ , α⟩ - 1 )
where Φ_ λ = {α∈Φ ^ + | ⟨λ , α⟩≤ 0 } and Φ ^ λ = {α∈Φ ^ + , | ⟨λ , α⟩ > 0 }. When λ is dominant, this is also the minimal length of elements in W t(λ) W. Consider the following elements in N_G (A):
w_1 : = 1.1( [ 0 1 ; 1 0 ; 1 ; 0 1 ; 1 0 ; 1 ] ),
w_2 : = ( 1.1[ 1 ; 0 1 ; 1 0 ; 1 ; 0 1; 1 0 ]) , w_3 : = 1.1( [ 1 ; 1 ; 0 1; -1 ; -1 ; 1 0 ] )
w_ 0 : = 1.1( [ 0 11ϖ ; 1 ; 1 ; ϖ 0; -1; -1 ] ), ρ = 0.95( [ 1; 1 ; 1 ; ϖ ; ϖ ; ϖ ] ) .
The classes of w_0 , w_1 ,w_2, w_3 in W_I represent t(α_0 ^∨ ) s_0, s_1 , s_2, s_3 respectively and the reflection s_0 is represented by w_α_0 : = ϖ^f_1 w_0 = w_1 w_2 w_3 w_2 w_1. The class of ρ represents ω : = t(-f_0 ) s_3 s_2 s_3 s_1 s_2 s_3 which is a generator of Ω and the conjugation by ω acts by switching s_0↔ s_3, s_1↔ s_2. That is, it induces an automorphism of the extended Coxeter-Dynkin diagram
0.9[extended,Coxeter,
edge length=1cm,
labels=0,1,2,3]
C3
where the labels below the vertices correspond to w_i. Note also that ρ^2 = ϖ^(2,1,1,1)∈ A is central. We will henceforth use the letters w_i, ρ to denote both the matrices and the their classes in W_I if no confusion can arise. When referring to action of simple reflections in W on Λ however, we will stick to the letters s_i.
§.§ The Hecke polynomial
Let [ Λ ] denote the group algebra of Λ. For λ∈Λ, we let e ^ λ∈ [ Λ ] denote[this is done to distinguish the addition in Λ from addition in the group algebra] the element corresponding to λ
and e ^ W λ∈ [ Λ ] denote the the (formal) sum of elements in the orbit W λ.
We will denote y_i : = e^f_i∈ [ Λ ] for i = 0 , … 3, so that
[ Λ ] = [ y_0 ^± , …, y_3 ^ ± ] .
Let ℛ = ℛ_q denote the ring [ q ^ ±1/2 ]. The dual group of 𝐆 has an 8-dimensional representation called the spin representation. Its highest (co)weight is f_0 + f_1 + f_2 + f_3 which is minuscule. Thus its (co)weights are 1/2 ( 2 f_0 + f_1 + f_2 + f_3 ) + 1/2 ( ± f_1± f_2± f_3 ) and its characteristic (Satake) polynomial is
𝔖 _spin ( X ) = ( 1- y_0 X ) ( 1- y_0 y_1 X) ( 1- y_0 y_2 X ) ( 1 - y_0 y_3 X)
( 1- y_0 y_1 y_2 X ) ( 1- y_0 y_1 y_3 X )( 1- y_0 y_2 y_3 X ) ( 1 - y_0 y_1 y_2 y_3 X ) ∈ [ Λ ] ^ W (X) .
Let ℋ_ℛ(K \ G / K ) denote the spherical Hecke algebra with coefficients in ℛ that is defined with respect to a measure on G giving K measure one. Let
𝒮 : ℋ_ℛ(K \ G / K ) →ℛ [ Λ ] ^ W
denote the Satake isomorphism. If P = P(X) ∈ℋ_ℛ(K\ G / K )[X] is a polynomial, then 𝒮(P) means the polynomial in ℛ[Λ]^W[X] obtained by applying 𝒮 to the coefficients of the powers of X in P.
For c ∈, we define the degree 8 spinor Hecke polynomial ℌ_spin,c(X) ∈ℋ_ℛ(G) [X] to be unique polynomial such that 𝒮 ( ℌ_spin, c ) = 𝔖_spin( q^-c X).
To work with this Hecke polynomial and to describe the decompositions of the double coset operators appearing in it later on, it would be convenient to record the following.
For each λ∈Λ ^ + below, the element w = w _λ∈ W_I specified is the unique element in W_I of minimal possible length such that K ϖ ^ λ K = K w K.
* λ = (1,1,1,1), w = ρ,
* λ = (2,2,1,1), w = w_0ρ^2,
* λ = (2,2,2,1), w = w_0 w_1 w_0ρ^2,
* λ = (3,3,2,2), w = w_0 w_1 w_2 w_3ρ^3,
* λ = ( 4,3,3,3), w = w_0 w_1 w_0 w_2 w_1 w_0ρ^4,
* λ = ( 4,4,2,2 ), w = w_0 w_1 w_2 w_3 w_2 w_1 w_0ρ^4.
We point out that the translation component of each w_λ above (i.e., the Λ-component in W_I = Λ⋊ W) is t(- λ^opp)
where λ^opp is the anti-dominant element in the Weyl orbit W λ. The minimal
possible length in each case is computed using (<ref>) and that ℓ ( w _λ ) = ℓ_min ( t ( - λ ^ opp ) ) = ℓ _ min ( t ( λ ) ).
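As a quick check of this formula, take λ = (2,2,1,1) from item (2) above: the positive roots pairing to zero with λ are e_2 - e_3, e_2 + e_3 - e_0, 2e_2 - e_0 and 2e_3 - e_0, while e_1 - e_2, e_1 - e_3, e_1 + e_2 - e_0, e_1 + e_3 - e_0 and 2e_1 - e_0 pair to 1, 1, 1, 1, 2 respectively, so that
ℓ_min ( t(λ) ) = 0 + (1-1) + (1-1) + (1-1) + (1-1) + (2-1) = 1 ,
matching ℓ ( w_0ρ^2 ) = 1 since ρ^2 is central and hence of length zero.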
For convenience, we will notate
υ_0 = w_0 , υ_1 = w_0 w_1 w_0 , υ _2 : = w_0 w_1 w_2 w_3 , υ_3 : = w_0 w_1 w_0 w_2 w_1 w_0 , υ_4 = w_0 w_1 w_2 w_3 w_2 w_1 w_0
Given g ∈ G, we let (K gK ) denote the characteristic function (K g K ) : G → of the double coset K g K.
For an even integer k, we let ρ^k (K g K ) denotes the function (K g ρ^k K ). We will use similar notation for sums of such functions and for functions on H' and H.
The coefficients of ℌ_spin,c(X) lie in ℋ_[q^-1](K \ G / K ) for all c ∈. If we define
ℌ (X) = ( K ) - (K ρ K ) X + 𝔄 X^2 - 𝔅 X^3 + ( ℭ + 2 ρ^2𝔄 )X ^4
- ρ^2𝔅 X^5 + ρ^4𝔄 X^6
- ( K ρ ^ 7 K ) X ^ 7 + ( K ρ ^ 8 K ) X ^ 8 ∈ ℋ_ ( K \ G / K ) [ X ]
where
* 𝔄 = ( K υ_1ρ^2 K ) + 2 ( K υ _0ρ^2 K ) + 4 ( K ρ^2 K ),
* 𝔅 = ( K υ_2ρ ^3 K ) + 4 ( K ρ ^ 3 K ),
* ℭ = ( K υ_3ρ^4 K ) + ( K υ_4ρ^4 K ),
then ℌ_spin,c (X) is congruent to
ℌ (X) modulo q - 1 for all c ∈.
Since the half sum of positive roots (<ref>) lies in X^*(𝐀), the first claim is obvious from the discussion in <cit.>. Solving the plethysm problem for exterior powers of the spin representation by combining i choices of coweights ( 1 , 12 , 12 , 12 ) + ( 0, ±12, ±12, ±12 ) for i = 0, …, 8 or simply by expanding 𝔖_spin(X), we see that
𝔖 _ spin ( X) = 1 - e ^ W(1,1,1,1) X
+ ( e ^ W ( 2,2,2,1) + 2 e ^W(2,2,1,1) + 4 e ^ (2,1,1,1) ) X^2 - ( e^W(3,3, 2,2) + 4 e ^W(3,2,2,2) ) X ^3
+ ( e ^ W ( 4,4,2,2) + e ^ W ( 4,3,3,3) + 2 e ^ W ( 4,3,3,2) + 4 e ^ W ( 4,3,2,2) + 8 e ^ ( 4,2,2,2) ) X^4
- ( e ^ W ( 5,4,3,3) + 4 e ^ W ( 5, 3 , 3 ,3 ) ) X ^ 5 + ( e ^ W ( 6, 4,4,3) + 2 e ^ W (6,4,3,3) + 4 e ^ ( 6,3,3,3) ) X ^ 6
- e ^ W ( 7,4,4,4) X^7 + e ^ ( 8 , 4 , 4, 4 ) X ^ 8
The claim now follows by Lemma <ref> and <cit.>.
The exact coefficients in the Hecke polynomial are polynomial expressions in q translated by (possibly negative) powers of q. They can be found explicitly using Sage by computing appropriate Kazhdan-Lusztig polynomials P_σ,τ(q) for σ, τ∈ W_I. See <cit.> for an example.
§ RESTRICTION TO GL2 × GSP4
In what follows, we will denote
ℌ = ℌ(1)
= ( 1 + ρ^8 ) (K) - ( 1 + ρ^6 ) (Kρ K) + (1 + 2 ρ^2 + ρ^4 ) 𝔄 - ( 1 + ρ^2 ) 𝔅 + ℭ
considered as an element of 𝒞_ ( K \ G / K ). Note that ℌ≡ℌ_spin, c(1) modulo q - 1 for all c ∈ by Proposition <ref>. Note also that ρ^k for even k is an element of H (and H'). We wish to write the -restrictions of ℌ. To this end, let us introduce the following elements in G:
τ_0 = 1_G,
τ_1 = ([ ϖ 1; ϖ 1; ϖ ; 1; 1; 1 ] ),
τ_2 = ([ ϖ 1ϖ; ϖ 1ϖ; 1 ; 1ϖ; 1ϖ; 1 ]).
For w ∈ W_I, we denote ℛ(w) = U' \ K w K / K. When listing elements of ℛ(w), we will only write the representative element and it will be understood that no two elements represent the same double coset. Similar convention will be used for other double coset spaces.
With notations and conventions as above,
* ℛ(ρ)=
{ϖ^(1,1,1,1), τ_1},
* ℛ(υ_0ρ^2)=
{ϖ^(2,2,1,1), ϖ^(2,1,2,1), ϖ^(1,1,0,0)τ _1},
* ℛ (υ_1ρ^2)=
{ϖ^(2,2,2,1), ϖ^(2,1,2,2), ϖ^(1,1,1,0)τ _1, ϖ^(1,1,0,1)τ_1, ϖ^(2,1,1,1)τ_2},
* ℛ (υ_2ρ^3)
={ϖ^(3,3,2,2), ϖ^(3,2,3,2), ϖ^(2,2,1,1)τ_1, ϖ^(2,1,2,1)τ_1, ϖ^(2,2,0,1)τ_1, ϖ^(2,1,1,2) τ_1, ϖ^(3,2,1,2)τ_2},
* ℛ (υ_3ρ^4)
={ϖ^(4,3,3,3), ϖ^(3,2,2,2)τ_1, ϖ^(4,2,2,3)τ_2},
* ℛ
(υ_4ρ^4)=
{ϖ^(4,4,2,2), ϖ^(4,2,4,2), ϖ^(3,3,1,1) τ_1, ϖ ^ (3,2,0,1)τ_1, ϖ^(4,3,1,2)τ_2}.
Moreover, H'τ_i K ∈ H ' \ G / K are pairwise distinct for i = 0 , 1 , 2.
A proof of this is provided in <ref>.
A quick check on our lists of representatives for each ℛ(w) above is through computing their classes in K \ G / K. These should return ϖ^λ on the diagonal where λ corresponds to w in Lemma <ref>. The distinctness of our representatives is also easily checked using a Cartan style decomposition proved in <ref>. What is difficult however is establishing that these represent all the orbits of U' on K w K / K and this is where bulk of the work lies.
H ' \ H ' · (ℌ ) / K = {τ_0, τ_1, τ_2}. In particular if g ∈ G is such that H'gK ≠ H'τ_iK for i = 0,1,2, then (H', g)-restriction of ℌ is zero.
The is clear from the expression (<ref>) and Proposition <ref>.
For i = 0 , 1 , 2, we let 𝔞_i, 𝔟_i, 𝔠_i, _i∈𝒞_(U' \ H' / H_τ_i') denote the (, τ_i)-restriction of 𝔄, 𝔅, ℭ, ℌ respectively.
Here for g ∈ G, _g denotes the compact open subgroup H ' ∩ g K g^-1 of H '. As before, we omit writing for characteristic functions. By Proposition <ref>, we have
(K ρ K ) = (U' ϖ^(1,1,1,1)K) + ( U' τ_1K ) .
Since U' ϖ^λ K ⊂ H' K for any λ∈Λ and U' τ_1 K ⊂ H' τ_1 K, the (H', τ_i)-restrictions of (K ρ K) for i = 0, 1, 2 are given by
(U' ϖ^(1,1,1,1) U') , (U' H_τ_1') , 0
respectively. Proceeding in a similar fashion, we find that
𝔞_0 = (U' ϖ^(2,2,2,1) U' ) + (ϖ^(2,1,2,2)) + 2 (ϖ^(2,2,1,1)) + 2 (ϖ^(2,1,2,1)) + 4 (ϖ^(2,1,1,1) ),
𝔞_1 = (ϖ^(1,1,1,0)_τ_1 ) + (U ' ϖ^(1,1,0,1)_τ_1 ) + 2 (ϖ^(1,1,0,0)_τ_1 ),
𝔞_2 = (ϖ^(2,1,1,1)_τ_2) ,
𝔟_0 = (ϖ^(3,3,2,2) ) + (ϖ^(3,2,3,2)) + 4 (ϖ^(3,2,2,2) ),
𝔟_1 = (ϖ^(2,2,1,1)_τ_1 ) + (ϖ^(2,1,2,1)_τ_1 ) + (ϖ^(2,2,0,1)_τ_1 ) + (ϖ^(2,1,1,2)_τ_1 ) + 4 (ϖ^(2,1,1,1)_τ_1 ) ,
𝔟_2 = (ϖ^(3,2,1,2) H_τ_2 ' ) ,
𝔠_0 = (ϖ^(4,3,3,3) ) + (ϖ^(4,4,2,2) ) + (ϖ^(4,2,4,2) ) ,
𝔠_1 = (ϖ^(3,2,2,2)_τ_1 ) + (ϖ^(3,3,1,1)_τ_1) + (ϖ^(3,2,0,1)_τ_1 ) ,
𝔠_2 = (ϖ^(4,2,2,3)_τ_2 ) + (ϖ^(4,3,1,2)_τ_2).
Using expression (<ref>), we find that
𝔥_0 = (1 + ρ^8) (U') - (1+ρ^6) (ϖ^(1,1,1,1)) + (1+ 2ρ^2 + ρ^4) 𝔞_0 - (1+ρ^2) 𝔟_0 + 𝔠_0 ,
𝔥_1 = - ( 1 + ρ^6 ) (U' H_τ_1') + ( 1 + 2ρ^2 + ρ^4 ) 𝔞_1 - ( 1 + ρ^2 ) 𝔟_1 + 𝔠_1 ,
𝔥_2 = ( 1 + 2ρ^2 + ρ^4 ) 𝔞_2 - ( 1 + ρ^2 ) 𝔟_2 + 𝔠_2
where the central elements ρ^2k distribute over Hecke operators as before.
The particular choice of τ_1, τ_2 is motivated by the structure of the groups H' ∩τ_i K τ_i^-1, which is convenient for decomposing double cosets involving these groups (see <ref>). Note that the τ_i are very closely related to the “Schröder representatives" for the double coset space H'\ G/ K given in <cit.>.
§ RESTRICTION TO GL2 × GL2 × GL2
In this section, we record the twisted restrictions of 𝔥_0, 𝔥_1, 𝔥_2 with respect to H. For i = 0, 1, 2 and h ∈ H', we let ℛ_i(h) denote the double coset space U \ U' h H_τ_i'/ H_τ_i'. The convention used in <ref> for listing elements of double coset spaces will also be applied to ℛ_i(h).
§.§ H-restrictions of 𝔥_0
To write the restrictions of 𝔥_0, we introduce the following elements of H '= _2(F) ×_F^×GSp_4(F):
ϱ_0 = 1_H', ϱ_1 = ( 0.9( ϖ
1 ) , ( [ ϖ 1; ϖ 1; 1; 1 ] ) ) , ϱ _2 =
( 0.9( ϖ
ϖ ) ,
( [ ϖ^2 1; ϖ^2 1; 1; 1 ] ) )
which we also view as elements of G via ι'.
With notations and conventions as above, we have
* ℛ_0(ϖ^(1,1,1,1)) = {ϖ^(1,1,1,1), ϱ_1},
* ℛ_0(ϖ^(2,2,2,1))= {ϖ^(2,2,2,1), ϖ^(2,2,1,2), ϖ^(1,1,1,0)ϱ_1},
* ℛ_0(ϖ^(2,1,2,2)) = {ϖ^(2,1,2,2), ϖ^(1,0,1,1)ϱ _1, ϱ_2},
* ℛ_0(ϖ^(3,2,3,2) ) = {ϖ^(3,2,3,2), ϖ^(3,2,2,3), ϖ^(2,1,2,1)ϱ_1, ϖ^(2,1,1,2)ϱ_1, ϖ^(2,1,2,0)ϱ_1, ϖ^(1,1,0,1)ϱ_2},
* ℛ_0(ϖ^(4,2,4,2) ) =
{ϖ^(4,2,4,2), ϖ^(4,2,2,4), ϖ^(3,1,3,1)ϱ_1, ϖ^(3,1,1,3)ϱ_1, ϖ^(2,1,2,0)ϱ_2}.
Moreover H ϱ_i U' ∈ H \ H' / U' are pairwise distinct for i = 0 , 1 , 2.
A proof of this is given in <ref>.
By Lemma <ref>, the representatives of ℛ_0(ϖ^λ) depend only on those for U_2\ U_2' ϖ^_2(λ) U_2'/U_2'. Then one easily obtains the following from Proposition <ref>.
We have
* ℛ_0(ϖ^(2,2,1,1) ) = {ϖ^(2,2,1,1)} ,
* ℛ_0(ϖ^(2,1,2,1)) = {ϖ^(2,1,2,1), ϖ^(2,1,1,2), ϖ^(1,0,1,0)ϱ_1} ,
* ℛ_0(ϖ^(3,3,2,2)) = {ϖ^(3,3,2,2), ϖ^(2,2,1,1)ϱ_1} ,
* ℛ_0(ϖ^(4,3,3,3)) = {ϖ^(4,3,3,3), ϖ^(3,2,2,2)ϱ_1, ϖ^(2,2,1,1)ϱ_2} ,
* ℛ_0(ϖ^(4,4,2,2)) = {ϖ^(4,4,2,2)} .
The last two results describe the U-orbits of all the double coset spaces arising from (<ref>) up to translation by the central element ρ^2.
This implies the next claim.
H \ H ·(𝔥_0) / U' = {ϱ _0, ϱ _1, ϱ _2}.
For i = 0, 1, 2, we let 𝔞_ϱ_i , 𝔟_ϱ_i , 𝔠_ϱ_i , 𝔥_ϱ_i∈𝒞_(U \ H / H_ϱ_i) denote the (H, ϱ_i)-restriction of 𝔞_0, 𝔟_0, 𝔠_0, 𝔥_0 respectively, where, as before, H_ϱ_i denotes H ∩ϱ_i K ϱ_i ^-1. From Proposition <ref> and Corollary <ref>, we find that
𝔞_ϱ_0 =
(Uϖ^(2,2,2,1)U)+(Uϖ^(2,2,1,2)U)
+(Uϖ^(2,1,2,2)U) + 2 (U ϖ^(2,2,1,1)U) + 2 (U ϖ^(2,1,2,1) U )
+ 2 (U ϖ^(2,1,1,2) U ) +
4 (U ϖ^(2,1,1,1) U ) ,
𝔞_ϱ_1 = (U ϖ^(1,1,1,0) H_ϱ_1 ) + (U ϖ^(1,0,1,1) H_ϱ_1) + 2 ( U ϖ^(1,0,1,0) H_ϱ_1 ) ,
𝔞_ϱ_2 = (U H_ϱ_2) ,
𝔟_ϱ_0 = (U ϖ^(3,3,2,2) U ) + (U ϖ^(3,2,3,2) U ) + (U ϖ^(3,2,2,3) U ) + 4 ( U ϖ^(3,2,2,2) U ) ,
𝔟_ϱ_1 = (U ϖ^(2,2,1,1) H _ϱ_1 ) + ( U ϖ^(2,1,2,1) H_ϱ_1 ) + ( U ϖ^(2,1,1,2) H_ϱ_1 ) + (U ϖ^(2,1,2,0) H_ϱ_1 ) + 4 (U ϖ^(2,1,1,1) H_ϱ_1) ,
𝔟_ϱ_2 = (U ϖ^(1,1,0,1) H_ϱ_2) ,
𝔠_ϱ_0 = ( U ϖ^(4,3,3,3) U ) + (U ϖ^(4,4,2,2) U ) + (U ϖ^(4,2,4,2) U ) + ( U ϖ^(4,2,2,4) U ) ,
𝔠_ϱ_1 = (U ϖ^(3,2,2,2) H_ϱ_1 ) + (U ϖ^(3,1,3,1) H _ϱ_1) + (U ϖ^(3,1,1,3) H _ϱ_1 ) ,
𝔠_ϱ_2 = (U ϖ^(2,2,1,1) H_ϱ_2 ) + (U ϖ^(2,1,2,0) H_ϱ_2) .
From the expression (<ref>), we get
𝔥_ϱ_0 = (1 + ρ^8) (U) - (1+ρ^6) ( U ϖ^(1,1,1,1) U ) + (1+ 2ρ^2 + ρ^4) 𝔞_ϱ_0 - (1+ρ^2) 𝔟_ϱ_0 + 𝔠_ϱ_0 ,
𝔥_ϱ_1 = - ( 1 + ρ^6 ) (U H_ϱ_1) + ( 1 + 2ρ^2 + ρ^4 ) 𝔞_ϱ_1 - ( 1 + ρ^2 ) 𝔟_ϱ_1 + 𝔠_ϱ_1 ,
𝔥_ϱ_2 = ( 1 + 2ρ^2 + ρ^4 ) 𝔞_ϱ_2 - ( 1 + ρ^2 ) 𝔟_ϱ_2 + 𝔠_ϱ_2 .
§.§ H-restrictions of 𝔥_1
We consider the following elements in H ':
σ_0 = 1_H' , σ_1 = w_2 , σ_2 = ϱ_1ϖ^-(1,1,1,1), σ_3 = ϱ_1 .
where ϱ_i are as in (<ref>). For i = 0 , 1, 2 , 3, let ς_i∈ G denote σ_iτ_1. Also let ψ = ( [ 1; 1 1 ] , 1 _ H_2 ) ∈ H.
With notations and conventions as above, we have
* ℛ_1(ϖ^(1,1,1,0)) = {ϖ^(1,1,1,0), ϖ^(1,1,0,1)σ_1, ϖ^(1,1,1,0)σ_2},
* ℛ_1(ϖ^(1,1,0,1))= {ϖ^(1,1,0,1), ϖ^(1,1,1,0)σ_1, ϖ^(1,0,0,0)σ_2, ϖ^(1,1,0,1)σ_2, ϖ^(1,0,1,1)σ_2, ϖ^-(0,1,0,0)σ_3},
* ℛ_1(ϖ^(1,1,0,0))= {ϖ^(1,1,0,0), ϖ^(1,1,0,0)σ_1, ϖ^(1,0,1,0)σ_2},
* ℛ_1(ϖ^(2,2,1,1))= {ϖ^(2,2,1,1), ϖ^(2,2,1,1)σ_1, ϖ^(2,2,1,1)σ_2, ϖ^(2,0,1,1)σ_2},
* ℛ_1(ϖ^(2,1,2,1))= {ϖ^(2,1,2,1), ϖ^(2,1,1,2)σ_1, ϖ^(2,1,2,1)σ_2, ϖ^(2,1,2,0)σ_2, ϖ^(2,1,1,0)σ_2, ϖ^(1,0,1,0)σ_3},
* ℛ_1(ϖ^(2,2,0,1))= {ϖ^(2,2,0,1), ϖ^(2,2,1,0)σ_1, ϖ^(2,0,2,1)σ_2, ϖ^(2,0,2,0)σ_2, ϖ^(2,0,1,0)σ_2, ϖ^(1,-1,1,0)σ_3},
* ℛ_1(ϖ^(2,1,1,2))= {ϖ^(2,1,1,2), ϖ^(2,1,2,1)σ _1, ϖ^(2,1,0,1)σ_2, ϖ^(2,1,1,2)σ_2, ϖ^(1,0,0,1)σ_3},
* ℛ_1(ϖ^(2,1,1,1))= {ϖ^(2,1,1,1), ϖ^(2,1,1,1)σ_1, ϖ^(2,1,1,1)σ_2},
* ℛ_1(ϖ^(3,2,2,2)) = [t]1.0{ϖ^(3,2,2,2), ϖ^(3,2,2,2)σ_1, ϖ^(3,2,1,1)σ_2, ϖ^(3,2,2,2)σ_2, ϖ^(3,1,1,2)σ_2, ϖ^(3,2,1,2)ψσ_2, ϖ^(2,1,1,1)σ_3},
* ℛ_1(ϖ^(3,3,1,1))= {ϖ^(3,3,1,1), ϖ^(3,3,1,1)σ_1, ϖ^(3,0,2,1)σ_2} ,
* ℛ_1(ϖ^(3,2,0,1)) = {ϖ^(3,2,0,1), ϖ^(3,2,1,0)σ_1, ϖ^(3,1,3,1)σ_2, ϖ^(3,1,2,0)σ_2, ϖ^(2,0,2,0)σ_3}.
Moreover Hσ_iH_τ_1' ∈ H \ H' / H_τ_i' are pairwise distinct for i = 0,1,2,3.
A proof of this is provided in
<ref>.
We also need ℛ_1(1) = {σ_0, σ_1, σ_2} but this is obtained from ℛ_1(ϖ^(2,1,1,1)).
The appearance of ψ in one of the representatives listed in ℛ_1(ϖ^(3,2,2,2)) seems unavoidable. Curiously, U ϖ^(3,2,1,2)ψ H_ς_2 is the only double coset arising from ℌ
whose degree vanishes modulo q - 1. See Lemma <ref>.
H \ H ·(𝔥_1 ) / H_τ_1' = {σ_0, σ _1, σ_2, σ _3}.
For i = 0 , 1, 2, 3, let 𝔞_ς_i, 𝔟_ς_i, 𝔠_ς_i, 𝔥_ς_i∈𝒞_(U \ H / H_ς_i) denote the (H,σ_i)-restrictions of
𝔞_1, 𝔟_1, 𝔠_1,
𝔥_1 respectively.
Proposition <ref> implies that
𝔞_ς_0 = (U ϖ^(1,1,1,0) H_ς_0) +
(Uϖ^(1,1,0,1)H_ς_0)+ 2 (Uϖ^(1,1,0,0)H_ς_0),
𝔞_ς_2 = ( U ϖ^(1,1,1,0) H_ς_2) + ( U ϖ^(1,0,0,0) H_ς_2) + (U ϖ^(1,1,0,1) H_ς_2) + ( U ϖ^(1,0,1,1) H_ς_2) + 2( U ϖ^(1,0,1,0) H_ς_2) ,
𝔞_ς_3 = ( U ϖ^-(0,1,0,0) H_ς_3 ) ,
𝔟_ς_0 = (U ϖ^(2,2,1,1) H _ς_0 ) + (U ϖ^(2,1,2,1) H_ς_0) + (Uϖ^(2,2,0,1) H _ς_0) + (U ϖ^(2,1,1,2) H_ς_0) + 4 (Uϖ^(2,1,1,1)H_ς_0) ,
𝔟_ς_2 =
(Uϖ^(2,2,1,1)H_ς_2)+ (Uϖ^(2,0,1,1)H_ς_2)+ (Uϖ^(2,1,2,1)H_ς_2)+
(Uϖ^(2,1,2,0)H_ς_2)+
(Uϖ^(2,1,1,0)H_ς_2) +
(Uϖ^(2,0,2,1)H_ς_2)+(Uϖ^(2,0,2,0)H_ς_2)+
(Uϖ^(2,0,1,0)H_ς_2)+
(Uϖ^(2,1,0,1)H_ς_2)+
(Uϖ^(2,1,1,2)H_ς_2) +
4(Uϖ^(2,1,1,1)H_ς_2),
𝔟_ς_3 = (U ϖ^(1,0,1,0) H_ς_3) + (U ϖ^(1,-1,1,0)H_ς_3 ) + (U ϖ^(1,0,0,1)H_ς_3) ,
𝔠_ς_0 =
(Uϖ^(3,2,2,2)H_ς_0)+
(Uϖ^(3,3,1,1)H_ς_0)+(Uϖ^(3,2,0,1)H_ς_0),
𝔠_ς_2 =
(Uϖ^(3,2,1,1)H_ς_2)+
(Uϖ^(3,2,2,2)H_ς_2)+ (Uϖ^(3,1,1,2)H_ς_2)
+ ( U ϖ^(3,2,1,2)ψ H_ς_2)
+ ( U ϖ^(3,0,2,1)
H_ς_2) +
( U ϖ^(3,1,3,1) H_ς_2) + ( U ϖ^(3,1,2,0) H_ς_2) ,
𝔠_ς_3 = (U ϖ^(2,1,1,1) H_ς_3 ) + (U
ϖ^(2,0,2,0) H_ς_3).
Using expression (<ref>), we get
𝔥_ς_0 = - (1+ρ^6) ( U H_ς_0 ) + (1+ 2ρ^2 + ρ^4) 𝔞_ς_0 - (1+ρ^2) 𝔟_ς_0 + 𝔠_ς_0 ,
𝔥_ς_2 = - ( 1 + ρ^6 ) (U H_ς_2) + ( 1 + 2ρ^2 + ρ^4 ) 𝔞_ς_2 - ( 1 + ρ^2 ) 𝔟_ς_2 + 𝔠_ς_2 ,
𝔥_ς_3 = ( 1 + 2ρ^2 + ρ^4 ) 𝔞_ς_3 - ( 1 + ρ^2 ) 𝔟_ς_3 + 𝔠_ς_3 .
Now observe that each ℛ_1(ϖ^λ) in Proposition <ref> contains a unique representative of the form ϖ^s_2(λ)σ_1. Moreover ς_1 = w_2ς_0 and w_2 normalizes U (and H). So w_2 U ϖ^λ H_ς_0w_2 = U ϖ^ s_2(λ) H_ς_1 for all λ∈Λ. Therefore 𝔥_ς_1 = w_2𝔥_ς_0 w_2
where w_2 distributes over each double coset characteristic function.
§.§ H-restrictions of 𝔥_2
For i = 0, 1, 2, denote θ_i : = σ_i and θ_3 : = ϖ^-(1,1,1,1)σ_3 where σ_0, σ_1, σ_2, σ_3 are as in
(<ref>). For i = 0 ,1, 2, 3, set
ϑ_i = θ_iτ_2∈ G. Additionally for k ∈ []^∘ := [] ∖{ -1 }, we define θ̃_k = (1, η̃_k) ∈ H' where
η̃_k = 0.9[ k 1; k+1 1; -1 k + 1; 1 - k ]∈ H_2'
and set ϑ̃_k = θ̃_kτ_2∈ G. Note that θ̃_0 = w_2 w_3θ_2 w_3 and w_3τ_2 = τ_2w_3t_1 where t_1 = diag(1,1,-1,1,1,-1). So
ϑ̃_0 = θ̃_0τ_2 = w_2 w_3θ_2 w_3τ_2 =
w_2 w_3ϑ_2
w_3 t_1.
We have
* ℛ_2 ( ϖ^(0,0,0,0) ) = { 1, θ_1 , θ_2 , θ̃_k | k ∈ []^∘},
* ℛ_2(ϖ^(3,2,1,2)) =
[t]1.0
{ϖ^(3,2,1,2), ϖ^(3,2,2,1)θ_1, ϖ^(3,2,1,2)θ_2, ϖ^(3,1,2,1)θ_2, ϖ^(3,1,2,2)θ_2, ϖ^(3,1,2,2)θ_3}∪
{ϖ^(3,1, 2, 2)θ̃_0, ϖ^(3,1,1,2)θ̃_0, ϖ^(3,2,1,1)θ̃_k | k ∈ []^∘},
* ℛ_2(ϖ^(4,3,1,2)) = {ϖ^(4,3,1,2), ϖ^(4,3,2,1)θ_1 , ϖ^(4,1,3,2)θ_2 , ϖ^(4,1,2,3)θ̃_0, ϖ^(4,1,3,2)θ_3},
* ℛ_2(ϖ^(4,2,2,3)) = {ϖ^(4,2,2,3), ϖ^(4,2,3,2)θ_1, ϖ^(4,2,2,3)θ_2, ϖ^(4,2,1,2)θ̃_0, ϖ^(4,2,2,3)θ_3}.
The proof of this result is provided in <ref>
H \ H · (𝔥_2) / H_τ_2' = {θ_0, θ_1, θ_2 , θ_3 , θ̃_k | k ∈ []^∘}.
This follows by Lemma <ref> and Proposition <ref>.
For ϑ∈{ϑ_0, ϑ_1, ϑ_2, ϑ_3, ϑ̃_k | k ∈ []^∘}, we let 𝔥_ϑ∈𝒞_(U \ H / H_ϑ ) denote the (H,ϑτ_2^-1)-restriction of 𝔥_2. By the results above,
𝔥_ϑ_0 =
(ρ^2 + 2ρ^4 + ρ^6) (U H_ϑ_0 ) - ( 1 + ρ^2) (U ϖ^(3,2,1,2) H_ϑ_0 ) + ( U ϖ^(4,2,2,3) H_ϑ_0) + ( U ϖ^(4,3,1,2)H_ϑ_0) ,
𝔥_ϑ_2 =
( ρ^2 + 2 ρ^4 + ρ^6 ) ( U H_ϑ_2 ) - ( 1 + ρ^2 )
( ( U ϖ^(3,2,1,2) H _ϑ_2 ) + ( U ϖ^(3,1,2,1) H _ϑ_2 ) + ( U ϖ^(3,1,2,2) H _ϑ_2 )
) +
( U ϖ^(4,2,2,3) H_ϑ_2 ) + ( U ϖ^(4,1,3,2) H _ ϑ_2 ) ,
𝔥_ϑ_3 =
( U ϖ^(4,2,2,3) H _ϑ_3 ) + (U ϖ^(4,1,3,2) H _ϑ_3 ) - ( 1 + ρ^2 ) (U ϖ^(3,1,2,2) H_ϑ_3 ),
𝔥_ϑ̃_k = (ρ^2 + 2ρ^4 + ρ^6 ) ( U H_ϑ̃_k ) - ( 1 + ρ^2 ) ( U ϖ^(3,2,1,1) H_ϑ̃_k)
where k ∈ [] ∖{ 0 , - 1 }. Observe that H_ϑ_1 = w_2 H_ϑ_0 w_2 and that in each set appearing in Proposition <ref>, ϖ ^λ for some λ∈Λ is listed in that set if and only if ϖ^s_2(λ)θ_1 is.
So as in the case of 𝔥_ς_1, we have
𝔥_ϑ_1 = w _2𝔥_ϑ_0 w _2.
Similarly we have H_ϑ_2 = w_2 w_3 H_ϑ̃_0 w_3 w_2 and ϖ^λϑ_2 appears in Proposition <ref> if and only if ϖ^s_2s_3(λ)ϑ̃_0 does. Therefore
𝔥_ϑ̃_0 = w_2 w_3𝔥_ϑ_2 w_3 w_2 .
§ HORIZONTAL NORM RELATIONS
Let X = Mat_2× 1(F) be the F-vector space of size 2 column vectors over F. We view X as a locally compact totally disconnected topological vector space. Define a right action X × H → X, (v⃗, h) ↦ pr_1(h)^-1·v⃗, where the dot denotes matrix multiplication. Let 𝒪 be an integral domain in which ℓ is invertible and let 𝒮_X = 𝒮_X, 𝒪 denote the 𝒪-module of all locally constant compactly supported functions X →𝒪. Then 𝒮_X inherits a smooth left H-action. We define
ϕ = ( [ _F; _F ] ) ∈𝒮_X .
For any compact open subgroup V of H, we let 𝒮_X(V) denote the submodule V-invariant functions. Let Υ_H denote the collection of all compact open subgroups of H and 𝒫(H, Υ_H) denote the category of compact opens (see <cit.>). Then
𝒮_X : 𝒫(H, Υ_H) →𝒪-Mod, V ↦𝒮_X(V)
is a cohomological Mackey functor. Note that ϕ∈𝒮_X(U). For g ∈ G, let H_g = H ∩ gK g^-1 as before and V_g⊂ H_g denote the subgroup of all elements h ∈ H_g such that sim(g) ∈ 1 + ϖ_F. For g ∈ G, we denote by 𝔥_g∈𝒞_(U \ H / H_g ) the (H,g)-restriction ℌ.
For any g ∈ G, 𝔥_g,*(ϕ) lies in the image of the trace map _* : 𝒮_X(V_g ) →𝒮_X(H_g ).
Since 𝔥_η g γ , * = 𝔥_g, *∘ [η]_H_g, H_η g , *,
it suffices to prove the claim for g ∈ H \ H ·(ℌ) / K. By the results of the previous section, a complete system of representatives for this double quotient is the set {ϱ_0, ϱ_1, ϱ_2, ς_0, ς_1, ς_2,
ς_3, ϑ_0, ϑ_1, ϑ_2, ϑ_3, ϑ̃_k | k ∈ []^∘}. By the results established in <ref>,
𝔥_g,*(ϕ) ≡ 0 q - 1
for all g ≠ϑ_3 in this set and 𝔥_ϑ_3, * ( ϕ ) = - ( [ ϖ ^-1_F^×; ϖ^-2_F^× ] ). So it suffices to show that χ : = ( [ ϖ_F^×; _F^× ] ) ∈𝒮_X(H_ϑ_3) is the trace of a function in 𝒮_X(V_ϑ_3).
By <cit.>, it suffices to verify that for all v⃗∈(χ), the stabilizer Stab_H_ϑ_3 (v⃗) of v⃗ in H_ϑ_3 is contained in V_ϑ_3. So let v⃗ = [ x; y ]∈(χ) and h = (h_1, h_2, h_3 ) ∈Stab_H_ϑ_3(v⃗). If we write h_1 = [ a b; c d ], then v⃗· h = v⃗ is equivalent to v⃗· h^-1 = v⃗ and so
( a - 1 ) x + by = 0 ,
cx + ( d - 1 ) y = 0 .
By Lemma <ref>, h_1∈_2(_F) and b ∈ϖ^2_F. Since x ∈ϖ_F^×, it follows that a ∈ 1 + ϖ_F. Similarly
y ∈_F^×, x ∈ϖ_F ^× implies d ∈ 1 + ϖ_F. Thus sim(h) = ad - bc ∈ 1 + ϖ_F and so h ∈ V_ϑ_3.
Now let 𝐆̃ := 𝐆×𝔾_m, G̃ its group of F-points and K̃ its group of 𝒪_F-points. Embed 𝐆 into 𝐆̃ via 1 ×sim and let ι̃ : 𝐇→𝐆̃ denote the embedding (1 ×sim) ∘ι. Fix a c ∈ℤ and define
ℌ̃ = ℌ_spin, c(Frob) ∈𝒞_[q^-1](K̃\G̃ / K̃ )
where Frob = ( ϖ_F^× ). Let L̃ = K × ( 1 + ϖ_F ) ⊂K̃. Let Υ_G̃ denote the collection of all compact open subgroups of G̃ and 𝒫(G̃, Υ_G̃) the associated category.
For any cohomological Mackey
functor M_G̃ : 𝒫(G̃, Υ_G̃) →𝒪-Mod
and
any Mackey pushforward ι̃_* : 𝒮_X→ M_G̃, there exists a class y ∈ M_G̃( L̃ ) such that
ℌ̃_*∘ι̃_U , K̃ , *(ϕ ) = pr_L̃ , K̃ ,*(y ).
By the expression in Proposition <ref>, it is clear that the (G, g)-restriction of ℌ̃ is non-zero only if g ∈ G K̃ and the (G, 1_G̃ )-restriction is ℌ_spin,c(1). The claim is then a consequence of Theorem <ref>, Proposition <ref> and
<cit.>.
§.§ Global relations
We now repurpose our notation for the global setup. Let 𝐆, 𝐆̃ = 𝐆×𝔾_m, 𝐇 be as before. Fix a set S of rational primes. By ℚ_S we mean the product ∏_ℓ∈ S ℚ_ℓ and by 𝔸_f^S we mean the group of finite rational adeles away from primes in S. Let G, G̃, H denote the groups of ℚ_S·𝔸_f^S-points of 𝐆, 𝐆̃, 𝐇,
respectively. Let Υ_G̃ denote the collection of all neat compact open subgroups of G̃ and Υ_H denote the collection of compact open subgroups of the form H ∩L̃ where L̃∈Υ_G̃. Let 𝒫(H, Υ_H), 𝒫(G̃, Υ_G̃) denote the corresponding categories of compact opens. These satisfy axioms (T1)-(T3) of <cit.>.
Next fix a neat compact open subgroup K ⊂ G such that if ℓ∉ S is a rational prime, K = K^ℓ K_ℓ where K_ℓ = G(_ℓ) as before and K^ℓ = K / K_ℓ⊂(_f^ℓ ) is the group at primes away from ℓ. Let 𝒩 denote the set of all square free products of primes away from S where the empty product means 1. For each n ∈𝒩, let
K[n] = K ×∏_ℓ∤ n_ℓ^×∏_ℓ| n ( 1+ ℓ_ℓ) ∈Υ_G̃ .
We also denote K[1] as K̃.
Let X = Mat_2 × 1 (_f) ∖{0⃗} and let H act on X in a manner analogous to the local situation. Let 𝒪 be a characteristic zero integral domain such that ℓ∈𝒪^× for all ℓ∉ S. Let 𝒮_X = 𝒮_X, 𝒪 denote the set
of all
functions χ : X →𝒪 such that χ =
f_S⊗χ^S where f_S is a fixed locally constant compactly supported function on Mat_2 × 1 (_S) that is invariant under (_S ) and χ ^S is any locally constant compactly supported function on Mat_2× 1(_f^S). Then
𝒮_X : 𝒫(H, Υ_H) →𝒪-Mod , V ↦𝒮_X(V)
is a CoMack functor with Galois descent. Let U = H ∩K̃ and ϕ∈𝒮_X(U) be the function
f_S⊗(^S) where ^S = ∏_ℓ∉ S _ℓ denotes integral adeles away from S. Note that ϕ^S is the restricted tensor product of ⊗_ℓ∉ S ϕ_ℓ where ϕ_ℓ = ( [ _ℓ; _ℓ ] ). Fix an integer c and for each ℓ∈ S, let
ℌ̃_ℓ = ℌ_spin,c,ℓ(Frob_ℓ) ⊗( K̃^ℓ) ∈𝒞_[ℓ^-1] ( K̃\G̃ / K̃)
where Frob_ℓ = ( ℓ_ℓ ^ × ) is as before.
For any cohomological Mackey functor M_G̃ : 𝒫(G̃, Υ_G̃) →𝒪-Mod
and any Mackey pushforward ι̃_* : 𝒮_X→ M_G̃, there exists a collection of classes y_n∈ M_G̃(K[n]) indexed by integers n ∈𝒩 such that y_1 = ι̃ _U,K̃,*(ϕ) and
ℌ̃_*
( y_n ) = pr_K[nℓ], K [n],*(y_nℓ)
for all n , ℓ∈𝒩 such that ℓ is a prime and ℓ∤ n.
Combine Theorem <ref>, <cit.> and the results referred to in Corollary <ref>.
PART:
Proofs
§ DOUBLE COSETS OF GSP6
Throughout, we maintain the notations introduced in Part 1.
§.§ Desiderata
The embedding ι' : → identifies the set Φ_H' of roots of 𝐇' with
{±α_0 , ±α_2 , ±α_3 , ± ( α_2 + α_3) , ± ( 2 α_2 + α_3 ) }⊂Φ.
The Weyl group W' of H' is then the subgroup of W generated by s_0 , s_2 , s_3 and W' ≅ S_2× ( (ℤ/ 2 )^2⋊ S_2 ). We let Φ_H'^+ = Φ^+∩Φ_H' be the set of positive roots. The base is then Δ_H' = {α_0, α_2, α_3}
and the corresponding Iwahori subgroup I' of H' equals the intersection I ∩ H'.
Since the normalizer N_H' (A) of A in H' equals the intersection N_G(A) ∩ H', the Iwahori Weyl group
W_I' = N_H'(A)/A^∘ is also identified with a subgroup of W_I. We let W_aff' denote the affine Weyl group of H '.
For notational convenience in referring to the roots corresponding to the projection _2' = GSp_4 of ', we will denote
β_0 = 2 e_2 - e_0 , β_1 : = e_2 - e_3 , β_2 = 2 e_3 - e_0,
and let r_0, r_1, r_2 denote the reflections associated with β_0, β_1 , β_2 respectively. In this notation, the generators of W_aff' are given by S_aff' = { s_0 , t ( f_1 ) s_0, r_1, r_2 , t( f_2 ) r_0} .
The group W_I' equals the semidirect product of W_aff' with the cyclic subgroup Ω_H'⊂ W_I generated by ω_H' := t(-f_0) s_0 r_2 r _ 1 r_2∈ W_I. The action of ω_H' on S_aff' is given by s_0↔ t(f_1) s_0, r_2↔ t(f_2) r_0 and fixing r_1. It can be visualized as the order 2 automorphism of the extended Coxeter-Dynkin diagram
[Extended Coxeter–Dynkin diagrams of types A_1 and C_2, with nodes labelled t(f_1) s_0, s_0 and t(f_2) r_0, r_1, r_2 respectively.]
A representative element in N_H'(A) for ω_H' is given by (ρ_1, ρ_2) ∈_2(F) ×_F^×GSp_4(F) where
ρ_1 =
[ 1; ϖ ] , ρ_2 =
[ 1; 1 ; ϖ ; ϖ ].
Note that ρ normalizes I'.
§.§ Intersections with H'
In this subsection, we record some results on the structure of the twisted intersections H' ∩τ _i K τ_i^-1.
If h ∈ H ', we will often write h = ( [ a b ; a_1 a_2 b_1 b_2; a_3 a_4 b_3 b_4; c d; c_1 c_2 d_1 d_2; c_3 c_4 d_3 d_4 ] ) or h = ( ( a b
c d ) , ( [ a_1 a_2 b_1 b_2; a_3 a_4 b_3 b_4; c_1 c_2 d_1 d_2; c_3 c_4 d_3 d_4 ] ) ) .
H ' K, H ' τ_1 K and H ' τ_2 K
are pairwise disjoint.
If H'τ_i K = H'τ_j K for distinct i and j, then τ_i^-1 h τ_j∈ K for some h ∈ H'.
Requiring the entries of k : = τ_i ^-1 h τ _j to be in _F, one easily deduces that ( k ) ∈ϖ_F^×, a contradiction. For instance,
τ_1^-1 h τ_2 = [ a * * * a - d_1ϖ^2 *; - c * * * * *; * * * * *; c ϖ * cϖ ; * * * d_1ϖ *; * * * * * ]
where a * denotes an expression in the matrix entries of h and the empty spaces are zeros. From the entries displayed above, we see that a , c ∈ϖ_F and so the first column is an integral multiple of ϖ.
This also follows by an analogue of Schröder's decomposition proved in <cit.>.
We let W^∘⊂ W_I' be the Coxeter subgroup generated by T := S_aff' \{ s_0 , r_1} and U^∘ = I' W^∘ I' the corresponding maximal parahoric subgroup of H'. We let λ_∘ = (1, 1, 1, 1) and τ_∘ = ϖ^-λ_∘τ_1.
As usual, we denote H_τ_∘' := H' ∩τ_∘ K τ_∘ ^-1. Then H_τ_1 ' is the conjugate of H_τ_∘ ' by ϖ^λ_∘. Note that
U^∘ is exactly the subgroup of H' whose elements lie in
[ _F ϖ^-1_F; ϖ_F _F ]×[ _F _F ϖ^-1_F _F; ϖ_F _F _F _F; ϖ_F ϖ_F _F ϖ_F; ϖ_F _F _F _F ]
and whose similitude is in _F^×.
H_τ_∘' is a subgroup of U^∘ and '_2( H_τ_∘' ) = '_2( U^∘ ).
Let h ∈ H _ τ_∘ ' and write h as in Notation <ref>. Then
τ_∘^-1 h τ_∘ = [ a - c_1ϖ -c_2ϖ b - c_1/ϖ^2 a-d_1ϖ - d_2ϖ; - cϖ a_1 a_2 a_1 - d ϖ b_1 - cϖ^2 b_2; a_3 a_4 a_3/ϖ b_3 b_4; c d cϖ ; c_1 c_2 c_1ϖ d_1 d_2; c_3 c_4 c_3 ϖ d_3 d_4 ]∈ K
From the matrix above, one sees that h satisfies all the conditions that are satisfied by elements of U^∘, e.g., c ∈ϖ_F and b ∈ϖ^-1_F and (h) = (τ_∘ h τ_∘^-1 ) ∈(K) ⊂_F^×. Therefore _τ_∘⊂ U^∘. In particular, _2'( _τ_∘ ) ⊆_2 ' ( U^∘ ). To see the reverse inclusion, say h = ( h_1 , h_2 ) ∈ U^∘
and again write h as in Notation <ref>. Clearly,
a_1 d_1 - b_1 c_1∈_F.
Since
sim(h_2) = a_1 d_1 - b_1 c_1 + a_3d_3 - b_3 c_3
∈ a_1 d_1 - b_1 c_1 + ϖ_F ,
we may find a', d' ∈_F, b ' ∈ϖ^-1_F and c ' ∈ϖ_F such that a' - d_1/ϖ, a_1 - d' /ϖ, b' - c_1/ϖ^2, b_1 - c'/ϖ^2 are all integral and a' d' - b ' c ' = sim(h_2). Then h ' = ( ( [ a' b'; c' d' ] ) , h_2 ) ∈_τ_∘ and _2'(h') = h_2.
We let U^⊂ U' denote the compact open subgroup of all elements whose reduction modulo ϖ equals (𝐇()).
_τ_2 is a subgroup of U' and _2(_τ_2) = _2 ( U^ ).
If we write h ∈ H_τ_2' as in <ref>, then
τ_2^-1 h τ_2 = [ a -c_1 -c_2ϖ b -c_1ϖ^2 a-d_1/ϖ^2 - d_2ϖ; - c a_1 a_2ϖ a_1 - d ϖ ^2 b_1 - c ϖ^2 b_2ϖ; * a_4 a_3ϖ b_3ϖ b_4; * d c ; * * * d_1 *; * c_4 c_3 ϖ d_3ϖ d_4 ]∈ K
From the matrix above, one sees that all the entries of h are integral. Since H_τ_2 is compact, sim(h) ∈_F^× and so h ∈ U'. Similarly, it is easy to see from the matrix above that _2' ( H_τ_2 ) ⊂_2'( U^ ). For the reverse inclusion, say y ∈_2( U^) is given. Choose any h ∈ H ' such that '_2(h) = y and write h as in Notation <ref>. Then
sim (y) = a_1 d_1 - b_1c_1 + a_3 d_3 - b_3 c_3
∈ a_1 d_1 - b_1 c_1 + ϖ^2_F
We may therefore find a', b', c' , d' ∈_F which are congruent to d_1, c_1, b_1, a_1 modulo ϖ^2 such that a'd' - b'c' = sim ( y). Then h ' = ( ( [ a' b'; c ' d ' ] ) , y ) ∈_τ_2 and _2 ' ( h') = y.
Let _τ : _2→ be the embedding given by the embedding
[ a b; c d ]↦ ( ( a b
c d ) , 1.1( [ ; d c; 1; b a; ad - bc ] ) ) .
We let 𝒳_τ := _τ(_2(_F) ) and _τ∈𝒳_τ denote _τ ( [ 1; 1 ] ).
For i = 0, 1, 2, 𝒳_τ is a subgroup of _τ_i. In particular, '_1( _τ_i) = _2(_F ).
The first claim is easily verified by checking that τ_i^-1𝒳τ_i⊆ K for each i. For the second, note that _1'(H_τ_i') are compact open subgroups of H_1 = _2(F) that contains _2(_F ) and U_1 = _2(_F) is a maximal compact open subgroup of H_1.
If h ∈ H_τ_i, a_1 - d , a - d_1 , b_1 - c, b - c_1∈ϖ^i_F.
Follows by matrix computations above.
§.§ Cartan decompositions
Throughout this article, we let ϖ^Λ denote the subset {ϖ^λ | λ∈Λ} of A.
For i = 0, 1, 2, define
p_i : Λ → U ' ϖ^Λτ_i K, λ↦ U ' ϖ^λτ_i K .
By <cit.>, we have an identification U ' ϖ^Λ H_τ_i ' ≅ U' ϖ^Λτ_i K given by U ' ϖ^λ H_τ_i' ↦ U ' ϖ^λτ_i K.
So we may equivalently view p_i as a map to U ' ϖ^Λ H_τ_i'. For i = 0, the Cartan decomposition for H' implies the following.
p_0 induces a bijection W' \Λ≅ U' ϖ^Λ K.
Observe that _τ∈ N_H'(A^∘ ) is a lift of the element s_0 r_0∈ W '.
Moreover
[ 1 ; 1 ; 0 ϖ; 1 ; 1; -1ϖ 0 ]∈ H_τ_1' , [ 1 ; 1 ; 0 1; 1 ; 1; -1 0 ]∈ H_τ_2' .
Thus p_1 factors through ⟨ s_0r_0, t(-f_3) r_2⟩\Λ and p_2 factor through ⟨ s_0r_0,
r_2⟩\Λ.
For i = 1, 2, p_i ( λ) is distinct from p_i ( s_0λ ) if λ∉{ s_0λ , r_0λ}.
Write λ = ( p_0, p_1, p_2, p_3 ). Since p_i factors through ⟨ s_0 r_0⟩\Λ, we may assume, by replacing λ with s_0(λ) etc., that 2 p_1≥ p_0 and 2 p_2≥ p_0. Then we need to show that U' ϖ^λ H_τ_i' ≠ U' ϖ^s_0(λ) H_τ_i' whenever 2 p_1 > p_0 and 2p_2 > p_0. Assume on the contrary that there exists an h ∈ U' such that γ : = ϖ^-λ h ϖ^s_0(λ)∈
H_τ_i. Write h = (h_1, h_2 ) as in Notation <ref>. Then γ = (γ_1 , γ_2 ) satisfies
γ_1 = [ a ϖ^p_0 - 2p_1 b; c d ϖ^2p_1 - p_0 ] , γ _2 = [ * * * *; * * *; c_1ϖ^2p_2 - p_0 * * *; * * * ] .
Lemma <ref> implies that a ϖ ^ p_0 - 2p_1∈_F and Corollary <ref> implies that b - c_1ϖ^2p_2 - p_0∈ϖ^i_F. Thus a , b ∈ϖ_F. Since c, d ∈_F as h ∈ U', we see that sim(h) = (h_1) = ad - bc ∈ϖ_F, a contradiction.
Recall that λ_∘∈Λ denotes the cocharacter (1,1,1,1).
If the W ^ ∘-orbits of λ + λ_∘ and μ + λ_∘ are distinct, p_1(λ ) is distinct from p_1( μ ).
Since W^∘ is a Coxeter subgroup of the Iwahori Weyl group, there is a bijection
W^∘\ W_I' / W' U ^ ∘ϖ^Λ U'
W ^ ∘ w W ' ↦ U ^∘ w U ' .
Recall that we have an isomorphism W_I'≃Λ⋊ W ' which sends ϖ^λ∈ W_I' to (t(-λ), 1). Via this isomorphism, we obtain a bijection W^∘\Λ≅ U ^∘ϖ^Λ U' given by W ^∘λ↦ U ^∘ϖ^-λ U ' and hence a bijection
W^∘\Λ≅ U ' ϖ^Λ U ^∘ , W^∘λ↦ U 'ϖ^λ U ^∘.
Now H_τ_∘' ⊂ U^∘ by Lemma <ref>. So (the inverse of) the bijection above induces a well-defined surjection U ' ϖ^Λ H_τ_∘ ' → U' ϖ^Λ U^∘≅ W^∘\Λ. Thus if λ_1, μ_1∈Λ are in different W^∘-orbits, U ' ϖ^λ_1 H_τ_∘' is distinct from U ' ϖ^μ_1 H_τ_∘ '. Now apply this to λ_1 : = λ + λ_∘ and μ_1 : = μ + λ_∘ and use that H _τ_1 ' = ϖ^λ_∘ H_τ_∘ 'ϖ^-λ_∘.
If the W'-orbits of λ, μ are distinct, p_2(λ) is distinct from p_2(μ).
This follows similarly since H_τ_2 ' ⊂ U'.
We denote W_τ_1 ' = ⟨ s_0 r_0 , t(-f_3) r_2⟩⊂ W_I' and W_τ_2' = ⟨ s_0 r_0 , r_2⟩. We also denote W' by W_τ_0' for consistency.
For i = 0, 1, 2, the maps p_i induce bijections W_τ_i' \Λ≅ U ' ϖ^Λτ_i K.
Follows from the results above.
§.§ Schubert cells
The decompositions of various double cosets is accomplished by a recipe proved in <cit.>. Below, we provide its formulation in the special case of G = GSp_6(F).
Recall that I denotes the Iwahori subgroup of G contained in U whose reduction modulo ϖ lies in the Borel of () determined by Δ. For i = 0 ,1 ,2 ,3, let
x_i : 𝔾_a→𝐆 denote the root group maps
x_0 : u ↦ ( [ 1 ; 1 ; 1 ; ϖ u 1 ; 1 ; 1 ] ),
x_1 : u ↦[ 1 u ; 1 ; 1 ; 1 ; - u 1 ; 1 ], x_2 : u ↦[ 1 ; 1 u ; 1 ; 1 ; 1 ; -u 1 ],
x_3 : u ↦[ 1 ; 1 ; 1 u; 1 ; 1 ; 1 ]
and let g_i : [ ] → G be the maps κ↦ x_i(κ ) w_i. Then Iw_i I/ I = _κ∈ [] g_i(κ) I for i = 0, 1, 2, 3.
For w ∈ W_I, choose a reduced word decomposition w= s_w,1 s_w,2⋯ s_w, ℓ(w) ρ_w where s_w,i∈ S_aff, ρ_w∈Ω and define
𝒳_w : [ ] ^ ℓ(w) → G
(κ_1 , …, κ_ℓ(w) ) ↦ g_s_w,1 ( κ_1 ) ⋯ g_s_w, ℓ(w) ( κ_ℓ (w) ) ρ_w
Here, we have suppressed the dependence on the choice of the reduced word decomposition in light of the following result,
which is a consequence of the braid relations in Iwahori Hecke algebras.
I w I = _ κ⃗∈ []^ℓ(w)𝒳_w() I. If w has minimal possible length in w W, then I w K = _κ⃗∈ []^ℓ(w)𝒳_w(κ⃗)K.
Thus the image of 𝒳_w modulo I is independent of the choice of decomposition and we have | im ( 𝒳_w)I/I | = q ^ ℓ(w). Moreover, the same facts holds with right K-cosets if w has the aforementioned minimal length property. For such w, ℓ(w) = ℓ_min( t(-λ_w ) ) where λ_w∈Λ is the unique cocharacter such that w K = ϖ^λ_w K. We refer to the image of 𝒳_w as a Schubert cell since these images are reminiscent of the Schubert cells that appear in the stratification of the classical Grassmannians.
Now given a λ∈Λ^+, a set of representatives for U' \ K ϖ^λ K / K can be obtained by studying U '-orbits on a decomposition for K ϖ^λ K / K. Let W^λ denote the stabilizer of λ in W. The next result shows that the study of such orbits amounts to studying U'-orbits on certain Schubert cells.
There exists a unique w = w_λ∈ W_I of minimal possible length such that K ϖ^λ K = K w K. If [W / W^λ] denotes the set of minimal length representatives in W for W / W^λ, then
K ϖ^λ K = _τ_κ⃗∈ []^ℓ(τ w )𝒳_τ w ( κ⃗ ) K .
Moreover, ℓ ( τ w ) = ℓ (τ) + ℓ(w) for all τ∈ [ W / W^λ ].
In what follows, we will write these Schubert cells for various words in W_I. Note W / W^λ is identified with the orbit W λ of λ. The set of possible reduced words decompositions for τ∈ [W / W_λ] can be visualized by a Weyl orbit diagram. This is the Hasse diagram on the subset [ W / W^λ ] ⊂ W under the weak left Bruhat order. Via the bijection [W/ W^λ ] ≃ Wλ, the nodes of this diagram can be viewed as elements of W λ and its edges are labelled by one of the simple reflections in Δ = { s_1 , s_2 , s_3}. The unique minimal element of this diagram is λ^opp (the unique anti-dominant element in Wλ) and the unique maximal element in this diagram is λ.
Let λ = (2,2,1,1). Then λ ^ opp = (2,0,1,1) and the Weyl orbit diagram is
(2,0,1,1) →^s_1 (2,1,0,1) →^s_2 (2,1,1,0) →^s_3 (2,1,1,2) →^s_2 (2,1,2,1) →^s_1 (2,2,1,1)
By Lemma <ref>, we have w_λ = w_0ρ^2. So the decomposition of K ϖ^λ K / K can be given by six Schubert cells, corresponding to the reduced words
w_0ρ^2, w_1 w_0ρ^2 , w_2 w_1 w_0ρ^2, w_3 w_2 w_1 w_0ρ^2, w_2 w_3 w_2 w_1 w_0ρ^2 , w_1 w_2 w_3 w_2 w_1 w_0ρ^2
which are obtained by “going down" the Weyl orbit diagram. Each cell down this diagram can be obtained from one preceding it by applying two elementary row operations, one for the reflection and one for the root group map. We also apply an optional column operation to “match" the diagonal with the value of the cocharacter at ϖ at each node (for aesthetic reasons). For instance, let ε_0 = w_0ρ^2 and ε_1 = w_1ε_1. We have
im( 𝒳_ε_0) K / K = 0.8*([ 1 ; ϖ ; ϖ ; x ϖ ϖ^2 ; ϖ ; ϖ ]) K
x ∈ []
im( 𝒳_ε_1)K/ K =
0.8*([ ϖ a ; 1 ; ϖ ; ϖ ; x ϖ - a ϖ ϖ^2 ; ϖ ]) K
a , x ∈ []
.
Note that for ε = w_2 w_3 w_2 w_1 w_0ρ^2, our recipe gives
im(𝒳_ε) K / K
=
0.8*([ ϖ a ; ϖ^2 c_1ϖ a ϖ z + cc_1 + ϖ x c ϖ; ϖ ; ϖ ; 1 ; - c_1 ϖ ]) K
a , c, c_1 , x , z ∈ []
However, we can replace z + cc_1 + ϖ x with a variable y running over [_2], since for a fixed value of c, c_1 and a, the expression z + c c_1 + ϖ x runs over such a set of representatives of _F /ϖ^2_F and a column operation between fifth and second columns allows us to choose any such set of representatives. In what follows, such replacements will be made without further comment.
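As a quick plausibility check on the diagram in this example (again illustrative and not part of the argument), one can enumerate the orbit W(2,2,1,1) directly, assuming the standard description of the Weyl group action on Λ — permutations of a_1, a_2, a_3 together with the flips a_i ↦ a_0 - a_i — which is consistent with the edge labels above. The following plain Python (or Sage) snippet recovers exactly the six nodes of the diagram:
\begin{verbatim}
# Enumerate the Weyl orbit of lambda = (2,2,1,1): permute (a_1,a_2,a_3)
# and apply any subset of the flips a_i -> a_0 - a_i.
from itertools import permutations, product

a0, coords = 2, (2, 1, 1)
orbit = set()
for perm in permutations(coords):
    for flips in product([False, True], repeat=3):
        orbit.add((a0,) + tuple(a0 - c if f else c
                                for c, f in zip(perm, flips)))
print(sorted(orbit))
# [(2,0,1,1), (2,1,0,1), (2,1,1,0), (2,1,1,2), (2,1,2,1), (2,2,1,1)]
\end{verbatim}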
Convention. To save space, we will often write the descriptors of parameters below the Schubert cells rather than within the set. We will also write 𝒳_ε for the Schubert cell where we really mean im(𝒳_ε)K/K and omit writing K next to the matrices. When drawing Weyl orbit diagrams, we remove all the labels of the nodes as they can be read off by following the labels on the edges.
That the listed representatives are distinct follows by Lemma <ref> and Lemma <ref>. The goal therefore is to show that the Schubert cells reduce to the claimed representatives in each case. For each of the words w, we will draw the Weyl orbit diagram beginning in the anti-dominant cocharacter λ_w associated with w. In these diagrams, we pick the first vertex and the vertices that only have one incoming arrow labelled s_1 (all of which we mark on the diagrams) and study the U'-orbits on Schubert cells corresponding to these vertices. This suffices since the orbits of U ' on the remaining cells are contained in these by the recursive nature of the cell maps.
We list all of the relevant cells and record all of our conclusions. However since the reduction steps involved are just elementary row and column operations[row operations coming from _2(_F) ×__F^×GSp_4(_F) and column operations coming from GSp_6(_F)], we only provide detailed justifications for one cell in each case, and leave the remaining for the reader to verify (all of which are completely straightforward).
∙ w = ρ. Here λ_w = (1,0,0,0) and the Weyl orbit diagram is as follows.
[rd, "s_1" ]
[r, "s_3"] [r, "s_2"] [ru, "s_3"] [rd, "s_1"' , "∘" marking ] [r, "s_2"] [r, "s_3"]
[ru, "s_3"']
Thus there are two cells of interest, corresponding to the words ε_0 = ρ and ε_1 = w_1 w_2 w_3ρ. The cell 𝒳_ρ obviously reduces to ϖ^(1,1,1,1). As for ε_1, we have
𝒳_ε_1 = 0.8*([ ϖ a c z ; 1 ; 1 ; 1 ; -a ϖ; - c ϖ ]) a, c, z ∈ [ ]
We can eliminate z via a row operation. Then we conjugate by the reflections w_3 and v_2 = w_2 w_3 w_2 to make the diagonal ϖ^(1,1,1,1), which puts the entries a, c in the top right 3 × 3 block. Conjugation by w_1 switches a, c and one executes Euclidean division (using row/column operations) to make one of a or c equal to zero. Conjugating by an element of A^∘ if necessary, we get ϖ ^(1,1,1,1) or τ_1 as possible representatives from this cell.
∙ w = w_0ρ^2. The Weyl orbit diagram of λ_w = (2,0,1,1) is
∙ →^s_1 ∙ →^s_2 ∙ →^s_3 ∙ →^s_2 ∙ →^s_1 ∙   (the two s_1-edges carry the ∘ marking)
There are three cells of interest corresponding to ε_0 = w_0ρ^2, ε_1 = w_1ε_0 and ε_2 = w_1 w_2 w_3 w_2ε_1. The cells 𝒳_ε_0, 𝒳_ε_1 were recorded in Example <ref> and
𝒳_ε_2 = 0.8*([ ϖ ^2 a_1ϖ c_1ϖ z + ϖ x a ϖ c ϖ; ϖ a ; ϖ c ; 1 ; -a_1 ϖ ; -c_1 ϖ ]) a , a_1 , c , c_1,
x, z ∈ [] .
We claim that the U '-orbits on
* 𝒳_ε_0 are represented by ϖ^(2,2,1,1),
* 𝒳_ε_1
are represented by ϖ ^(2,1,2,1), ϖ^(1,1,0,0)τ_1,
* 𝒳_ε_2 are represented by ϖ^(2,2,1,1) , ϖ^(1,1,0,0) τ_1.
We record our steps for reducing 𝒳_ε_2. Eliminate the entry z + ϖ x using a row operation. Conjugation by w_3∈ U (resp., w_2w_3w_2∈ U ') switches a_1 , a (resp., c_1 , c) and keeps the diagonal ϖ^(2,2,1,1). Using row/column operations, we may make one a, a_1 (resp., c , c_1) zero while still keeping the diagonal ϖ^(2,2,1,1). Without loss of generality, assume a_1 , c_1 are zero. Conjugation by w_2∈ U ' switches a, c and we may again apply row-column operations to make one of a, c zero, say c. Normalizing by an appropriate diagonal matrix in A^∘, we get the representatives ϖ^(2,2,1,1) or ϖ^(1,1,0,0)τ_1 depending on whether a = 0 or not.
∙ w = υ_1ρ^2. We have λ_w = (2,0,0,1) and the Weyl orbit diagram is
[r, "s_1", "∘" marking ] [rd, "s_2"]
[r, "s_2"] [r, "s_3"] [rd, "s_1"', "∘" marking ] [rd, "s_1" ] [ru, "s_2"] [r, "s_3"] [r, "s_2"]
[r, "s_3"'] [r, "s_2"'] [ru, "s_1"] [r, "s_3"'] [ru, "s_1"']
So we need to analyze the U'-orbits on the Schubert cells corresponding to the words
ε_0 = w, ε_1 = w_1 w_2 w and ε_2 = w_1 w_2 w_3 w_2 w. The cells corresponding to these words are
𝒳_ε_0 = 0.8*([ 1 ; 1 ; ϖ ; x_1ϖ a ϖ ϖ^2 ; a ϖ - x ϖ ϖ^2 ; ϖ ]), 𝒳_ε_1 = 0.8*([ ϖ a_1 c ; 1 ; 1 ; ϖ ; x_1ϖ a ϖ -a_1ϖ ϖ ^ 2 ; a ϖ - x ϖ - c ϖ ϖ ^2 ])
𝒳_ε_2 = 0.8*([ ϖ^2 a_1 +a ϖ c_1 ϖ z + ϖ x c ϖ; 1 ; ϖ c ; 1 ; x_1 ϖ -(a_1 + a ϖ) ϖ^2 ; -c_1 ϖ ])
where a, a_1, c , c_1 , x , x_1, z ∈ [].
We claim that the U '-orbits on
* 𝒳_ε_0 are given by ϖ^(2,2,2,1), ϖ^(1,1,1,0)τ_1,
* 𝒳_ε_1 are given by ϖ^(2,1,2,2), ϖ^(1,1,0,1)τ_1,
* 𝒳_ε_2 are given by ϖ^(2,2,2,1) , ϖ^(1,1,1,0)τ_1 , ϖ^(2,1,1,1)τ_2.
We record our analysis for 𝒳_ε_2.
Begin by eliminating the entries z + ϖ x and x_1ϖ using row operations. Conjugation by w_3∈ U ' switches c_1, c while keeping the diagonal ϖ^(2,2,0,1) and we can apply row-column operations to make either c or c_1 zero, say c_1. Conjugating by r_0 = w_2 w_3 w_2∈ U ', we arrive at
0.8([ ϖ^2 a_1 +a ϖ c ϖ; ϖ^2 a_1 +a ϖ ; ϖ c ; 1 ; 1 ; ϖ ])
for some a, a_1, c ∈ []. We now divide into two cases. Suppose first that c is zero. Then (a_1 + a ϖ ) is in _F^×, ϖ_F^× or is equal to zero, and we can normalize by conjugating with an element of A^∘ to get the representatives ϖ^(2,2,2,1), ϖ ^(1,1,1,0)τ_1, ϖ^(2,1,1,1)τ_2. Now suppose that c ≠ 0. Then we may assume a = 0 by applying row-column operations. If now a_1≠ 0, we may make c = 0 and normalizing by A^∘
leads us to the representative ϖ^(1,1,1,0)τ_1.
If a_1 = 0 however, then conjugating by w_2 and normalizing by A^∘ gives us the representative ϖ^(1,1,0,1)τ_1.
∙ w = υ_2ρ^3. Here λ_w = (3,0,1,1) and the Weyl orbit diagram is
[r, "s_1"] [r, "s_2"] [rd, "s_3"]
[ru, "s_3"] [r, "s_1", "∘" marking ] [rd, "s_2"'] [ru, "s_3"'] [rd, "s_2"]
[rd, "s_1"] [ru, "s_2"] [rd, "s_3"] [rd, "s_1"]
[rd, "s_1"', "∘" marking] [ru, "s_3"] [r, "s_2"] [rd, "s_3"'] [ru, "s_1"] [r, "s_2"] [rd, "s_1"'] [ru, "s_3"]
[ru, "s_3"'] [rd, "s_2"'] [ru, "s_1"'] [rd, "s_2"] [ru, "s_3"']
[rd, "s_3"'] [r, "s_1"] [ru, "s_2"']
[r, "s_2"'] [r, "s_1",swap, "∘" marking ] [ru, "s_3"] [ru, "s_3"']
There are four cells of interest corresponding to words ε_0 = w_0 w_1 w_2 w_3ρ^3, ε_1 = w_1ε_0, ε_2 = w_1 w_2 w_3ε_0 and ε_3 = w_1 w_2 w_3 w_2 w_1ε_0. Their Schubert cells are
𝒳_ε_0 = 0.8*([ 1 ; ϖ ; ϖ ; yϖ a ϖ^2 c ϖ^2 ϖ^3 ; a ϖ ϖ^2 ; c ϖ ϖ^2 ]), 𝒳_ε_2 : = 0.8*([ ϖ^2 a_1 +c ϖ c_1 ϖ ϖ z ; 1 ; ϖ ; ϖ ; - yϖ -a ϖ^2 -ϖ (a_1 +c ϖ) ϖ^3; -a ϖ -c_1 ϖ ϖ^2 ])
𝒳_ε_1 = 0.8*([ ϖ a_1 ; 1 ; ϖ ; a ϖ ϖ^2 ; a ϖ^2 y ϖ c ϖ^2 -a_1 ϖ^2 ϖ^3; c ϖ ϖ^2 ]), 𝒳_ε_3 = 0.8*([ ϖ^3 a ϖ^2 +a_2 ϖ c ϖ^2 +c_2 ϖ z a_1 ϖ^2 c_1 ϖ^2; ϖ a_1 ; ϖ c_1 ; 1 ; - (a_2 + a ϖ ) ϖ^2 ; -( c_2 + c ϖ ) ϖ^2 ])
where a, a_1, a_2 , c, c_1, c_2∈ [], y ∈ [ _2] and z ∈ [_3]. Then we claim that the U'-orbits on
* 𝒳_ε_0 are represented by ϖ^(3,3,2,2), ϖ^(2,2,1,1)τ_1,
* 𝒳_ε_1 are represented by ϖ^(3,2,3,2) , ϖ^(2,1,2,1) τ _1,
* 𝒳_ε_2 are represented by ϖ^(3,2,3,2), ϖ^(2,1,2,1)τ _1,
ϖ^(2,1,1,2)τ _1, ϖ^(3,2,1,2)τ_2,
* 𝒳_ε_3 are
represented by ϖ^ (3,3,2,2) , ϖ^(2,2,1,1) τ _1, ϖ^(2,2,0,1)τ _1, ϖ^ (3,2,1,2) τ _2.
We record our reduction steps for 𝒳_ε_3.
Begin by eliminating the entry y by a row operation. Observe that if a_1 (resp., c_1) is not zero, then we can assume a (resp., c) is zero by row column operations. Moreover, conjugation by w_2 switches the places of a, a_1, a_2 by c, c_1, c_2 respectively and keeps the diagonal ϖ ^ ( 3,3,1,1). We have three cases to discuss.
Case 1. Suppose a_1 = c_1 = 0. Apply row/column operations to replace a ϖ^2 + a_2ϖ, c ϖ^2 + c_2ϖ by their greatest common divisor (with the other entry being zero). Since we can swap entries by w_2, let's assume that a ϖ^2 + a_2ϖ = 0. We may normalize the gcd by an element of A^∘ so that the greatest common divisor is 0 or ϖ or ϖ^2. Now conjugate by s_2 r_0 r_2 = s_2 (s_1 s_0s_1) s_3∈ U to make the diagonal ϖ^(3,3,2,2) and put the non-diagonal entries in the right place. Thus this case leads us to representatives ϖ^(3,3,2,2)τ_1, ϖ^(3,3,2,2)τ_2.
Case 2. Suppose exactly one of a_1, c_1 is non-zero. Since we can swap these, we may assume wlog a_1≠ 0, c_1 = 0. Then we are free to make a = 0. Now if a_2≠ 0, it can be used to replace the entries a_1 , c , c_2 by zero. Conjugating by r_0 r_2 = w_2 w_3 w_2 w_3 and normalizing by A^∘ gives us ϖ^(3,3,2,2)τ_2. If however a_2 = 0, then we can conjugate by w_3 to make the diagonal ϖ^(3,3,1,2) while moving the c ϖ^2 + c_2ϖ entry corresponding to the root group of e_1 + e_3 - e_0. As a_1≠ 0, we are free to eliminate c_1. There are now two further sub-cases. If c_2 = 0, we obtain the representative ϖ^(3,3,1,2)τ_1 after normalizing by an element of A^∘. If however c_2≠ 0, we can replace a_1 = 0 and conjugating by w_2 w_3∈ U and normalizing by A^∘ gives us ϖ^(3,3,2,2)τ_2.
Case 3. Suppose both a_1, c_1 are non-zero. Then we may assume a, c are zero. If a_2 (resp., c_2) is not zero, we can eliminate entries containing a_1 (resp., c_1). Then an argument similar to Case 2 yields ϖ^(3,3,2,2)τ_2, ϖ^(3,3,1,2)τ_1 as representatives.
∙ w = υ_3ρ^4. The Weyl orbit diagram for λ_w = (4,1,1,1) is the same as for (1,0,0,0) and so we have to analyze cells of length ε_0 = w_0 w_1 w_0 w_2 w_1 w_0ρ^4 and ε_1 = w_1 w_2 w_3ε_0. The two cells are as follows:
𝒳_ε_0 = 0.8*([ 1 ; 1 ; 1 ; x_2 ϖ a_1 ϖ c ϖ ϖ^2 ; a_1 ϖ - x_1 ϖ - a ϖ ϖ^2 ; c ϖ - a ϖ x
ϖ ϖ^2 ]) ρ^2 a , a_1 , c, x
x_1, x_2∈ [ ]
𝒳_ε_1 = 0.8*([ ϖ^2 a_2 + c ϖ c_1 +a ϖ z + x ϖ ; 1 ; 1 ; 1 ; - x_2 ϖ - a_1ϖ - ( a_2 + c ϖ ) ϖ^2 ; - a_1ϖ x_1 ϖ - ( c_1 + a ϖ ) ϖ^2 ]) ρ^2 a , a_1 , a_2 , c, c_1 ,
x , x_1, x_2, z ∈ [ ]
We claim that the U'-orbits on
* 𝒳_ε_0 are given by ϖ^(4,3,3,3), ϖ^(3,2,2,2)τ_1,
* 𝒳_ε_1 are given ϖ^(4,3,3,3), ϖ^(3,2,2, 2)τ_1, ϖ^(4,2,2,3)τ _2,
We record our analysis for orbits on 𝒳_ε_1.
We can eliminate the entries involving a_1, x , x_1 , x_2 , z using row operations. Conjugating by w_3 and w_2 w_3 w_2 gives us
0.8([ ϖ^2 a_2 +c ϖ c_1 + a ϖ; ϖ^2 a_2 +c ϖ ; ϖ^2 c_1 + a ϖ ; 1 ; 1 ; 1 ]) ρ^2
and one can apply Euclidean algorithm to the entries c_1 + a ϖ, a_2 + c ϖ to replace one of them with 0 and the other by the greatest common divisor which is either 0, 1 or ϖ. Conjugating by w_2 and normalizing by A^∘ if necessary, we obtain the three representatives.
∙ w =
υ_4ρ^4. We have λ_w = (4,0,2,2) and the Weyl orbit diagram is the same as for (2,0, 1,1). We need to analyze the
Schubert cells corresponding to ε_0 = w_0 w_1 w_2 w_3 w_2 w_1 w_0ρ^4, ε_1 = w_1 w and ε_2 = w_1 w_2 w_3 w_2 w_1 w. These cells are
𝒳_ε_0 = 0.8*([ 1 ; a ϖ ϖ^2 ; c ϖ ϖ^2 ; yϖ a_1 ϖ^3 c_1 ϖ^3 ϖ^4 -a ϖ^3 -c ϖ^3; a_1 ϖ ϖ^2 ; c_1 ϖ ϖ^2 ]), 𝒳_ε_1 = 0.8*([ ϖ^2 a_2 +a ϖ ; 1 ; c ϖ ϖ^2 ; a_1 ϖ ϖ^2 ; a_1 ϖ^3 y ϖ c_1 ϖ^3 - (a_2 + a ϖ) ϖ^2 ϖ^4 -c ϖ^3; c_1 ϖ ϖ^2 ])
𝒳_ε_2 = 0.8*([ ϖ^4 a_1 ϖ^3 +a_3 ϖ^2 c_1 ϖ^3 +c_3 ϖ^2 a ϖ^3 +a_2 ϖ^2 c ϖ^3 +c_2 ϖ^2; ϖ^2 a_2 +a ϖ ; ϖ^2 c_2 +c ϖ ; 1 ; -a_3 -a_1 ϖ ϖ^2 ; -c_3 -c_1 ϖ ϖ^2 ])
where a , a_1, a_2, a_3, c , c_1 , c_2 , c_3∈ [] and y ∈ [_3]. We claim that the U '-orbits on
* 𝒳_ε_0 are given by ϖ^(4,4,2,2), ϖ^(3,3,1,1)τ _1,
* 𝒳_ε_1 are given by
ϖ^(4, 2, 4, 2 ), ϖ^(3,2,0,1)τ_1, ϖ^(4,3,1,2)τ _2
* 𝒳_ε_2 are given by
ϖ^(4,4,2,2), ϖ^(3,3,1,1)τ _1,
Let us record our steps for the reduction of 𝒳_ε_1.
We begin by eliminating the entries involving y, c, c_1 using row operations. If a_1 = 0, then conjugating by r_0 = w_2 w_3 w_2 and normalizing by an appropriate element of A^∘, we obtain ϖ^(4,2,4,2), ϖ^(3,1,3,1)τ_1, ϖ^(4,1,3,2)τ_2 depending on the valuation of a_2 + a ϖ. Now
U ' ϖ^(3,1,3,1)τ_1 K = U ' ϖ^(3,2,0,1)τ_1 K, U ' ϖ^(4,1,3,2)τ_2 K = U ' ϖ^(4,3,1,2)τ_2 K
by Proposition <ref>. If however a_1≠ 0, then a can be made zero via row-column operations. We then have two further
subcases. If a_2 = 0, then we can conjugate by s_0 =
s_α_0 and normalize by A^∘ to obtain ϖ^(3,1,3,1)τ_1 which is the same as ϖ^(3,2,0,1)τ_1. On the other hand, if a_2≠ 0, then a_1 can be made zero and normalizing by A^∘ gives ϖ^(4,1,3,2)τ_2 which is the same as ϖ^(4,3,1,2)τ_2.
If one instead tries to directly study the U-orbits on the double cosets in the proof above, one needs to study far more Schubert cells and distinguish an enormous number of representatives from each other. For instance for w = υ_2ρ^3, one would need to study 12 cells instead of 4.
§ DOUBLE COSETS OF GL2 X GSP4
In this section, we record the proofs of various claims involving the action of U on double cosets spaces of H '. Since both U and U' have a common _2(_F ) component, the computation of orbits is facilitated by studying the orbits of U_2 on double cosets of H _2 '. This in turn is achieved by techniques analogous to the one used in <ref> for decomposing double cosets of parahoric subgroups of an unramified group.
If h ∈ H_2⊂ H_2 ', we will often write
h = ( [ a b; a_1 b_1; c d; c_1 d_1 ] ) or h = ( ( a b
c d ) , ( a_1 b_1
c_1 d_1 ) ) .
We let Λ_2 denote ℤf_0⊕ℤf_2⊕ℤf_3. We write λ = a_0 f_0 + a_2 f_2 + a_3 f_3∈Λ_2 as (a_0, a_2, a_3) and let ϖ^λ denote the element diag(ϖ^a_2, ϖ^a_3, ϖ^a_0 -a_2, ϖ^a_0 - a_3) ∈ H_2'.
§.§ Projections
Let s : H_2' → H' denote the section of _2' given by γ↦ ( ( [ sim(γ); 1 ] ) , γ ). Fix a compact open subgroup V ⊂ H ' such that _1'(V) = _2(_F ) and an arbitrary element h = (h_1, h_2 ) ∈ H '. Denote V_2 = _2'(V). We refer to
pr_h, V : U \ U' h V / V
→ U_2\ U_2' h_2 V_2 / V_2 ,
U γ V ↦ U_2_2(γ ) V_2
as the projection map. We are interested in the fibers of pr_h,V.
Suppose h_1∈_2(F) is diagonal and either s(V_2) ⊂ V or h_1 is central. If η∈ H_2' has the same similitude as h and U_2η V_2∈ U_2\ U_2 ' h_2 V_2/ V_2, then { U (h_1, η) V } = _h,V^-1(U_2η V_2 ). In particular, pr_h,V is a bijection.
Note that any element of U \ U ' h V / V can be written as U (1, γ) h V for some γ∈Sp_4(_F) and similarly for elements of U_2\ U_2' h_2 V_2/ V_2. This immediately implies that _h,V is surjective.
Suppose now that γ∈Sp_4(_F ) is such that U (1, γ ) h V maps to U_2η V_2 under pr_h,V.
Then there exist u_2∈ U_2, v_2∈ V_2 such that η = u_2γ h_2 v_2. Taking similitudes, we see that sim(u_2) = sim(v_2) ^-1. Let u_1 = diag(1, sim(u_2) ) ∈_2(_F) and set u = (u_1 , u_2) ∈ U.
Take v = s(v_2) ∈ V if s(V_2) ⊂ V or an arbitrary element in (_2')^-1(V) if h_1 is central. Write v = (v_1, v_2). Then
U (1,γ) h V = U u (1,γ) h v V
= U (u_1h_1 v_1, u_2γ_2 h_2 v_2 ) V
= U ( u_1v_1 h_1, η ) V
= U (h_1, η) h V
where we used that h_1 commutes with v_1 in both cases and that (u_1v_1, 1) ∈ U_2.
In case h_1 is non-central or s(V_2) ⊄V, one needs to perform an additional check to determine the fibers of _h, V. Define
S^- = { ( [ 1 ; x 1 ] )
| x ∈_F } , S^+ =
{
([ 1; -1 ])
( [ 1 ϖx; 1 ] ) | x ∈_F }.
For a positive integer a, define S_a^- to be the subset of S^- where we require the variable x to lie in [_a ] (see <ref> for notation) and S_a^+ the subset of S^+ where we require x to lie in [_a-1 ].
We also denote S^± = S^-∪ S^+ and S^±_a = S^-_a∪ S^+_a.
Suppose h_1 = diag (ϖ^u, ϖ^v) with u > v and η∈ H_2' is such that U_2η V_2∈ U_2\ U_2' h V_2 / V_2 with sim(η) = ϖ^ u+v. Then
^-1_h , V ( U_2η V_2 ) = { U (h_1χ , η ) V | χ∈ S^± _ u - v and U' (h_1χ , η ) V = U' h V }
In the proof of Lemma <ref>, one obtains the equality U ( 1, γ ) h V = U (h_1 u_1 v_1 , η ) V with u_1 v_1∈GL_2(𝒪_F ). Now u_1 v_1 can be replaced with a representative in the quotient
( GL_2(𝒪_F ) ∩ h_1 ^-1GL_2(𝒪_F ) h_1 ) \GL_2(𝒪_F)
and S^±_u-v forms such a set of representatives.
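As a sanity check on the cardinalities involved (illustrative only): S^±_a has q^a + q^a-1 elements, which is indeed the index in GL_2(𝒪_F) of the subgroup GL_2(𝒪_F) ∩ h_1^-1 GL_2(𝒪_F) h_1, i.e. the matrices whose upper-right entry lies in ϖ^a𝒪_F (here h_1 = diag(ϖ^u, ϖ^v) and a = u - v as above). Since this subgroup is the full preimage of its reduction modulo ϖ^a, the index can be computed in GL_2(𝒪_F/ϖ^a); the rough snippet below (plain Python or Sage, with q = 3 chosen arbitrarily) verifies this by brute force for a = 1, 2.
\begin{verbatim}
# Brute-force index count in GL_2(Z/q^a) of the subgroup {b = 0 mod q^a},
# compared with |S^pm_a| = q^a + q^(a-1).
from itertools import product

q = 3
for e in (1, 2):
    n = q ** e
    G = [m for m in product(range(n), repeat=4)      # m = (top-left, b, c, d)
         if (m[0] * m[3] - m[1] * m[2]) % q != 0]     # determinant is a unit
    B = [m for m in G if m[1] == 0]                   # upper-right entry = 0
    assert len(G) % len(B) == 0
    assert len(G) // len(B) == q ** e + q ** (e - 1)
print("index checks pass")
\end{verbatim}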
We will need to use the last result for V ∈{ H_τ_1' , H_τ_2' } when lifting coset representatives η for U_2\ U_2' h_2 V_2/ V_2 to U \ U' h V/ V. In almost all cases, it will turn out that there is essentially one choice of γ∈ S^± that satisfies U ' (h_1γ , η)V = U' h V. If there are more than one element in the fiber, we will invoke a suitable Bruhat-Tits decomposition for parahoric double cosets to distinguish them.
§.§ The GSp_4-players
Recall that the roots of H_2' = GSp_4 are identified with
{±β_0, ±β_1, ±β_2 , ± (β_1 + β_2) } .
To compute these decompositions, we let
v_0 = 0.9(
1ϖ
1
ϖ
-1
) ,
v_1 = 0.9(
1
1
1
1 ) ,
v_2 =
0.9( 1
1
- 1
1 )
which respectively represent the reflections t(f_2) r_0, r_1, r_2 that generate the affine Weyl group W_2, aff' of H_2'. We also denote v_β_0 = diag(ϖ, 1, ϖ^-1, 1) v_0, which represents the reflection r_0 in the root β_0. For i = 0, 1, 2, let y_i : 𝔾_a→ H_2' be the maps
y_0 : u ↦0.9( 1
1
u ϖ 1
1 ),
y_1 : u ↦0.9( 1 u
1
1
- u 1 ) ,
y_2 : u ↦0.9(
1
1 u
1
1 ) .
If I_2' ⊂ H_2' denotes the Iwahori subgroup given by _2(I'), then y_i([]) forms a set of q distinct representatives for the quotient I_2'/ (I_2' ∩ v_i I_2' v_i). For each i = 0, 1, 2, let
h_r_i : [] → H_2 ' , κ↦ y_i(κ) v_i .
Let W_I_2' denote the Iwahori Weyl group of H_2' and l : W_I_2'→ denote the length function induced by _2(S_aff') = { r_1, r_2 , t(f_2) r_0}. For v ∈ W_I_2' and v = r_v,1 r_v,2⋯ r_v, lω_v (where r_v,i∈_2(S_aff'), ω_v∈_2( Ω_H') is a power of _2(ω_H')) is a reduced word decomposition, we set
𝒴_v : []^l(v) ⟼ H_2 '
(κ_1, …, κ_l(v)) ↦ h_r_v,1(κ_1) ⋯ h_r_v,l(v)(κ_l(v)) ρ_2,v
where ρ_2,v∈ H_2' is the element representing ω_v. For a compact open subgroup V ⊂ H_2', we let 𝒴_v/V denote the coset space im(𝒴_v)V/V, which we will also refer to as a Schubert cell.
§.§ Orbits on U' h U'/U'
Let W_2 denote the Weyl group of H_2 = GL_2×_𝔾_mGL_2. We can identify W_2 with the subgroup of W_2' generated by r_0 and r_2. For η∈ H_2', denote H_2∩η U_2' η^-1 by H_2,η. Then the map
U_2ϖ^Λ_2η U_2' → U_2ϖ^Λ_2 H_2,η , U_2ϖ^λη U_2 ' ↦ U_2ϖ^λ H_2,η
is a bijection. Let η_1, η_2 denote the projection of ϱ_1, ϱ_2 given in (<ref>) to H_2 '. Explicitly,
η_1
= 0.9( ϖ 1
ϖ 1
1
1 ) , η_2 = 0.9( ϖ^2 1
ϖ^2 1
1
1 )
The cosets H_2 U_2', H_2η_1 U_2' and H_2η_2U_2' are pairwise disjoint.
This is similar to Lemma <ref>. See also Remark <ref>.
The map W_2\Λ_2→ U_2ϖ^Λ_2 U _2' given by W_2λ↦ U_2ϖ^λ U_2' is a bijection. If λ, μ∈Λ_2 are not in the same W_2-orbit, then U_2ϖ^λη_1 U_2' is distinct from U_2ϖ^μη_1 U_2'.
The first claim follows by the bijection (<ref>) and Cartan decomposition for H_2. It is easily verified that H_2,η_1⊆ U_2, so the second claim also follows by Cartan decomposition for H_2.
For i = 1 ,2 and any λ∈Λ_2, U_2ϖ^λη_i U_2' = U_2ϖ^r_0r_2(λ) η_i U_2'.
This follows by noting that η_i^-1 v_β_0 v_2η_i∈ U_2' for i = 1 , 2 and v_β_0 v_2∈ U_2.
For h ∈ H_2', let ℛ(h) denote the double coset space U_2\ U_2' h U_2'/ U_2'.
By Lemma <ref>, it suffices to establish that
* ℛ(ϖ^(1,1,1)) =
{ϖ^(1,1,1), η_1},
* ℛ(ϖ^(2,2,1)) = {ϖ^(2,2,1), ϖ^(2,1,2), ϖ^(1,1,0)η_1},
* ℛ(ϖ^(2,2,2)) =
{ϖ^(2,2,2), ϖ^(1,1,1)η_1, η_2}
* ℛ(ϖ^(3,3,2)) =
{ϖ^(3,3,2), ϖ^(3,2,3), ϖ^(2,2,1)η_1, ϖ^(2,2,0)η_1, ϖ^(1,1,0)η_2},
* ℛ(ϖ^(4,4,2)) = {ϖ^(4,4,2), ϖ^(4,2,4), ϖ^(3,3,1)η_1, ϖ^(2,2,0)η_2}.
It is easy to check using Lemma <ref> and Lemma <ref> that the listed elements in each case represent distinct double cosets. It remains to show that they form a complete set of representatives. Here we again use the recipe given by <cit.>. As before, we will write the parameters of the cells below them and omit writing U_2 ' next to the matrices.
(a) & (b) These were calculated in <cit.>.
(c) We have U_2' ϖ^(2,2,2) U_2' = U_2' v_0v_1v_0ρ^2 _2 U_2' and v_0 v_1 v_0ρ^2_2 is of minimal possible length. The Weyl orbit diagram of (2,2,2) is
∙ →^r_2 ∙ →^r_1 ∙ →^r_2 ∙
So we need to analyze the cells corresponding to the first and the third node, which are of length 3 and 5 respectively. Let ε_0 = v_0v_1v_0ρ^2_2 and ε _1 = v_1 v_2v_0v_1v_0ρ^2_2 be the words corresponding to these nodes. We have
𝒴_ε_0 / U_2' = 0.9*([ 1 ; 1 ; x_1ϖ a ϖ ϖ^2 ; a ϖ x ϖ ϖ^2 ]), 𝒴_ε_1 / U_2' = 0.9*([ ϖ^2 a_1+aϖ y+ϖ x; 1 ; 1; x_1ϖ -(a_1 + a ϖ) ϖ^2 ])
where a, a_1 , x , y run over [].
For the first cell, eliminate ϖ x_1, ϖ x via row operations and conjugate by v_α_0v_0. For the second, eliminate y+ ϖ x, ϖ x_1 similarly and conjugate by v_2. The resulting matrices are
0.9(
ϖ^2 aϖ
ϖ^2 aϖ
1
1
), 0.9(
ϖ^2 a_1 + aϖ
ϖ^2 a_1 + aϖ
1
1
)
respectively. By conjugating with appropriate diagonal matrices, the left matrix can be simplified to ϖ^(2,2,2) or ϖ^(1,1,1)η_1 depending on whether a is zero or not. Similarly the second one simplifies to one of ϖ^(2,2,2), ϖ^(1,1,1)η_1, η_2.
(d) We have U_2 ' ϖ^(3,3,2) U_2' = U_2' v_0v_1v_2ρ^3_2 U_2' with v_0 v_1 v_2ρ^3_2 of minimal possible length. The Weyl orbit diagram of (3,3,2) is
[r, "r_2"] [r,"r_1"] [rd,"r_2"]
[ru,"r_1"][rd,"r_2"]
[r, "r_1"] [r, "r_2"] [ru, "r_1"]
There are four cells to analyze which have lengths 3, 4, 5 and 6. These correspond to ε_0 = v_0 v_1 v_2ρ^3_2, ε_1 = v_1ε_0, ε_2 = v_1 v_2ε_0 and ε_3 = v_1 v_2 v_1ε_0.
The matrices in the corresponding cells are as follows:
𝒴_ε_0/ U_2' = 0.9*([ 1 ; ϖ ; z ϖ a ϖ^2 ϖ^3 ; a ϖ ϖ^2 ]),
𝒴_ε_2 / U_2' = 0.9*([ ϖ^2 a_1 + aϖ y_1ϖ ; 1 ; ϖ ; z ϖ (a_1 + a ϖ)ϖ ϖ^3 ]),
𝒴_ε_1 / U_2' = 0.9
*([ ϖ a_1 ; 1 ; a ϖ ϖ^2 ; a ϖ^2 z ϖ -a_1ϖ^2 ϖ^3 ]),
𝒴_ε_3 / U_2 ' = 0.9
*([ ϖ^3 (a + a_2ϖ) ϖ y_1 + z ϖ a_1ϖ^2; ϖ a_1 ; 1 ; -(a_2+ a ϖ ) ϖ^2 ])
where a, a_1 , a_2 , y_1∈ [] and z ∈ [_2 ]. From these matrices and using elementary row/column operations[A slightly non-obvious operation is ϖ^(2,0,2)η_2→ϖ^(2,2,0)η_2 obtained from Lemma <ref>.] arising from U_2, U_2', one can deduce that the orbits of U_2 on
* 𝒴_ε_0/ U_2 ' are given by ϖ^(3,3,2), ϖ^(2,2,1)η_1,
* 𝒴_ε_1/ U_2' are given by ϖ^(3,2,3), ϖ^(2,2,0)η_1, ϖ^(2,1,2)η_1,
* 𝒴_ε_2/ U_2' are given by ϖ^(3,2,3), ϖ^(2,1,2)η_1, ϖ^(1,1,0)η_2,
* 𝒴_ε_3/ U_2 ' are given by ϖ^(3,3,2), ϖ^(2,2,0)η_1, ϖ^(2,2,1)η_1, ϖ^(1,1,0)η_2.
(e) We have U_2' ϖ^(4,2,2) U_2' = U_2' v_0v_1v_2v_1v_0ρ^4_2 U_2' and v_0 v_1v_2 v_1v_0ρ^4_2 is of minimal possible length. The Weyl orbit diagram for (4,2,2) is
∙ →^r_1 ∙ →^r_2 ∙ →^r_1 ∙
So we have three cells to check, corresponding to ε_0 = v_0 v_1 v_2 v_1 v_0ρ^4_2, ε_1 = v_1ε_0 and ε_2 = v_1 v_2 v_1ε_0. The matrices in the corresponding cells are as follows:
𝒴_ε_0 /U_2 ' = 0.9*([ 1 ; a ϖ ϖ ^2 ; z ϖ a_1ϖ^3 ϖ^4 - aϖ^3; a_1ϖ ϖ^2 ]),
𝒴_ε_1/ U_2' = 0.9*([ ϖ^2 a_2 + aϖ ; 1 ; a_1ϖ ϖ^2 ; a_1ϖ^3 z ϖ - ( a_2 + a ϖ)ϖ^2 ϖ^4 ])
𝒴_ε_2/U_2' =
0.9*([ ϖ^4 (a_1 + a_3ϖ)ϖ^2 y_1 + z ϖ (a_2 + aϖ) ϖ^2; ϖ^2 a_2 + a ϖ ; 1 ; (a_3 + a_1ϖ) ϖ^2 ])
where a, a_1, a_2, a_3, y_1∈ [] and z ∈ [_3 ]. From these, one deduces that the orbits of U_2 on
* 𝒴_ε_0/U_2' are given by ϖ^(4,4,2), ϖ^(3,3,1)η _1,
* 𝒴_ε_1/U_2' are given by ϖ^(4,2,4), ϖ^(3,1,3) η_1, ϖ^(2,2,0)η_2
* 𝒴_ε_2/U_2 ' are given by ϖ^(4,4,2), ϖ^(3,3,1)η_1, ϖ^(2,2,0)η_2.
Note that we make use of U_2ϖ^(2,2,0)η_2 U_2' = U_2ϖ^(2,0,2)η_2 U_2' which holds by Lemma <ref>.
§.§ Orbits on U'h H_τ_1'/H_τ_1'
The proof of Proposition <ref> is based on Lemma <ref>. To compute the decompositions of the projections of U'ϖ^λ H_τ_1' to H_2', it will be convenient to work with the conjugate H_τ_∘' of H_τ_1' introduced in Notation <ref>. This is done since the projection U_2^∘ : = _2'(H_τ_∘') is a (standard) maximal parahoric subgroup of GSp_4(F). It is possible to perform these computations with _2'(H_τ_1') instead, but this requires us to introduce a different Iwahori subgroup of GSp_4.
Recall that W_2' denotes the Weyl group of H_2' and W_I_2' the Iwahori Weyl group. Let W^∘_2 denote the Coxeter subgroup of W_I_2' generated by T_2 := { t(f_2)r_0 , r_2 }. Each coset W_2' w W_2^∘∈ W_2' \ W_I_2' / W_2^∘ contains a unique element of minimal possible length which we refer to as a (W_2', W_2^∘)-reduced element. We let [W_2' \ W _ I_2' / W_2^∘] denote the subset of W_I_2' of all (W_2', W_2^∘)-reduced elements. If w ∈ W_I_2' is such a reduced element, the intersection
W_2, w' := W_2' ∩ w W_2^∘ w^-1
is a Coxeter subgroup of W_2' generated by T_2,w := w T_2w^-1∩ W_2'. Then each coset in W_2'/ W_2,w' contains a unique element of minimal possible length. The set of all representative elements for W_2'/W_2,w' of minimal length is denoted by [ W_2' / W_2,w' ].
Then the decomposition recipe of <cit.> says the following.
For any w ∈ [ W_2' \ W_I_2' / W_2^∘ ],
U_2' w U^∘_2 = _τ_∈ []^l(τ w)𝒴_τ w() U_2^∘
where τ runs over [W_2'/W_2,w'].
Note that l(τ w) = l(τ) + l(w) for τ∈ [W_2' / W_2,w'] and w ∈ [W_2'\ W_I_2'/ W_2^∘].
For each λ∈Λ_2^+ below, the specified element w = w_λ∈ W_I_2' is the unique element of W_I_2' of minimal possible length such that U_2' ϖ^λ U_2^∘ = U_2' w U_2^∘:
* λ = (1,1,1), w = ρ_2
* λ = (2,2,2), w = v_0 v_1ρ^2_2
* λ = (3,3,2), w = v_0 v_1ρ^3_2
* λ = (3,2,3), w = v_0 v_1 v_2 v_1ρ^3_2
* λ = (4,4,2), w = v_0 v_1 v_2 v_1ρ^4_2
It is easy to verify the equality of cosets for each λ and w. To check that the length is indeed minimal, one can proceed as follows. Under the isomorphism U_2' \ H_2'/U_2^∘≃ W_2'\ W_I_2'/ W_2^∘, the coset U_2' ϖ^λ U_2^∘ corresponds to W_2' t(-λ) W_2^∘. The minimal possible length of elements in W_2' t(-λ ) W_2^∘ is the same as that for W_2^∘ t(λ) W_2' (taking inverses establishes a bijection). One can then use the analogue of (<ref>) for GSp_4 to find the
minimal possible length in each of γ t(λ) W_2' for every γ∈ W^∘_2 = { 1, r_2 , t(f_2) r_0, t(f_2)r_0r_2}. For instance,
W^∘_2 t(3,2,3) = { t(3,2,3), t(3,2,0) }
and the minimal lengths of elements in t(3,2,3)W_2' is 4 while that of t(3,2,0)W_2' is 5.
1, v_1, η_1 and η_2
represent distinct classes in H_2\ H_2' / U_2^∘.
We need to show that for distinct γ , γ' ∈{ 1, v_1, η_1, η_2}, γ ^-1 h γ ' ∉ U_2^∘ for any h ∈ H_2. Writing h as in Notation <ref>, we have
hv_1 = 0.9[ a b; a_1 b_1 ; c d; c_1 d_1 ] ,
h η_i = 0.9[ a ϖ^i b a; a_1ϖ^i a_1 b_1; c ϖ^i d c; c _1ϖ^i c_1 d_1 ],
v_1 h η_i =
0.9[ a_1ϖ ^i a_1 b_1; a ϖ ^i b a; c_1ϖ ^i c_1 d_1; c ϖ d c ]
where i = 1, 2 and
η^-1_1 h η_2 =
0.9[ a ϖ - c_1ϖ b-c_1/ϖ a - d _1/ϖ; -c ϖ a_1ϖ a_1 -d_1/ϖ b_1-c/ϖ; c ϖ^2 d c; c _1ϖ^2 c_1 d_1 ].
If h v_1∈ U_2^∘, then a_1, c_1∈ϖ_F. Since all entries of hv _1 are integral, this would mean det(h v_1 ) ∈ϖ_F, a contradiction. If hη_i∈ U_2^∘, then all entries of h excluding b are integral and b ∈ϖ^-1_F. Since the first two columns of h η_i are integral multiples of ϖ, this would still make det( h η_i ) ∈ϖ_F, a contradiction. Similarly for v_1 h η_i. Finally, η_1^-1 h η_2∈ U_2^∘ implies that c, d, c_1, d_1∈_F and the top right 2 × 2 block implies a , a_1∈_F. So again, the first two columns are integral multiples of ϖ making det( η_1 ^-1 h η_2 ) ∈ϖ_F, a contradiction.
For this subsection only, we let ℛ_V(h) denote the double coset space U_2\ U_2' h V / V, where h ∈ H_2' and V ⊂ H_2' is a compact open subgroup.
We have
* ℛ_U_2^∘(ϖ^(2,1,1)) = {ϖ^(2,1,1), ϖ^(2,1,1)v_1 , ϖ^(1,1,0)η_1}
* ℛ_U_2^∘(ϖ^(2,2,2) ) = {ϖ^(2,2,2), ϖ^(2,2,2)v_1, ϖ^(1,0,0)η_1, ϖ^(1,0,1)η_1, ϖ^(1,1,1)η_1, η_2}
* ℛ_U_2^∘(ϖ^(3,2,3)) = {ϖ^(3,2,3), ϖ^(3,3,2)v_1, ϖ^(2,0,1)η_1, ϖ^(2,1,2)η_1, ϖ^(1,0,1)η_2}
and ℛ_U_2^∘(ϖ^(2,2,1)) = ℛ_U_2^∘(ϖ^(2,1,1)).
That the representatives are distinct follows by Lemma <ref> and by checking that H_2∩η_1 U_2^∘η_1^-1 is contained in an Iwahori subgroup of H_2 (see e.g., the argument in Lemma <ref>). As usual, we show that all the orbits are represented by studying the U_2-orbits on Schubert cells. Note that
W_2' = { 1, r_1, r_2, r_2r_1, r_1 r_2, r_1r_2r_1, r_2r_1r_2, r_2r_1r_2r_1}
and (r_2r_1)^2 = (r_1 r_2)^2.
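(Both facts about W_2' can be confirmed quickly in Sage; this is purely illustrative and the labelling of the simple reflections is immaterial here.)
\begin{verbatim}
# The Weyl group of type C2 has order 8 and (r2*r1)^2 == (r1*r2)^2.
W2 = WeylGroup(['C', 2], prefix='r')
r1, r2 = W2.simple_reflection(1), W2.simple_reflection(2)
print(W2.cardinality())            # 8
print((r2*r1)^2 == (r1*r2)^2)      # True
\end{verbatim}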
(a) w = ρ_2^2. We have W_2,w ' = W_2' ∩ W_2 ^∘ = ⟨ r_2⟩, so W_2'/W_2,w =
{ W_2,w' , r_1 W_2,w', r_2r_1 W_2,w' , r_1r_2 r_1 W_2,w' } .
So [W_2'/W_2,w] = { 1 , r_1, r_2 r_1 , r_1 r_2 r_1}. Thus to study ℛ_U_2^∘(w), it suffices to study the U_2-orbits on cells corresponding to ε_0 = ρ_2^2, ε_1 = r_1ρ_2^2 and ε_2 = r_1 r_2 r_1ρ_2^2. Now 𝒴_ε_0/U_2^∘ = ϖ^(2,1,1)U_2^∘ and
𝒴_ε_1/U_2^∘ = *0.9[ a ϖ ϖ; ϖ; ϖ; ϖ -a ϖ ] 𝒴_ε_2/U_2^∘ = 0.9*[ y a_1ϖ ϖ - a ϖ; a ϖ ϖ; ϖ; -a_1ϖ -ϖ ]
where a, a_1, y ∈ []. For 𝒴_ε_1/U_2^∘, the case a = 0 clearly leads to ϖ^(2,1,1)v_1. If a ≠ 0, then we can multiply by diag(a^-1,1,1,a^-1 ) on the left and diag(1,a,a,1) on the right to assume a = 1. We then hit with v_β_0∈ U_2 on left and v_0∈ U_2^∘ on right to arrive at diag(-1,1,1,-1) (which we can ignore) times
[ ϖ; 1; ϖ 1; ϖ^2 ϖ ].
Now a simple column operation and a left multiplication by a diagonal matrix in the compact torus transforms this into ϖ^(1,1,0)η_1. As for 𝒴_ε_2/U_2^∘, begin by eliminating y with a row operation. Then note that conjugation by v_2 swaps a with a_1 and reverses all signs. So after applying operations involving second and fourth row and columns, we may assume wlog that a_1 = 0. Right multiplication by v_2 yields the matrix
[ ϖ^2 a ϖ; ϖ a; 1; ϖ ] .
which results in either ϖ^(2,2,1) (which represents the same class as ϖ^(2,1,1)) or ϖ^(1,1,0)η_1. So all in all, we have three representatives: ϖ^(2,1,1), ϖ^(2,1,1)v_1, ϖ^(1,1,0)η_1.
(b) w = v_0 v_1ρ^2_2. Here w W^∘_2w^-1 = ⟨ t(f_3)r_2, t(f_2)r_0⟩,
so W_2,w' is trivial. So we need to analyze cells corresponding to ε_0 = v_0v_1ρ_2^2, ε_1 = v_1ε_0, ε_2 = v_1 v_2ε_0 and ε_3 = v_1 v_2v_1ε_0.
The corresponding cells are
𝒴_ε_0/ U_2 ^ ∘ =
0.9*([ 1; ϖ ; a ϖ^2 ϖ^2 ϖ x; - ϖ a ϖ ]),
𝒴_ε_2 / U_2^∘ =
0.9*([ y ϖ - ϖ a_1 + a ϖ; 1; -ϖ ; (a_1 + a ϖ ) ϖ ϖ^2 x ϖ ]),
𝒴_ε_1 / U_2 ^∘ = 0.9
*([ ϖ a_1; 1; -ϖ a ϖ; a ϖ^2 ϖ^2 a_1ϖ x ϖ ]),
𝒴_ε_3 / U_2^∘ = 0.85
*([ (a_2 + aϖ) ϖ ϖ^2 a_1ϖ y + ϖ x; ϖ a_1; 1; ϖ -(a_2+ a ϖ ) ])
where a, a_1, a_2, x, y ∈ []. Using similar arguments on these, one deduces that the orbits of U_2 on
* 𝒴_ε_0/U_2^∘ are represented by ϖ^(2,2,2)v_1, ϖ^(1,0,0)η_1,
* 𝒴_ε_1/U_2^∘ are represented by ϖ^(2,2,2), ϖ^(1,0,1)η_1, ϖ^(1,1,1)η_1,
* 𝒴_ε_2/U_2^∘ are represented by ϖ^(2,2,2), ϖ^(1,1,1)η_1, η_2
* 𝒴_ε_3/U_2^∘ are represented by ϖ^(1,0,0) η_1, ϖ^(1,0,1)η_1, η_2.
(c) w = v_0 v_1 v_2 v_1 ρ_2^3. Here w W^∘_2w^-1 = ⟨ r_2 , t(3f_2)r_0⟩ which means that W _2, w' = ⟨ r_2⟩. So as in part (a), we have [W_2'/W_2,w] = { 1, r_1 , r_2 r_1, r_1 r_2 r_1}. Again, we have three cells to analyze, which correspond to ε_0 = v_0v_1v_2v_1ρ_2^3, ε_1 = v_1ε_0 and ε_2 = v_1 v_2 v_1ε_0. The corresponding cells are
𝒴_ε_0 /U_2 ^ ∘ = 0.9*([ 1; ϖ a ϖ; - aϖ^3 ϖ^3 a_1ϖ^2 (x + y ϖ) ϖ; ϖ^2 a_1ϖ ]),
𝒴_ε_1 /U_2^∘ = 0.9*([ ϖ a_2 + aϖ; 1; ϖ^2 a_1ϖ; -(a_2 + aϖ) ϖ^2 ϖ^3 a_1ϖ^2 ϖ ( x + ϖ y ) ])
𝒴_ε_2/U_2^∘ =
0.9*([ -(a_2 + aϖ) ϖ^2 ϖ^3 (a_3 + a_1ϖ) ϖ z; ϖ a_2 + a ϖ; 1; - ϖ^2 - (a_3 + a_1ϖ) ])
where a,a_1,a_2, a_3 , x,y ∈ []. From these, we deduce that
* 𝒴_ε_0/U_2^∘ are represented by ϖ^(3,3,2) v_1, ϖ^(2,0,1)η_1,
* 𝒴_ε_1/U_2^∘ are represented by ϖ^(3,2,3), ϖ^(2,1,2) η_1, ϖ^(1,0,1)η_2,
* 𝒴_ε_2/U_2^∘ are represented by ϖ^(3,3,2) v_1, ϖ^(2,0,1)η_1, ϖ^(1,0,1)η_2.
We can use Proposition <ref> to obtain representatives for the remaining words computed in Lemma <ref> without computing Schubert cells.
We have
* ℛ_U^∘_2(ϖ^(1,1,1)) = {ϖ^(1,1,1) v_1, ϖ^(1,1,1) , η_1}
* ℛ_U^∘_2(ϖ^(3,3,2)) = {ϖ^(3,2,3)v_1 , ϖ^(3,3,2), ϖ^(2,2,1)η_1, ϖ^(2,2,0)η_1, ϖ^(2,1,0)η_1, ϖ^(1,1,0)η_2}
* ℛ_U^∘_2 ( ϖ^(4,4,2) ) = {ϖ^(4,2,4)v_1 , ϖ^(4,4,2),
ϖ^(3,3,1)η_1, ϖ^(3,2,0)η_1, ϖ^(2,2,0)η_2}
Since the class of ρ_2 normalizes W_2^∘ (see diagram (<ref>)) and ρ_2 normalizes the Iwahori subgroup I_2', it normalizes U_2^∘. Thus for any integer k, the representatives for ℛ_U_2^∘(hρ^k_2) can be obtained from ℛ_U_2^∘(h) by multiplying representatives on the right by ρ^k_2. Now we have the following relations:
ρ_2 U_2^∘ = ϖ^(1,0,1) v _1 U_2^∘, v_1ρ_2 U_2^∘ = ϖ^(1,1,0) U_2^∘, η_iρ_2 U_2^∘ = v_α_0v_2ϖ^(1,1,0)η_i U_2^∘
for i = 1 , 2.
By Lemma <ref>, parts (a), (b) and (c) are obtained by the corresponding parts of Proposition <ref>. For instance, ϖ^(1,1,0)η_1∈ℛ_U_2^∘(ϖ^(2,1,1)) corresponds to ϖ^-(1,0,1)η_1ρ_2 and
U_2ϖ^-(1,0,1)η_1ρ_2 U_2 = U_2ϖ^-(1,0,1) v_α_0 v_2ϖ^(1,1,0)η_1 U_2^∘
= U_2v_α_0 v_2η_1 U_2^∘ = U_2η_1 U_2^∘
which gives the representative η_1 in ℛ_U_2^∘(ϖ^(1,1,1)).
Let us denote U_2^† : = _2'(H_τ_1'). Since H_τ_1 is the conjugate of H_τ_∘ by ϖ^(1,1,1,1), U_2^† is the conjugate of U^∘_2 by ϖ^(1,1,1). Set
η_0: = η_1ϖ^-(1,1,1) = 0.9( 1 1
1 1
1
1 ) .
Then η_iϖ^-(1,1,1) = η_i-1 for i = 1 , 2. Moreover v_1 commutes with ϖ^(1,1,1). So by Proposition <ref> and Corollary <ref>, we obtain the following.
We have
* ℛ_U_2^†(ϖ^(1,0,0)) = {ϖ^(1,0,0), ϖ^(1,0,0) v _1, ϖ^(1,1,0)η_0},
* ℛ_U_2^†(ϖ^(1,1,1)) = {ϖ^(1,1,1), ϖ^(1,1,1) v _1, ϖ^(1,0,0)η_0 , ϖ^(1,0,1)η_0, ϖ^(1,1,1)η_0 , η_1}
* ℛ_U_2^†(ϖ^(2,1,2)) = {ϖ^(2,1,2), ϖ^(2,2,1) v _1, ϖ^(2,0,1)η_0, ϖ^(2,1,2)η_0, ϖ^(1,0,1)η_1}
* ℛ_U_2^†(ϖ^(0,0,0)) = { 1, v _1, η_0}
* ℛ_U_2^†(ϖ^(2,2,1)) = {ϖ^(2,1,2) v _1 , ϖ^(2,2,1), ϖ^(2,2,1)η_0,
ϖ^(2,2,0)η_0, ϖ^(2,1,0)η_0, ϖ^(1,1,0)η_1}
* ℛ_U_2^†(ϖ^(3,3,1)) = {ϖ^(3,1,3)v _1 , ϖ^(3,3,1),
ϖ^(3,3,1)η_0, ϖ^(3,2,0)η_0, ϖ^(2,2,0)η_1}
Note that Proposition <ref> implies that, for any λ,
ℛ_U^†_2(ϖ^λ) = ℛ_U_2^† (ϖ^r_0(λ)) = ℛ_U_2^†(ϖ^r_2(λ) - (0,0,1) )
Thus Corollary <ref> records the decompositions of the all the projections in Proposition <ref>.
Next, we study the fibers of the projection _λ : ℛ_1(ϖ^λ) →ℛ_U_2^†(ϖ^_2(λ)) and use Corollary <ref> to calculate coset representatives given in Proposition <ref>. Let us denote by Λ^α_0 > 0 the set of all λ∈Λ such that α_0(λ) > 0. We first specialize Corollary
<ref> to the case of H_τ_1'.
For any λ = (a, b, c , d ) ∈Λ^α_0 > 0 and η∈ H_2' such that U_2η U_2^†∈ U_2\
U_2 ' ϖ^_2(λ) U_2^† / U_2^† with sim(η) = a, the fiber of _λ above U_2η U_2^† is
{ U
( ϖ^(a,b)χ , η ) H_τ_1' | χ∈ S^±_1 and U ' (ϖ^(a,b)χ, η ) H_τ_1' = U' ϖ^λ H_τ_1' }
This follows since ( [ 1 ; x 1 ] , 1 ), ( [ 1 x; 1 ] , 1 ) lie in H_τ_1' if x ∈ϖ_F.
For x ∈_F, define
ϰ^+(x) = ( 0.9( 1 x
1 ) , 1.1( [ ; 1 ; 1; x 1; 1 ] ) ) , ϰ^-(x) = ( 0.9( 1
x 1 ) , 1.1( [ ; 1 x; 1; 1; 1 ] ) ) .
Note that ϰ^±(x) ∈ H_τ_1 ' as these elements are in the subgroup 𝒳_τ⊂_τ_1 introduced in Notation <ref>. We let κ^±_1(x) = _1'( ϰ_± (x)) and κ^±_2(x) = _2' (ϰ^∓(x)) denote their projections.
If λ∈Λ_α_0^>, then
U ϖ^λχ H_τ_1' ∈{ U ϖ^λ H_τ_1', U ϖ^s_0(λ) H_τ_1'} for any χ∈ S_1^±×{ 1 }.
Write χ = (χ_1, 1 ). If χ_1∈ S^+ = {[ 1; - 1 ]}, the claim is clear. If χ∈ S_1^-, let x ∈_F be such that χ_1 = κ_1^-(x). The case x = 0 is also obvious, so we assume that x ∈_F^×. Observe that
U ϖ^λχ
H_τ_1' = U ϖ^λχϰ^-(-x) H_τ_1'
=
U ϖ^λ ( 1 , κ_2^+(-x) ) H_τ_1' .
If 2c - a ≥ 0, the conjugate of ( 1 , κ^+_2(-x) ) by ϖ^λ lies in U and so our double coset equals U ϖ^λ H_τ_1'.
If 2c - a < 0 however, then the conjugate of ( κ^+_1(-x^-1) , κ^-_2(x^-1) ) by ϖ^λ lies in U.
So U ϖ^λχ H_τ_1'
= U ϖ^λ (κ_1^+(-x^-1), κ_2^-(x^-1)κ_2^+(-x) ) H_τ_1'
Now note that
(κ_1^+(-x^-1), κ_2^-(x^-1)κ_2^+(-x) ) ·ϰ ^+(x^-1)
= ( 1 ,
[ -x; 1; 1/x; 1 ] ) .
From this, it follows that U ϖ^λχ H_τ_1 ' = U ϖ^r_0(λ)H_τ_1' which equals U ϖ^s_0(λ) H_τ_1 '.
For λ∈Λ^α_0 >,
U ϖ^λσ_1χ H_τ_1' ∈{ U ϖ^λσ_1 H_τ_1', U ϖ^s_0(λ)σ_1 H_τ_1' }
for any χ∈ S^± _ 1 ×{ 1 }.
Since v _1 normalizes U_2 and σ_1 = (1, v_1 ) commutes with χ
we see that U ϖ^λσ_1χ H_τ_1' = σ _1 U ϖ^r_1(λ)χ
H_τ_1 '. The claim now
follows from the previous part by noting that r_1 commutes with s_0.
Next we record results on double cosets involving σ_2.
H ∩σ_2 H_τ_1' σ_2^-1 = H ∩ς_2 K ς_2 ^-1 is contained in the Iwahori subgroup J_ς_2 of triples (h_1, h_2, h_3) ∈ U where h_2 reduces to an upper triangular matrix modulo ϖ and h_1, h_3 reduce to lower triangular matrices.
This follows by a stronger result established in Lemma <ref>.
Suppose λ = (a, b, c, d ) ∈Λ_α_0^> and let χ = (χ_1, 1) ∈ S^±_1×{ 1 }.
* If χ _1 = [ 1; x 1 ]∈ S^-_1, then
U ' ϖ^λσ_2χ H_τ_1' =
U 'ϖ^λ H_τ_1' if x ∈_F^×, c + d ≥ a , 2c ≥ a or if x = 0, c+ d ≥ a ,
U' ϖ^r_0(λ) H_τ_1' if x ∈_F^× , c + d ≥ a > 2c ,
U ' ϖ^r_1 r_0(λ) H_τ_1' if x ∈_F^× , 2d > a > c+ d ,
U ' ϖ^r_0r_1 r_0(λ) H_τ_1' if x ∈_F^×, a > c + d, a ≥ 2d or if x = 0, a > c + d .
* If χ_1 = [ 1; - 1 ]∈ S^+_1, then
U ' ϖ^λσ_2χ H_τ_1' = U 'ϖ^r_0(λ) H_τ_1' if c + d ≥ a
U ' ϖ^r_1 r_0(λ) H_τ_1' if
a > c + d
For x ∈_F, define
ν_2^+ = [ 1 -1; 1 - 1; 1; 1 ] , ν_2^- = [ 1 ; 1; -1 1; -1 1 ]
and set ν^± = (1, ν_2^±). Note that ν ^ - ∈ H_τ_1'. Now if c + d ≥ a, we have ϖ^λν^+ϖ^-λ∈ U'. Thus
U' ϖ^λσ_2χ H_τ_1' = U ' ϖ^λν^+σ_2χ H_τ_1' = U ' ϖ^λχ
H_τ_1'
since ν^+σ_2 = ( 1, ν_2^+η_0 ) = (1, 1). If on the other hand a > c + d, then ϖ^λν^-ϖ^-λ∈ U', and so
U ' ϖ ^ λσ_2χ H_τ_1' = U ϖ^λν^-σ_2χν^- H_τ_1'
= U' ϖ^λ ( χ_1 , ν_2^-[ 1; 1; -1 1; -1 1 ] ) H_τ_1
= U' ϖ^λ ( χ_1 , [ 1; 1; -1 ; -1 ] ) H_τ_1
= U ' ϖ^r_0 r_1 r_0 (λ)χ H_τ_1' .
The rest of the proof now proceeds along exactly the same lines as Lemma <ref> which determines the classes of U ϖ^μχ H_τ_1' for any χ∈ S^± _ 1 ×{ 1 }, μ∈Λ and hence those of U' ϖ^μχ H_τ_1'.
Suppose λ∈Λ^α_0 >0 satisfies either β_0(λ) ≥ 0 or β_2(λ) ≤ 0. Then Uϖ^λσ_2χ H_τ_1 = U ϖ^λσ_2 H_τ_1' for any χ∈ S^-_1×{ 1 }.
Write χ = (χ_1, 1) and χ_1 = [ 1 ; x 1 ]. If 2c ≥ a, replace ϖ^λσ_2χ with ϖ^λσ_2ϰ^-(-x) = ϖ^λσ_2κ_2^+(-x). If 2d ≥ a, replace ϖ^λσ_2χ with ϖ^λσ_2χν^-ϰ^-(-x) = ϖ^λσ_2ν^-κ_2^+(-x) where ν^-∈ H_τ_1' is as in the proof of Lemma <ref>. Explicitly if λ = (a,b,c,d), then
ϖ^λσ_2κ_2^+(-x) = ( ϖ^(a,b) , ( [ ϖ^c - x ϖ^c ϖ^c; ϖ^d ϖ^d; ϖ^a-c; ϖ^a-d ] ) ) , ϖ^λσ_2ν^-κ_2^+(-x) = ( ϖ^(a,b) , ( [ ϖ^c; ϖ^d; ϖ^a-c ϖ^a-c; ϖ^a-d - x ϖ^a-d ϖ^a-d ]) )
Now an obvious row operation transforms these into ϖ^λσ_2.
Suppose λ = (a,b,c,d) ∈Λ^α_0 > 0 satisfies 2c + 1 ≥ a and c + d ≥ a. Then
U ϖ^λσ_3χ H_τ_1' = U ϖ^λσ_3 H_τ_1' if χ∈ S_1^-×{ 1 }
U ϖ^s_0(λ) - f_1σ_3 H_τ_1' if χ∈ S_1^+×{ 1 }
Moreover U ' ϖ^λσ_3 H_τ_1' = U' ϖ^λ+ λ_∘ H_τ_1' and U' ϖ^s_0(λ)-f_1σ_3 H_τ_1' = U' ϖ^s_0(λ) - f_1 + λ_∘ H_τ_1' = U ' ϖ^s_0(λ+ λ_∘) H_τ_1'.
This is entirely similar to Lemmas <ref> and <ref>.
The proof in each case
goes by
applying either Lemma <ref> or Corollary <ref> to the coset representatives computed in Corollary <ref>. In the latter case, we will need to determine the fibers of the projection
_μ : ℛ_1( ϖ^μ ) →ℛ_U^†_2(ϖ^_2(μ))
for a given μ = (u_0, u_1, u_2, u_3) ∈Λ above each ϖ^(a,c,d)γ∈ℛ_U_2^†(ϖ^_2(μ)) where γ∈{ 1, v_1 , η_0, η_1}. Let λ = (a, b, c, d ) where b = u_1 if γ∈{ 1, v_1 , η_0} and u_1 - 1 if γ = η_1. Let i ∈{0,1,2,3} be the unique integer
such that _2(σ_i) = γ. Then the fiber consists of cosets of the form U ϖ^λσ_iχ H_τ_1' where χ∈ S^±_1.
Let us first address the case where γ∈{ 1, v_1}.
Note that in each of the projections computed in Corollary <ref>, there is a unique element of the form ϖ^(a,c,d)γ. So the projection of ℛ_1(μ) in each case has a unique element of this form (see Remark <ref>). Lemma <ref> and Corollary <ref> tell us that the fiber of _ 2, μ above each such element is contained in {ϖ^λγ , ϖ^s_0(λ)γ}. If λ≠ s_0(λ), then Corollary <ref> implies
that only one of ϖ^λγ or ϖ^s_0(λ)γ can belong to the fiber.
Thus the fiber is necessarily a singleton.
Now U ϖ^μ H_τ_1', U ϖ^r_1(μ)σ_1 H_τ_1 ' are clearly subsets of U' ϖ^μ H_τ_1' and their projections ϖ^(u_0, u_2, u_3), ϖ^(u_0,u_3,u_2) v_1 respectively have the desired form[that is, Uϖ^(u_0,u_2,u_3) H_τ_1' = U ϖ^(a,c,d) H_τ_1' and U ϖ^(u_0,u_3,u_2)v_1H_τ_1' = U ϖ^(a,c,d)v_1H_τ_1']. So we are free to choose ϖ^μ as the representative element in the fiber if γ = 1 and ϖ^s_2(μ)σ_1 if γ = v_1.
The case where γ = η_0 requires a closer case-by-case analysis. Here we need study the possible values χ∈ S_1^± such that U' ϖ^λσ_iχ H_τ_1 ' = U ' ϖ^μ H_τ_1'.
We let C(λ) = {λ, r_0(λ) , r_1 r_0 (λ), r_0 r_1 r_0( λ) }. In each case, we compute the intersection C(λ) ∩ W_τ_1'μ, using which we read off the possible values of χ from Lemma <ref>, i.e., we only consider χ for which U 'ϖ^λσ_2χ H_τ_1' = U' ϖ^μ' H_τ_1' for μ' ∈ C(λ) ∩ W_τ_1'μ, which is a necessary condition by Proposition <ref>. We then use Lemma <ref> to simplify these cosets if possible. In most cases, this results in a single element in the fiber. For γ = η_1, the analysis is similar but much easier and we will only need Lemma <ref> to decide the elements of the fiber.
∙ μ = (1,1,1,0).
The projection is ℛ_U_2^†(ϖ^(1,1,0) ) = ℛ_U_2^† ( ϖ^(1,0,0) ) = {ϖ^(1,0,0) , ϖ^(1,0,0) v_1 , ϖ^(1,1,0) η_0}. To determine the lift of ϖ^(1,1,0)η_0, let λ : = (1,1,1,0). Then C(λ) = { (1,1,1,0), (1,1,0,0) } and ϖ^(1,1,0,0)∉ U' ϖ^μ H_τ_1' by Lemma <ref>. Lemma <ref> tells us for χ∈ S^±_1, U' ϖ^λσ_2χ H_τ_1 = U' ϖ^μ H_τ_1' only when
χ∈ S^-. But then U ϖ^λσ_2χ H_τ_1 ' = U ϖ^λσ_2 H_τ_1' by Lemma <ref>. Thus ϖ^(1,1,1,0)σ_2 is the unique element of the fiber above ϖ^(1,1,0)η_0.
∙ μ = (1,1,0,1 ).
We have ℛ_U^†_2(ϖ^(1,0,1)) = ℛ_U^†_2(ϖ^(1,1,1)) = {ϖ^(1,1,1) , ϖ^(1,1,1) v_1 , ϖ^(1,0,0)η_0 , ϖ^(1,0,1)η_0 , ϖ^(1,1,1) η_0 , η_1} .
Let λ_1 = (1,1,0,0), λ_2 = (1,1,0,1), λ_3 = (1,1,1,1). Then
C(λ_i) ∩ W_τ_1μ
=
{ (1,1,0,1) }.
For λ_1 and λ_3, the only choice is χ = [ 1; -1 ] and so the unique elements in the fibers above ϖ^(1,0,0)η_0 and ϖ^(1,1,1)η_0 are respectively ϖ^s_0(λ_1)σ_2 = ϖ^(1,0,0,0)σ_2 and ϖ^s_0(λ_3)σ_2 = ϖ^(1,0,1,1)σ_2. For λ_2,
the only choice is χ = [ 1; 1 ]∈ S^-, ϖ^(λ_2)σ_2 is the unique element in the fiber above ϖ^(1,0,1)η_0.
For η_1, the unique element above is ϖ^-f_1σ_3 by Lemma <ref>, since λ_∘ = (1,1,1,1) does not belong to W_τ_1' μ but (1,0,1,1) = s_0( λ_∘) does.
∙ μ = (1,1,0,0).
This is similar to the first case except now we work with ϖ^(1,1,0,0)∈ C(λ) where λ = (1,1,1,0). In this case, the only possible choice for χ = [ 1; -1 ]. The fiber is therefore U ϖ^λσ_2χ H_τ_1' = U ϖ^s_0(λ) σ_2 H_τ_1 and we take the representative ϖ^(1,0,1,0)σ_2.
∙ μ = ( 2,2,1,1) .
The projection is ℛ_U_2^†(ϖ^(2,1,1)) = {ϖ^(2,1,1), ϖ^(2,1,1) v_1, ϖ^(2,1,1)η_0}. Lemma <ref> implies that U ' ϖ^(2,2,1,1)σ_2χ H_τ_1' coincides with U ' ϖ^(2,2,1,1) Hτ_1' for any χ∈ S^±_1. Now if χ∈ S^-, then U ϖ^(2,2,1,1)σ_2χ H_τ_1 ' = U ϖ^(2,2,1,1)σ_2 H_τ_1' by Lemma <ref>. If however χ = [ 1; -1 ], then U ' ϖ^(2,2,1,1)σ_2 H_τ_1' = U' ϖ^(2,0,1,1) H_τ_1'. Thus the fiber above ϖ^(2,1,1)η_0 consists of
U ϖ^(2,2,1,1)σ_2 H_τ_1' and U ϖ^(2,0,1,1)σ_2 H_τ_1'.
These are distinct elements of the fiber, since U ϖ^(2,2,1,1) H_ς_2⊂ U ϖ^(2,2,1,1) J_ς_2, Uϖ^(2,0,1,1) H_ς_2⊂ U ϖ^(2,0,1,1) J_ς_2 by Lemma <ref> and U \ H / J_ς_2≃Λ.
∙ μ = ( 2,1,2,1), (2,1,1,2), (2,1,1,1).
These are handled by Lemma <ref>.
∙ μ = (2,2,0,1)
The projection is ℛ_U_2^†(ϖ^(2,2,1)) = {ϖ^(2,1,2)v_1 , ϖ^(2,2,1), ϖ^(2,2,1)η_0,
ϖ^(2,2,0)η_0, ϖ^(2,1,0)η_0, ϖ^(1,1,0)η_1}. Let λ_1 = (2,2,2,1), λ_2 = (2,2,2,0), λ_3 = (2,2,1,0). Then for χ∈ S^±_1 and any i = 1, 2, 3, the double coset U ' ϖ^λ_iσ_2χ H_τ_1' coincides with
U ' ϖ^(2,2,0,1) H_τ_1' only when χ = [ 1; - 1 ]. This gives the three desired representatives.
As for ϖ^(1,1,0)η_1, the unique element in the fiber is ϖ^s_0(1,1,1,0) - f_1σ_3 = ϖ^(1,-1,1,0)σ_3 by Lemma <ref> since (1,1,1,0) + λ_∘ = (2,2,2,1) ∉ W_τ_1' μ but s_0(1,1,1,0) + s_0 ( λ_∘ ) = ( 2,0,2,1) ∈ W_τ_1'μ.
∙
μ = (3,2,2,2)
We have ℛ(ϖ^(3,2,2)) = ϖ^(2,1,1)ℛ_U_2^†(ϖ^(1,1,1)), so
ℛ_U_2^† (ϖ^(3,2,2)) = {ϖ^(3,2,2), ϖ^(3,2,2)v_1, ϖ^(3,1,1)η_0, ϖ^(3,1,2)η_0, ϖ^(3,2,2)η_0, ϖ^(2,1,1)η_1}.
Let λ_1 = (3,2,1,1), λ_2 = (3,2,1,2), λ_3 =(3,2,2,2). Then C(λ_i) ∩ W_τ_1' μ = { (3,2,2,2) } for all i. For λ_1 and λ_3, Lemma <ref> forces χ to be in S^-_1, and Lemma <ref> allow us to conclude that U ϖ^λ_1σ_2 H_τ_1', Uϖ^λ_2σ_2 H_τ_1' are the only elements of the fibers above ϖ^(3,1,1)η_0, ϖ^(3,2,2)η_0 respectively. For λ_2, the possible choices are χ = [ 1; -1 ] or χ = [ 1 ; x 1 ] for x ∈_F ^×. In the latter case, we have U ϖ^λ_2σ_2χ H_τ_1' = U ϖ^λ_2σ_2ψ H_τ_1'
since the conjugate of ϖ^λσ_2χ by diag(x, 1, x , 1 , x, 1) ∈ U ∩ H_τ_1' equals ϖ^λ_2σ_2ψ. So the fiber above ϖ^(3,1,2)η_0 contains
U ϖ^s_0(λ_2)σ_2 H_τ_1' and U ϖ^λ_2σ_2ψ H_τ_1' .
Since U ϖ^λ_2ψ J_ς_2 = U ϖ^λ_2 J_ς_2, the same argument used in the case μ = (2,2,1,1) shows that the two displayed elements are distinct.
For ϖ^(2,1,1)η_1, the only element in the fiber is ϖ^(2,1,1,1)σ_3 by Lemma <ref>, since (3,2,2,2) = (2,1,1,1) + λ_∘ belongs to W_τ_1'μ but (3,1,2,2) = s_0 (2,1,1,1) + s_0 ( λ_∘ ) does not.
∙ μ = (3,3,1,1)
The projection is ϖ^(2,1,1)·ℛ_U_2^†(ϖ^(1,0,0)) = {ϖ^(3,1,1), ϖ^(3,1,1) v_1, ϖ^(3,2,1)η_0}. For λ = (3,3,2,1), the only possibility is χ = [ 1; -1 ] which gives the representative ϖ^(3,0,2,1)σ_2 in the fiber above ϖ^(3,2,1)η_0.
∙ μ = ( 3 , 2 , 0 , 1 ).
The projection is ℛ(ϖ^(3,3,1)) = {ϖ^(3,1,3)v_1, ϖ^(3,3,1), ϖ^(3,3,1)η_0, ϖ^(3,2,0)η_0, ϖ^(2,2,0)η_1}. Set λ_1 : = (3,2,3,1) and λ_2 : = (3, 2,2,0). Then C(λ_i) ∩ W_τ_1'μ = {μ} for i = 1, 2. In both cases, the only possibility is χ = [ 1; -1 ] which gives the representatives ϖ^(3,1,3,1)σ_2, ϖ^(3,1,2,0)σ_2 above ϖ^(3,3,1)η_0, ϖ^(3,2,0)η_0 respectively.
For ϖ^(2,2,0)η_1, the unique element in the fiber is ϖ^(2,0,2,0)σ_3 since (2,1,2,0) + λ_∘∉ W_τ_1'μ but (3,1,3,1) = s_0 (2,1,2,0) + s_0(λ_∘) ∈ W_τ_1'μ.
§.§ Orbits on U' h H_τ_2'/ H_τ_2'
Let E = _2(U^) denote the projection of the group U^. Thus E ⊂ U_2 ' is the endohoric[a portmanteau of Iwahori and endoscopic] subgroup of all elements whose reduction modulo ϖ lies in 𝐇_2 () = _2() × _^×_2().
For a , b ∈_F, let
γ(u, v) =
[ 1 u v; 1 v; 1; -u 1 ] .
I_2 ' E / E = ⊔_a,b ∈ [] γ (a, b ) E.
Let 𝐍'_2 (resp., 𝐍_2) denote the unipotent radical of the Borel subgroup of _2' (resp., _2) determined by {β_0 , β_2}. Let Z ⊂ E the subgroup of all elements that reduce modulo ϖ to the Borel subgroup of H_2.
Then Z = I_2' ∩ E and so
I_2' E / E ≃ I_2'/ Z ≃𝐍_2' ()/ 𝐍_2() .
Now | 𝐍_2'() | = q^4 and | 𝐍_2() | = q^2 and so | I_2 ' / Z | = q^2 and it is easily seen that the reduction of γ(u,v) for u, v ∈ [] form a complete set of representatives for 𝐍_2'() / 𝐍_2().
Let v_1 be as in <ref> and η_0, η_1 be as in (<ref>), (<ref>). Recall (<ref>) that for k ∈ [], we denote
η̃_k = 0.9[ k 1; k+1 1; -1 k + 1; 1 - k ]
and []^∘ = [] ∖{ -1 }.
1, v_1, η_0, η_1, and η̃_k for k ∈ []^∘ represent pairwise distinct classes in H_2\ H_2'/ E.
This is
handled as in Lemma <ref>. The matrix formulas shown therein for η_i, i =1 , 2 also apply for i = 0 and it is easy to deduce the pairwise distinction for 1, v_1, η_0, η_1 from these formulas. Let us distinguish the class of η̃_k for k ∈ []^∘ from γ∈{ 1, v_1 ,η_0, η_1}. Write h ∈ H as in Notation <ref>. Then
h η̃_k
= [ ak a -b b(k+1); * * *; * * *; * * * ], η_i^-1 h η̃_k
= [ * * * *; * * *; ck c -d d (k+1); * * * ]
for i = 0, 1.
If h η̃_k∈ E, we see from the entries shown above that the first row is a multiple of ϖ which makes (hη̃_k) ∈ϖ_F, a contradiction. Since v_1 just swaps the rows of hη̃_k, the same argument applies to v_1 h η̃_k. Similarly for η_i^-1 h η̃_k. Finally for k ,k' ∈ []^∘ and k ≠ k', we see from the matrix
(η̃_k)^-1 h η̃_k' = [ * a_1 - a * - b - b_1 k'; a k' - a_1 k * - b - b_1 k *; * * *; * * * ]
that (η̃_k)^-1 h η̃_k' lies in E only if a, b ∈ϖ_F. But since η̃_k , η̃_k'∈ U_2' and E ⊂ U_2', we also have h ∈ U_2'. But then a, b ∈ϖ_F implies that sim(h) = ad - bc ∈ϖ_F, a contradiction.
Note that U v_2η̃_-1v_2 E = U η_0 E.
For γ∈{η_0, η̃_0}, the map Λ_2→ U_2ϖ^Λγ E, λ↦ U_2ϖ^λγ E is a bijection.
It is easy to see that H_2∩γ E γ^-1 are contained in certain Iwahori subgroups of H_2. So the Bruhat-Tits decomposition along with the identification U_2ϖ^Λγ E U ϖ^Λ (H_2∩γ E γ^-1) implies the result.
If λ∈Λ_2 is such that β_1(λ), β_2(λ) ∈{ 0 ,1 }, then U_2' ϖ^λ E = U_2' ϖ^λ I_2' E.
The conditions ensure that ϖ^λ I_2' ϖ^-λ⊂ U_2'.
For this subsection only, we let ℛ_E(h) denote the double coset space U_2\ U_2' h E/ E for h ∈ H_2'.
We have
* ℛ_E (ϖ^(0,0,0)) =
{ 1 , v_1 , η_0 , η̃_k | k ∈ []^∘}.
* ℛ_E(ϖ^(1,1,1)) =
[t]1.0
{ϖ^(1,1,1), ϖ^(1,1,1)v_1, ϖ^(1,0,1)η_0, ϖ^(1,1,0)η_0, ϖ^(1,1,1)η_0, η_1}∪
*ϖ^(1, 1, 1)η̃_0, ϖ^(1,0,1)η̃_0, ϖ^(1,0,0)η̃_k k ∈ []^∘
* ℛ_E(ϖ^(2,2,1))
= * ϖ^(2, 2, 1), ϖ^(2,1,2) v_1 , ϖ^(2,2,1) η_0 , ϖ^(2,1,2)η̃_0 , ϖ^(1,1,0) η_1.
By Lemma <ref> and <ref>, the elements listed in part (a), (b), (c) represent distinct classes. We show that these also form a full set of representatives.
Suppose λ∈Λ_2 is such that 0 ≤β_1 ( λ) , β_2(λ) ≤ 1 and write U_2' ϖ^λ I_2' = ⊔_γ∈ΓγĨ_2 for some finite set Γ.
Then by Lemma <ref> and <ref>,
U _2 ' h E = U_2' h I_2' E = ⋃_γ∈Γ U_2γ I_2' E = ⋃_γ∈Γ
u ,v ∈ [] U_2γγ_u, v E
Since (0,0,0), (1,1,1), (2,2,1) satisfy the condition of Lemma <ref>, the decomposition (<ref>) applies. Now we can compute the set Γ for each λ by replacing ϖ^λ with w ∈ W_I_2' of minimal possible length such that U_2' ϖ^λ E = U_2'w E and invoking the analogue of Proposition <ref> for GSp_4.
Since we are only interested in computing the double cosets U_2γγ_u, v E appearing in U_2' w E, we only need to study the cells corresponding to
ε_0 : = w , ε_1 : = r_1w, ε_2 : = r_1 r_2w , ε_3 : = r_1r_2r_1w .
Thus we need to study the classes in U_2\ H_2' / E of {𝒴_ε_i(κ⃗) γ_u,v E | κ⃗∈ []^l(ε_i) , u, v ∈ [] } for each i = 0 , 1, 2 , 3. We will refer to these sets as Schubert cells as well and, as usual, abuse notation to denote them by 𝒴_ε_iE/E.
(a) Here w = 1 and the four cells are
𝒴_ε_0E/ E =
0.9*([ 1 u v; 1 v ; 1 ; - u ]),
𝒴_ε_2 E / E =
0.9*([ a y + au vy - u av + 1; 1 u v; 1 v ; - a - (av+1) ]),
𝒴_ε_1 E / E = 0.9
*([ a au + 1 v av; 1 u v; - u 1; au + 1 -a ]),
𝒴_ε_3 E / E = 0.9
*([ z u z + a_1 a u + a_1 v + 1 vz - a; a au + 1 v av; 1 u v; -a_1 -a_1 u u -a_1v - 1 ])
where a, a_1, u, v , y ∈ [] and z : = y + aa_1. Note that the ε_1-cell is obtained from ε_0-cell by multiplying on the left by y_1(a)v_1. If a = 0, the orbits of U_2 are v_1 times those of ε_0-cell since v_1 normalizes U_2. Similarly we can assume that a ≠ 0 in ε_2-cell and a_1≠ 0 in ε_3-cell.
Consider the ε_0-cell. Conjugation by v_2 swaps the entries u, v and row column operations arising from U_2, E allow us to make at least one of u, v zero. So say u = 0. Then we obtain either identity or η_0 as representative from this cell. Next consider the ε_1-cell. As observed above, the case a = 0 leads to orbits of v_1 and v_1η_0 and we have U_2 v_1η_0 E = U_2η̃_0 E. If a ≠ 0, we apply the following sequence of row-column operations:
0.9([ a au + 1 v av; 1 u v; - u 1; au + 1 -a ])⟶0.9([ a au + 1 av; 1 u -v/a v; - u 1; au + 1 -a ])⟶0.9([ a au + 1 av; 1 u uv ; - u 1; au + 1 -a ])
⟶0.9([ a au + 1 a uv av; 1 u ; - u 1; au + 1 -a ])⟶0.9([ a au + 1 ; 1 u ; - u 1; au + 1 -a ])⟶0.9([ 1 au + 1 ; 1 au ; - au 1; au + 1 -1 ]).
Let us denote k = au. The structure of Y allows us to restrict k ∈ []. Conjugating this matrix by v_β_0v_β_2 and scaling by -1 gives us the matrix η̃_k if k ∈ []^∘, i.e., au ≠ - 1. If au = -1 however, then conjugating by v_2 further gives us η_0. So the ε_1-cell decomposes into U_2-orbits of v_1, η_0 and η̃_k for k ∈ []^∘.
For the case of ε_2-cell and a ≠ 0, use
0.9([ a y + au vy - u av + 1; 1 u v; 1 v ; - a - (av+1) ])⟶0.9([ a -auv-u av + 1; 1 u v; 1 v ; - a - (av+1) ])⟶0.9([ a av + 1; 1 u uv + u/a v; 1 v ; - a - (av+1) ])
⟶0.9([ a av + 1; 1 v; 1 v ; - a - (av+1) ])⟶0.9([ 1 v ; a (av + 1) ; -a - (av + 1); 1 v ])⟶0.9([ v 1 ; (av+1) a ; - a av + 1; 1 - v ])
and multiply on the left by diag(a, 1, 1, a ) and diag(1, a^-1 , a^-1 ,1) on the right to arrive at the same situation as the ε_1-cell. Finally the case for ε_3-cell with a_1≠ 0, use
0.9([ z u z + a_1 a u + a_1 v + 1 vz - a; a au + 1 v av; 1 u v; -a_1 -a_1 u u -a_1v - 1 ])⟶0.9([ a_1 a u + a_1 v + 1 - a; a au + 1 v av; 1 u v; -a_1 -a_1 u u -a_1v - 1 ])
0.9([ a_1 a u + a_1 v + 1 - a; 1 (au + a_1v)/a_1 -a/a_1; u v; -a_1 -a_1 u u -a_1v - 1 ])⟶0.9([ a_1 a u + a_1 v + 1 ; 1 (au+ a_1v)/a_1 ; 1 u (au + a_1 v ) / a_1; -a_1 -a_1 u u - (au + a_1v + 1 ) ]) .
Next substitute k_1 = au + a_1 v and use
0.8([ a_1 k_1 + 1 ; 1 k_1/a_1 ; 1 u k _ 1 / a_1; -a_1 -a_1 u u - k_1 - 1 ])⟶0.8([ a_1 k_1 + 1 ; 1 k_1 /a_1 ; 1 - u ( k_1 + 1 ) / a_1 k_1 / a_1; -a_1 -a_1 u u - k_1 - 1 ])⟶0.8([ a_1 k_1 + 1 ; 1 k_1 / a_1 ; 1 k_1 / a_1; -a_1 -a_1 u -uk_1 - k_1 - 1 ])
⟶0.8([ a_1 k_1 + 1 ; 1 k_1 / a_1 ; 1 k_1 / a_1; -a_1 - k_1 - 1 ])⟶0.85([ a_1 k_1 + 1 ; 1 k_1 / a_1 ; 1 k_1 / a_1; -a_1 - k _1 - 1 ])⟶0.8([ k_1 + 1 a_1 ; k_1/a_1 1 ; 1 - k_1 / a_1; - a_1 k _1 + 1 ]) .
Now multiply by diag(-1,-a_1,a_1,1) on the left, diag(1,-a_1^-1,-a_1^-1,1) on the right and use the substitution k = -k_1 - 1. If a_1 = 0 in the ε_3-cell, then one gets v_1, 1 , v_1η_0, v_1η̃_k and the latter two can be replaced with η̃_0, η̃_k' where k' = -(k+1).
(b) We have w = ρ _2 and the four cells are
𝒴_ε_0E/ E =
0.85*([ - u 1; 1 ; ϖ v ϖ ; ϖ u ϖ v ϖ ]),
𝒴_ε_2 E / E =
0.85*([ ϖ u ϖ y - au a + v ϖ; - u 1; 1 ; - ϖ - a - v ϖ ]),
𝒴_ε_1 E / E = 0.85
*([ 1 - au a; - u 1; ϖ u ϖ v ϖ; -a ϖ (1-au) ϖ v ϖ -a v ϖ ]),
𝒴_ε_3 E / E = 0.85
*([ - a ϖ ( 1 - au) ϖ a_1 + v ϖ - u z z - av ϖ; 1 - au a; -u 1; - ϖ - u ϖ a_1 u -a_1 - v ϖ ])
where a, a_1 , y , u , v ∈ [] and z = y + a a_1 in the ε_3-cell. Using analogous arguments on these cells, one deduces that the U_2-orbits on
* 𝒴_ε_0E/ E are represented by ϖ^(1,1,1)v_1, ϖ^(1,0,1)η̃_0, ϖ^(1,1,1)η̃_0,
* 𝒴_ε_1E/E are represented by ϖ^(1,1,1), ϖ^(1,1,1)η_0, ϖ^(1,1,0)η_0 when a equals zero[these are obtained by applying v_1 to the representatives of the ε_1-cell] and ϖ^(1,0,0)η̃_k for k ∈ [] when a is non-zero,
* 𝒴_ε_2E/E are represented by ϖ^(1,1,1), ϖ^(1,1,1)η_0, ϖ^(1,1,0)η_0 when a equals zero and η_1 when a ≠ 0
* 𝒴_ε_3E/E are represented by ϖ^(1,1,1) v_1, ϖ^(1,0,1)η̃_0, ϖ^(1,1,1)η̃_0, η̃_k for k ∈ [] when a_1 equals zero and η_1 when a_1≠ 0.
(c) In this case, w = v_0ρ^2_2 and the four cells are
𝒴_ε_0E/ E =
0.85*([ 1 ; ϖ v ϖ ; ϖ ^ 2 u ϖ^2 x ϖ v ϖ^2; u ϖ - ϖ ]),
𝒴_ε_2 E / E =
0.85*([ y ϖ a + (u + vy)ϖ - ϖ; 1 ; ϖ v ϖ ; - ϖ^2 - ( a + u ϖ ) - (x + av ) ϖ - v ϖ ^2 ]),
𝒴_ε_1 E / E = 0.85
*([ ϖ a + v ϖ ; 1 ; u ϖ - ϖ; ϖ^2 u ϖ ^2 ( x - a u ) ϖ ( a + v ϖ ) ϖ ]),
𝒴_ε_3 E / E = 0.85
*([ ϖ^2 ( a _1 + u ϖ ) ϖ z ( a + v ϖ ) ϖ; ϖ a + v ϖ ; 1 ; - a_1 - u ϖ ϖ ])
where a, a_1, x , y, u, v ∈ [] and z denotes y + aa_1 + ( x - au + a_1 v ) ϖ in the ε_3-cell. From these, one deduces that the orbits of U_2 on
* 𝒴_ε_0E/E are represented by ϖ^(2,2,1), ϖ^(2,2,1)η_0,
* 𝒴_ε_1E/E are represented by ϖ^(2,1,2) v_1, ϖ^(2,1,2)η̃_0 when a = 0 and ϖ^(1,1,0)η_1 when a ≠ 0,
* 𝒴_ε_2E/E are represented by ϖ^(2,1,2) v_1, ϖ^(2,1,2)η̃_0 when a = 0 and ϖ^(1,1,0)η_1 when a ≠ 0
* 𝒴_ε_3E/E are represented by ϖ^(2,2,1), ϖ^(2,2,1)η_0 when both a, a_1 are 0 and η_1 when at least one of a, a_1 is non-zero.
The result above implies that (the reductions of) 1, v_1 and η̃_k for k ∈ [] form a complete system
of representatives for 𝐇_2() \𝐇_2'() / 𝐇_2().
ℛ_E(ϖ^(2,1,2)) = {ϖ^(2,2,1) v_1 , ϖ^(2,1,2), ϖ^(2,0,1)η̃_0 , ϖ^(2,1,2)η_0 , ϖ^(1,0,1)η_1}
First note that U_2' ϖ^(2,1,2) = U_2' v_0 v_1ρ_2^2. Since v_1 normalizes E and ρ_2^2∈ H' is central, U_2 ' v_0 v_1ρ_2^2 E = U_2 ' v_0ρ_2^2 E v_1. So the result follows by Proposition <ref> (c).
Now we address the lifts of these cosets to H'. Let S_1^± be as in <ref>
Suppose λ is in Λ^+. Then for any χ∈ S_1^±,
U ϖ^λχ H_τ_2' ∈{ U ϖ^λ H_τ_2' , U ϖ^s_0(λ)H_τ_2' } and U ϖ^r_1(λ)θ_1χ H_τ_2' ∈{ Uϖ^r_1(λ)θ_1 H_τ_2' , U ϖ^s_0r_1(λ)θ_1 H_τ_2' }.
The first part is proved in the same manner as Lemma <ref>. Since θ_1 = σ_1 = w_2 normalizes U, commutes with χ and w_α_0, and satisfies w_2ϖ^λ = ϖ^r_1(λ) w_2, the second claim also follows easily.
Let λ∈Λ^α_0 > 0 and χ = ( χ_1, 1 ) where χ_1∈ S^±_1.
* Suppose (β_1 + β_2)(λ) ≥ 0. Then
U ' ϖ^λθ_2χ H_τ_2' = U' ϖ^λ H_τ_2' if χ_1 = 1 or if χ_1∈ S^-_1∖{ 1 }, β_0(λ) ≥ 0
U ' ϖ^s_0(λ)
H_τ_2' if χ_1∈ S^-_1∖{ 1 }, β_0(λ) < 0 or if χ_1∈ S_1^+
* Suppose β_1 ( λ) ≤
0. Then
U ' ϖ^λθ̃_0χ H_τ_2' = U ' ϖ^r_1(λ) H_τ_2 ' if χ_1 = 1 or if χ_1∈ S_1^-∖{1 }, β_2(λ) ≥ 0,
U ' ϖ^s_0r_1(λ) H_τ_2' if χ_1∈ S_1^-∖{ 1 }, β_2(λ) < 0 or if χ_1∈ S^+_1
* Suppose β_1( λ) = 0. Then for any k ∈ [],
U' ϖ^λθ̃_kχ H_τ_2' = U' ϖ^λ H_τ_2' if χ_1 = 1 or if χ_1∈ S_1^-∖{ 1 }, β_0(λ) ≥ 0
U' ϖ^s_0(λ) H_τ_2' if χ_1∈ S^-_1∖{ 1 }, β_0(λ) < 0 or if χ_1∈ S^+ .
* Suppose (β_1 + β_2)(λ) ≥ 1. Then
U ' ϖ^λθ_3χ H_τ_2' = U' ϖ^λ H_τ_2' if χ_1 = 1 or if χ_1∈ S^-_1∖{ 1 }, β_0(λ) ≥ 0
U ' ϖ^s_0(λ
)
H_τ_2' if χ_1∈ S^-_1∖{ 1 }, β_0(λ) < 0 or if χ_1∈ S_1^+
In each of the parts (a), (c) and (d), the assumption made implies the equality U' ϖ^λγ = U' ϖ^λ where γ denotes θ_2, θ̃_k, θ_3 respectively. In part (b), the assumption implies that U' ϖ^λθ̃_0 = U'ϖ^r_1(λ). Using this and the fact that the matrix ϰ^-(-x) in (<ref>) lies in H_τ_2' for x ∈_F, one easily deduces each of the claims.
For ℛ_2(1) (resp., ℛ_2(ϖ^(4,2,2,3))), the result is obtained by applying Lemma <ref> to Proposition <ref> (resp., Corollary <ref>). The other two cases are handled by studying the fibers of the projection
_μ : ℛ_2(ϖ^μ) →ℛ_E(ϖ^_2(μ))
using Corollary <ref>. That is, if μ∈{ (3,2,1,2), (4,3,1,2) } and ϖ^(a,c,d)γ lies in ℛ_E(ϖ^_2(μ)) for some γ∈{ 1, v_1, η_0, ϖ^-(1,1,1)η_1, η̃_k | k ∈ []^∘}, the fiber _μ above ϖ^(a,c,d)γ consists of all elements of the form ϖ^λγ̂χ where γ̂∈{1, θ_1, θ_2, θ_3, θ̃_k | k ∈ []^∘} satisfies _2(γ̂) = γ, the cocharacter λ = (a,b,c,d) ∈Λ^α_0 > 0 is such that b = _1'(ϖ^μ) and χ∈ S_α_0(μ)^± is arbitrary. Note that α_0(μ) = 1 for both μ.
∙ μ = (3,2,1,2)
The projection is ℛ_E(ϖ^(3,1,2) ) = ℛ_E(ϖ^(3,2,2)) = ϖ^(2,1,1)ℛ_E(ϖ^(1,1,1)) which by Proposition <ref>(b), equals
{ϖ^(3,2,2), ϖ^(3,2,2)v_1, ϖ^(3,1,2)η_0, ϖ^(3,2,1)η_0, ϖ^(3,2,2)η_0, ϖ^(2,1,1) η_1, ϖ^(3, 2, 2)η̃_0, ϖ^(3,1,2)η̃_0, ϖ^(3,1,1)η̃_k | k ∈ []^∘}
By Lemma <ref> and Proposition <ref>, the fibers above ϖ^(3,2,2) and ϖ^(3,2,2)v_1 are singletons. Since ϖ^(3,2,1,2), ϖ^(3,2,2,1)σ_1 clearly belong to ℛ(ϖ^μ), we choose these as the representative elements above the corresponding fibers. For the remaining elements of ℛ_E(ϖ^(3,2,2)), one deduces from Lemma <ref> that χ must be either identity or in S_1^+ in each case (but not both), and the corresponding unique representative in the fiber is easily obtained.
∙ μ = (4,3,1,2)
The projection is ϖ^(2,1,1)·ℛ_E(ϖ^(2,2,1)) = {ϖ^(4,3,2), ϖ^(4,2,3)v_1, ϖ^(4,3,2)η_0, ϖ^(4,2,3)η̃_0, ϖ^(3,2,1)η_1}. Again, we decide the lifts for ϖ^(4,3,2), ϖ^(4,2,3)v_1 using Lemma <ref> and use Lemma <ref> to show that χ∈ S^-_1 is the only possible choice for each of the remaining representatives in ℛ_E(ϖ^(4,3,2) ).
§ CONVOLUTIONS
Recall that X denotes the topological vector space Mat_2 × 1(F) and 𝒮 = 𝒮_𝒪, X denotes the set of all locally constant compactly supported 𝒪-valued functions on X. The space X admits a continuous right action of H_1 = _2(F) via left matrix multiplication by inverse and we extend this action to H via _1 : H → H_1. These induce left actions of H_1 and H on 𝒮. If 𝔭 is an ideal of 𝒪 and ξ_1, ξ_2∈𝒮, we write ξ_1≡ξ_2 mod 𝔭 if ξ_1(x) - ξ_2(x) ∈𝔭 for all x ∈ X. If V is a compact open subgroup of H_1 or H, we let 𝒮(V) denote the space of V-invariants of 𝒮.
If m, n are integers, we let
X_m,n = {[ x; y ] | x ∈ϖ^m_F , y ∈ϖ^n_F}
which are compact open subset of X. We denote
ϕ_(m,n) : = (X_m,n ) , ϕ̅_(m,n) = ϕ_(-m,-n).
We let z_0 denote the inverse of the central element ρ_1^2 = diag(ϖ, ϖ) ∈ H_1.
For n a positive integer, we let U_ϖ^n denote the subgroup of all elements in U whose reduction modulo ϖ^n is identity in 𝐇(/ϖ^n ). For λ∈Λ, we define the depth of λ to be dep(λ) : = max{±α_0(λ) , ±β_0(λ) , ±β_2(λ) }. Then for λ of depth at most n, ϖ^-λ U_ϖ^nϖ^λ⊂ U.
We will often write h = (h_1, h_2, h_3) ∈_2(F) ×_F^×_2(F) ×_F^×_2(F) ⊂GSp_6(F) as
h = ( [ a b; a_1 b_1; a_2 b_2; c d; c_1 d_1; c_2 d_2 ] ) or h = ( ( a b
c d ) , ( a_1 b_1
c_1 d_1 ) , ( a_2 b_2
c_2 d_2 ) ) .
If we wish to refer to another element in H, we will write h ' and all its entries will be
adorned with a prime. Given a, b ∈, we write ϖ^(a,b) to denote diag(ϖ^b , ϖ^a-b ) ∈_2(F).
§.§ Action of _2
It will be useful to record a few general results on convolution of Hecke operators of _2(F) with ϕ. Let 𝒯 _u,v denote the double coset Hecke operator [U_1 diag(ϖ^u, ϖ^v )
U_1].
It acts on 𝒮(U_1) and in particular, on ϕ∈𝒮(U_1). It is clear that 𝒯_u,v(ϕ) = 𝒯_v,u(ϕ) and 𝒯_u,u(ϕ) = ϕ_(u,u).
𝒯_u,v(ϕ) = ϕ_(v,v) + q^u-vϕ_(u,u) + ∑_ i=1^u-v-1 (q^i - q^i-1) ϕ_(i+v, i+v) when u > v. Here the sum in the expression is zero if u - v = 1.
Let ξ = 𝒯_u,v(ϕ) = ∑_γγ·ϕ where γ runs over representatives of U_1diag(ϖ^u , ϖ^v ) U_1 / U _1. Translating everything by (z_0)^v, it suffices to establish our formula when v = 0. Then u ≥ 1 and
U_1[ ϖ^u; 1 ]U_1 / U_1 = ⊔_κ∈ []_u [ ϖ^u κ; 1 ] U_1⊔ ⊔_κ∈ []_u-1 [ 1; ϖκ ϖ^u ] U_1 .
From the decomposition above, we see that ξ(v⃗) = q^i whenever v ∈ ( X_i,i∖ X_i,i+1 ) ∪ ( X_i,i+1∖ X_i+1,i+1 ) = X_i,i∖ X_i+1,i+1 for all i ∈{ 0, 1 , …, u - 1 }
and that ξ(x⃗ ) = q^u + q^u-1 when x⃗∈ X_u,u.
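For instance, specializing the lemma to the two smallest cases gives 𝒯_1,0(ϕ) = ϕ_(0,0) + q ϕ_(1,1) (the sum being empty) and 𝒯_2,0(ϕ) = ϕ_(0,0) + (q-1) ϕ_(1,1) + q^2 ϕ_(2,2). In particular, modulo q - 1 both 𝒯_1,0(ϕ) and 𝒯_0,1(ϕ) are congruent to ϕ_(0,0) + ϕ_(1,1), a congruence used repeatedly in the convolution computations below.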
Let 𝒯_u,v,* : = 𝒯_-u,-v = [U_1diag(ϖ^u, ϖ^v)U_1]_* denote the dual (or transpose) of 𝒯_u,v.
If u ≠ v, then 𝒯_u,v,* (ϕ) ≡ (z_0^u + z_0^v) ·ϕ modulo q - 1, and 𝒯_u,u,*(ϕ) = z_0^u·ϕ.
This is clear by Lemma <ref>.
Let I_1^+ denote the Iwahori subgroup of U_1 = _2(_F) of upper triangular matrices and I_1^- the Iwahori subgroup of lower triangular matrices. For u, v integers, let ℐ_u,v^± denote the double coset Hecke operator [I_1^±diag(ϖ^u,ϖ^v) U_1 ].
Let u, v be integers. Then
ℐ_u,v^+ (ϕ) = q^u-vϕ_(u,u) + ∑_i=0^u-v-1
q^iρ_1^2(i+v)· ( ϕ - ϕ_(0,1) ) if u ≥ v
q^v-u-1ϕ_(v-1, v) + ∑_i=0^v-u-2
q^iρ_1^2(i+u)· ( ϕ_(0,1) - ϕ_(1,1)) if u < v
and
ℐ_u,v^- (ϕ) = q^v-uϕ_(v,v) + ∑_i=0^v-u-1
q^iρ_1^2(i+u)· ( ϕ - ϕ_(1,0)) if u ≤ v
q^u-v-1ϕ_(u,u-1) + ∑_i=0^u-v-2 q^iρ_1^2(i+v) · (ϕ_(1,0) - ϕ_(1,1)) if u > v
where ρ _1^2 = z_0^-1 = [ ϖ ; ϖ ].
The first equality is established in the same manner as Lemma <ref> using the decompositions
I^+_1[ ϖ^u; 1 ] U_1 / U_1 = _κ∈ []_u[ ϖ^u κ; 1 ], I_1^+[ 1; ϖ^v ] U_1 / U_1 = _κ∈ []_v-1[ 1; κϖ ϖ^v ]
which hold for integers u ≥ 0, v ≥ 1. The second is obtained from the first by noting that I_1^-, I_1^+ are conjugates of each other by the reflection matrix [ 1; 1 ].
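As a quick check of the first formula, taking (u,v) = (1,0) gives ℐ^+_1,0(ϕ) = q ϕ_(1,1) + (ϕ - ϕ_(0,1)); this also follows directly from the q cosets [ ϖ κ; 1 ] U_1 with κ∈ []: the sum of their translates of ϕ equals 1 on X_0,0∖ X_0,1, equals q on X_1,1 and vanishes elsewhere.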
§.§ Convolutions with restrictions of 𝔥_0
This subsection is devoted to computing 𝔥_ϱ_i,*(ϕ) for i = 0 ,1 ,2. Recall that
ϱ_0 =
1.1([ 1 ; 1 ; 1 ; 1; 1; 1 ])
, ϱ_1 = 1.1([ ϖ ; ϖ 1; ϖ 1; 1; 1; 1 ]) , ϱ_2 = 1.1([ ϖ ; ϖ^2 1; ϖ^2 1; ϖ; 1; 1 ]).
Modulo q - 1,
* 𝔞_ϱ_0,*(ϕ) ≡ ( 6 + 16 z_0 + 6z_0^2 ) ϕ
* 𝔟_ϱ_0,*(ϕ) ≡ 4 ( 1 + z_0 ^3 + 6 z_0 + 6 z_0^2 ) ϕ
* 𝔠_ϱ_0,*(ϕ) ≡ ( ( z_0 + 1 )^4 - 2 z_0^2 ) ϕ
and 𝔥_ϱ_0,*( ϕ ) ≡ 0
For λ = (a,b,c,d) ∈Λ, the map
U ϖ^λ U / U ⟶ ( U_1ϖ^(a,b) U_1/ U_1 ) × (U_1ϖ^(a,c)U_1/U_1) × (U_1ϖ^(a,d) U_1 / U_1)
(h_1, h_2, h_3 ) U ⟼ (h_1U_1 , h_2 U_1 , h_3 U_1)
is a bijection. Corollary <ref> implies that
[U ϖ^λ U / U ] ( ϕ ) = | U_1ϖ^(a,c) U_1/U_1 | · | U_1ϖ^(a,d)U_1/ U_1 | · (z_0^b + z_0^a-b) ϕ .
Now | U_1ϖ^(u,v) U_1/U_1 | ≡ 1 or 2 modulo q-1, depending on whether 2v - u = 0 or not. So parts (a)-(c) are all easily obtained.
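For example, for λ = (1,1,1,1) both indices are congruent to 2 (as 2c - a = 2d - a = 1 ≠ 0) and z_0^b + z_0^a-b = 1 + z_0, so [U ϖ^(1,1,1,1) U ](ϕ) ≡ 4(1+z_0)ϕ modulo q - 1; this accounts for the term -4(1+z_0^3)(1+z_0)ϕ appearing in the expression for 𝔥_ϱ_0,*(ϕ) below.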
Now recall from (<ref>) that
𝔥_ϱ_0,*(ϕ) =
(1 + ρ^8) (U) - (1+ρ^6) ( U
ϖ^(1,1,1,1) U ) +(1+ 2 ρ^2 + ρ^4) 𝔞_ϱ_0 -(1+ρ^2) 𝔟_ϱ_0 + 𝔠_ϱ_0
Using our formulas, we find that
𝔥_ϱ_0,*(ϕ) ≡ ( ( 1+ z_0^4 ) - 4 ( 1 + z_0 ^3 ) ( 1 + z_0 ) + ( 1 + z_0)^2 ( 6 + 16 z_0 + 6z_0^2) - 4( 1 + z_0 ) ( 1 + z_0^3 + 6z_0 + 6z_0^2 )
+ (z_0+1)^4 - 2z_0^2 ) ϕ
and one verifies that the polynomial expression in z_0 above is identically zero.
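Indeed, expanding each product gives (1+z_0)^2(6+16z_0+6z_0^2) = 6+28z_0+44z_0^2+28z_0^3+6z_0^4, 4(1+z_0)(1+z_0^3+6z_0+6z_0^2) = 4+28z_0+48z_0^2+28z_0^3+4z_0^4 and 4(1+z_0^3)(1+z_0) = 4+4z_0+4z_0^3+4z_0^4, so the coefficients of 1, z_0, z_0^2, z_0^3, z_0^4 in the full expression are 1-4+6-4+1, -4+28-28+4, 44-48+6-2, -4+28-28+4 and 1-4+6-4+1, all of which vanish.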
Let 𝐏 : = _2×__m_2 and define embeddings
_ϱ_1 : 𝐏 ↪𝐇 , _ϱ_1 : 𝐏 ↪𝐇
(γ_1, γ_2) ↦( ∂γ_1 ∂^-1 , γ_2 , γ_2 ) (γ_1, γ_2 )
↦( ∂γ_1 ∂^-1, γ_2 , γ_2 )
where = [ 1; 1 ] and ∂ : = ρ_1 = [ ϖ; 1 ]. We let 𝒳_ϱ_1 denote the common image _ϱ_1(P^∘ ), _ϱ_1(P^∘ ). We denote by M_ϱ_1 (resp., M_ϱ_1') denote the subgroup of U_ϖ in which the first and second (resp., first and third) components are identity. We also let
_2,3 : →𝐏 (h_1, h_2, h_3) ↦ (h_2, h_3 ) .
Finally, we let
𝒴_ϱ_1, L_ϱ_1, L'_ϱ_1, P_ϖ^∘ denote respectively the projections of 𝒳_ϱ_1, M_ϱ_1, M'_ϱ_1, U_ϖ under _2,3.
H_ϱ_1 = 𝒳_ϱ_1 M_ϱ_1 = 𝒳_ϱ_1 M _ ϱ_1 '.
Writing h ∈ H as in <ref>, we see that
ϱ_1 ^-1 h ϱ_1 =
[ a bϖ ; a_1 -c_2 b_1 - c_2 ϖ a_1 - d_2 ϖ; -c_1 a_2 a_2 - d_1 ϖ b_2 - c_1 ϖ; c ϖ d ; c_1 ϖ d_1 c_1; c_2 ϖ c_2 d_2 ]
From this, one immediately sees that h = (h_1, h_2, h_3 ) ∈ H_ϱ_1 if and only if ∂^-1 h_1∂, h_2, h_3∈ U_1 and the modulo ϖ reductions of h_2, h_3 coincide. So H_ϱ_1⊃𝒳_ϱ_1, M_ϱ_1, M_ϱ_1'. To see that H_ϱ_1 equals the stated products, we note that for any h = (h_1, h_2, h_3 ) ∈ H_ϱ_1, ι_ϱ_1 ( ∂^-1 h_1^-1∂ , h_2 ^-1 ) · h ∈ M_ϱ_1 and _ϱ_1(∂^-1 h_1 ^-1∂ , h_3 ^-1 ) · h ∈ M_ϱ_1'.
Modulo q - 1,
* 𝔞_ϱ_1,*(ϕ) ≡ 2 ( 1 + 3z_0 + z_0^2)
ϕ,
* 𝔟_ϱ_1,*(ϕ) ≡ ( 1 + 10 z_0 + 10 z_0^2 +
z_0^3 )
ϕ
* 𝔠_ϱ_1,* (ϕ) ≡ 2z_0 ( 1 + z_0 )
ϕ
and 𝔥_ϱ_1,*(ϕ) ≡ 0.
For λ = (a,b,c,d) ∈Λ, let ξ_λ = [ U ϖ^λ H_ϱ_1]_*( ϕ). Then Lemma <ref> implies that
ξ_λ = |
P^∘\ P^∘ϖ^(a,c,d)_2,3(H_ϱ_1) | · [ U_1ϖ^(a,b)∂ U_1∂^-1]_* (ϕ )
Now Corollary <ref> implies that
[ U_1ϖ^(a,b)∂ U_1∂^-1]_* (ϕ ) = ∂·𝒯_b+1, a - b, *(ϕ)
≡ (z_0^b+1 + z_0^a-b ) ϕ _(0,1) if a ≠ 2b + 1
z_0^b+1·ϕ_(0,1) if a = 2b + 1
If moreover |β_0(λ)|,| β_2(λ) | ∈{ 0,1 }, then P^∘ϖ^(a,c,d)_2,3( H_ϱ_1 ) simplifies to P^∘ϖ^(a,c,d)𝒴_ϱ_1. So in this case,
| P^∘\ P^∘ϖ^(a,c,d)_2,3(H_ϱ_1) | = [ 𝒴 _ϱ_1 : 𝒴
_ϱ_1∩ P^∘_(a,c,d) ]
where P_(a,c,d) ^ ∘ : = ϖ^-(a,c,d) P^∘ϖ^(a,c,d). Since 𝒴 _ϱ_1≃_2(_F ) = U_1 (via the projection 𝐏→_1, (γ_1, γ_2 ) ↦γ_1), the index on the RHS of (<ref>) can be found by comparing the intersection 𝒴_ϱ_1∩ P^∘_(a,c,d) with the Iwahori subgroups I_1^± in U_1. One easily sees that the RHS of (<ref>) is congruent to 1 or 2 modulo q - 1, and that the former happens if and only if β_0(λ) = β_2(λ) = 0. This takes care of the index calculations for all the Hecke operators in parts (a)-(c) except for (U ϖ^(2,1,2,0) H_ϱ_1 ). Here, we invoke <cit.>. More precisely, we use that _2,3(H_ϱ_1) =
P_ϖ^∘𝒴_ϱ_1 and the result in loc.cit. implies that
| P^∘\ P^∘ϖ^(2,2,0) P_ϖ^∘𝒴 _ϱ_1 | = e^-1· | P ^ ∘\ P^∘ϖ^(2,2,0) P^∘ _ϖ | · | ( P_ϖ ^∘∩𝒴 _ϱ_1 )
\𝒴
_ϱ_1 |
where e = [ 𝒴_ϱ_1 P_ϖ^∘∩ P_(2,2,0)^∘ : P_ϖ^∘∩ P_(2,2,0)^∘ ]. Now P_ϖ ^∘∩ Y_ϱ_1 is identified with I_1^±, and [U_1 : I_1∩ I_1^-] = q ( q+ 1 ) and similarly | P^∘\ P ^∘ϖ^(2,2,0) P^∘_ϖ | = q^2. Moreover 𝒴_ϱ_1 P_ϖ^∘∩ P_(2,2,0)^∘ = (𝒴_ϱ_1∩ P_(2,2,0)^∘ ) · ( P_ϖ^∘∩ P_(2,2,0)^∘ ), which implies that
e = [ 𝒴 _ϱ_1∩ P_(2,2,0)^∘ : 𝒴 _ϱ_1∩ P_ϖ^∘∩ P_(2,2,0)^∘ ] .
from which it is not too hard to see that e = q.
It follows that ξ_(2,1,2,0)≡ 2 z_0·ϕ. Now recall from (<ref>) that
𝔥_ϱ_1 =
- ( 1 + ρ^6 ) (U H_ϱ_1) + ( 1 + 2 ρ^2 + ρ^4 ) 𝔞_ϱ_1 - ( 1 + ρ^2 ) 𝔟_ϱ_1 + 𝔠_ϱ_1
So we see that
𝔥_ϱ_1, * ( ϕ ) ≡ ( - ( 1 + z_0^3) ( 1 + z_0 ) + (1 + z_0)^2
( 2 + 6z_0 + 2 z_0 ^2 ) - ( 1 + z_0 ) ( 1 + 10 z_0 + 10 z_0 ^2 + z_0 ^3 )
+ 2 z_0 ( 1 + z_0 ) ^ 2 )
which is zero since the polynomial expression in z_0 vanishes.
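Explicitly, factoring out (1+z_0) from every term, the remaining bracket is -(1+z_0^3) + (2+8z_0+8z_0^2+2z_0^3) - (1+10z_0+10z_0^2+z_0^3) + (2z_0+2z_0^2), whose coefficients -1+2-1, 8-10+2, 8-10+2 and -1+2-1 all vanish.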
Let 𝐏, be as in Notation <ref>, _ϱ_2 : 𝐏↪𝐇 be the embedding given by (γ _1, γ _2)
↦ (
γ _1
, γ _2 , γ_2 ) and 𝒳_ϱ_2 = _ϱ_2(P^∘ ).
Let pr_2,3 : 𝐇→𝐏 be the projection as before and let 𝒴_ϱ_2, P_ϖ^2^∘ denote respectively the projections of 𝒳_ϱ_2, U_ϖ^2 under _2,3.
H _ ϱ_2 = 𝒳_ϱ_2 U_ϖ^2.
If h ∈ H is written as in Notation <ref>, then
ϱ_2^-1 h ϱ_2 =
[ a b ; a_1 -c_2 b_1 - c_2 ϖ^2 a_1 - d_2 ϖ ^2; -c_1 a_2 a_2 - d_1 ϖ ^2 b_2 - c_1 ϖ ^2; c d ; c_1 ϖ d_1 c_1; c_2 ϖ c_2 d_2 ] .
Now an argument similar to Lemma <ref> yields the desired factorization.
Modulo q - 1,
𝔥_ϱ_2,*(ϕ) ≡ 0.
Recall from (<ref>) that
𝔥_ϱ_2 = ( 1 + ρ^2 + ρ^4 ) (U H _ϱ_2 ) - ( 1 + ρ ^2 ) ( U ϖ^(1,1,0,1) H_ϱ_2 ) + ( U ϖ^( 2,2,1,1) H_ϱ_2 ) + ( U ϖ^(2,2,1,0) H_ϱ_2 ) .
If λ = ( a , b, c, d ) ∈Λ has depth at most 2, then Uϖ^λ H_ϱ_2 = U ϖ^λ𝒳 _ϱ_2 by Lemma <ref> and so
[U ϖ^λ H_ϱ_2]_*( ϕ ) = | P^∘\ P^∘ϖ^(a,c,d)𝒴_ϱ_2 | ·𝒯_b,a-b,*(ϕ) .
Now 𝒯_b, a-b,*(ϕ) is computed (modulo q-1) by Corollary <ref>. As 𝒴_ϱ_2≃_2(_F ), | P^∘\ P^∘ϖ^(a,c,d)𝒴_ϱ_2 | ≡ 1 or 2 modulo q - 1, depending on whether 2c - a = 2d - a = 0 or not. So one finds that
[U H_ϱ_2]_*(ϕ) = ϕ,
[U ϖ^(1,1,0,1) H_ϱ_2 ]_*( ϕ) ≡2 ( 1 + z_0 ) ϕ ,
[U ϖ^(2,2,1,1) H_ϱ_2]_*(ϕ) ≡ ( 1 + z_0^2) ϕ ,
[ U ϖ^(2,1,2,0) H_ϱ_2]_*(ϕ) ≡ 2 z_0·ϕ .
From these, the claim easily follows.
§.§ Convolutions with restrictions of 𝔥_1
In this subsection, we compute the convolution 𝔥_ς_i,*(ϕ) for i = 0 , 1 ,2 , 3. Recall that ς_i = σ_iτ_1. Explicitly,
ς_0 =
1.1([ ϖ 1; ϖ 1; ϖ ; 1; 1; 1 ])
, ς_1 = 1.1([ ϖ 1; ϖ ; ϖ 1; 1; 1; 1 ]), ς_2 = 1.1( [ ϖ 1; ϖ 1 1; ϖ 1; 1; 1; 1 ] ) , ς_3 = 1.1( [ ϖ ^2 ϖ; ϖ^2 ϖ 1; ϖ^2 1; 1; 1; 1 ] )
Let 𝐏, and ∂ be as in <ref> and define embeddings
_ς_0 : 𝐏 ↪𝐇 , _ς_0 : 𝐏 ↪𝐇
(γ_1, γ_2) ↦(γ_1, γ_1 , ∂γ_2 ∂^-1 ) (γ_1, γ_2) ↦(γ_1 , γ_1,
∂γ_2 ∂^-1 )
We denote 𝒳_ς_0, the common images _ς_0(P^∘ ) = _ς_0( P^∘ ).
We let M_ς_0 (resp., M_ς_0 ') denote the subgroup of U_ϖ in which the first and third (resp., second and third) components are identity.
We also let _1,2 : 𝐇→𝐏 denote the projection ( h_1, h_2, h_3) ↦ (h_1, h_2 ). Finally, we let 𝒴_ς_0, L_ς_0, L'_ς_0, P_ϖ^∘⊂ P denote the projections of 𝒳_ς_0, M_ς_0, M'_ς_0, U_ϖ respectively.
H_ς_0 = 𝒳_ς_0 M_ς_0 = 𝒳_ς_0 M _ ς_0 '.
Writing h ∈ H as in Notation <ref>, we see that
ς_0 ^-1 h ς_0 =
[ a -c_1 b - c_1 ϖ a - d_1 ϖ ; - c a_1 a_1 - d ϖ b_1 - c ϖ ; a_2 b_2 ϖ; c ϖ d c ; c_1 ϖ c_1 d_1 ; c_2 ϖ d_2 ]
Then one easily verifies that 𝒳_ς_0, N_ς_0, M_ς_0, M'_ς_0 are contained in H_ς_0. On the other hand if h = (h_1, h_2, h_3) ∈ H_ς_0, the above matrix is in K which implies that h_1, h_2 and ∂ ^-1 h_3∂∈_2(_F ). It follows that η : = _ς_0 ( h_1, ∂^-1 h_3∂ ) ,
γ := _ς_0 (h_2, ∂^-1h_3∂) ∈𝒳_ς_0 and η ^-1 h ∈ M_ς_0, γ^-1 h ∈ M'_ς_0.
Modulo q - 1,
* 𝔞_ς_0,*(ϕ) ≡ 5 (1+z_0) ϕ,
* 𝔟_ς_0,*(ϕ) ≡ ( 4 + 14 z_0 + 4 z_0 ^2 ) ϕ
* 𝔠_ς_0,* (ϕ) ≡ ( 1 + z_0)^3·ϕ
and 𝔥_ς_0,*(ϕ) =
𝔥_ς_1,*(ϕ) ≡ 0.
Let λ = (a,b,c,d ) ∈Λ and ξ_λ denote [H_ς_0ϖ^λ U ](ϕ). Let Q^∘ : = _2(_F) and Q^♢⊂_2(F) the conjugate of Q^∘ by ∂ = [ ϖ; 1 ].
Lemma <ref> implies that
H_ς_0ϖ^λ U / U →_1,2 ( H_ς_0ϖ^λ U / U ) ×
Q^♢ϖ^(a,d) Q^∘ / Q^∘
(γ_1, γ_2, γ_3 ) U ↦ ( (γ_1, γ_2) _1,2(U) , γ_3 Q^∘ )
is a bijection. Now |Q^♢ϖ^(a,d) Q^∘ / Q^∘ | = | Q^∘ϖ^(a-1,d) Q^∘ / Q^∘ | which equals q^|a-1-2d|(q+1) if a -1 ≠ 2d and 1 otherwise. It remains to describe pr_1,2 ( H_ς_0ϖ^λ U / U ) ⊂ P / P^∘. By Lemma <ref>, _1,2( H_ς_0 ) = 𝒴_ς_0 L_ς_0 = 𝒴_ς_0 L'_ς_0. If | β_0(λ) | ≤ 1 (resp., |α_0(λ)| ≤ 1), then the conjugate of L_ς_0 (resp., L_ς_0') by ϖ^λ is contained in U. So if min{ | α_0(λ) | , | β_0(λ) | }∈{ 0 ,1 }, we have
_1, 2 ( H_ς_0ϖ^λ U / U ) = 𝒴 _ς_0ϖ^(a,b,c) P^∘ / P^∘
where we write ϖ^(a,b,c) for _1,2(ϖ^λ ).
To describe a system of representatives for 𝒴_ς_0ϖ^(a,b,c) P^∘ / P^∘, it suffices to describe one for 𝒴 _ς_0 / ( 𝒴 _ς_0∩ P^∘_(a,b,c) ) where
P^∘_(a,b,c) : = ϖ^(a,b,c) P^∘ϖ^-(a,b,c)
denotes the conjugate of P^∘ by ϖ^(a,b,c). Since 𝒴 _ς_0 is isomorphic to _2(_F) (via the projection 𝐏→𝐇_1, (γ_1, γ_2 ) ↦γ_1), this can be done by viewing intersection 𝒴_ς_0∩ P^∘_(a,b,c) as a subgroup of U_1 = _2(_F ) and comparing it with the Iwahori subgroups I_1^±. For this purpose, it will be convenient to introduce the quantities
u_λ = max{ 0, α_0(λ) , - β_0(λ) } , v_λ = max{ 0 , - α_0(λ ) , β_0(λ) } .
These describe the valuations of the upper right and lower left entries of a matrix in 𝒴_ς_0∩ P^∘_(a,b,c).
The case where min{ |α_0(λ) | , | β_0(λ) | }≥ 2 requires a little more work (though it will only occur once in this proof). Here
we invoke <cit.> for the product _1,2(H_ς_0 ) = 𝒴_ς_0 P_ϖ^∘. Thus
( _1,2 ( H_ς_0ϖ^λ U ) ) = e^-1∑ _γ ( γ P_ϖ^∘ϖ^(a,b,c) P^∘ )
and where γ runs over (the finite set) 𝒴_ς_0 / 𝒴_ς_0∩ P_ϖ^∘ and
e = e_(a,b,c) : = [ _1,2 ( H _ ς_0
) ∩ P_(a,b,c) ^∘ : P^∘_ϖ∩ P^∘_(a,b,c) ]. So the function ξ_λ can be computed by first computing (P^∘_ϖϖ^(a,b,c) P^∘ ) ·ϕ, then summing the translates of the result by representatives of 𝒴_ς_0 / ( Y_ς_0∩ P_ϖ^∘ ) and dividing the coefficients by e.
(a) Recall that 𝔞_ς_0 = (U ϖ^(1,1,1,0) H_ς_0) +
(Uϖ^(1,1,0,1)H_ς_0)+ 2 (Uϖ^(1,1,0,0)H_ς_0). Let
λ_1 : = (1,0,0,1) , λ_2 : =
(1,0,1,0) , λ_3 : = (1,0,1,1) .
Then 𝔞_ς_0, * (ϕ ) = z_0· ( ξ_λ_1 + ξ_λ_2 + 2 ξ_λ_3 ). For each λ_i, the formula (<ref>) applies. For λ = (a,b,c,d) ∈{λ_2, λ_3}, u_λ = 0 and v_λ = 1, so 𝒴_ς_0∩ϖ^(a,b,c) P^∘ϖ^-(a,b,c) is identified with I_1^+ and one easily sees that
ξ_λ_2 = (q+1) 𝒯_0,1(ϕ) ≡ 2 (ϕ + ϕ_(1,1))
ξ_λ_3 = 𝒯_0,1 (ϕ ) ≡ϕ + ϕ_(1,1)
modulo q - 1.
For λ = λ_1, u_λ = v_λ = 1 and we see that 𝒴_ς_0∩ϖ^(1,0,0) P^∘ϖ^-(1,0,0) is identified with I_1^+∩ I_1^-. Thus a system of representatives for 𝒴 _ς_0 / ( 𝒴 _ς_0∩ϖ^(1,0,0) P^∘ϖ^-(1,0,0) ) is obtained by multiplying a system of representatives for U_1 / I_1^+ with that for I_1^+ / I_1^+∩ I_1^-. So
ξ_λ_1 =
∑_γ∈ U_1 / I_1^+γ∑_η∈ I_1^+/(I_1^+∩ I_1^-) ηϖ^λ_1·ϕ .
Now ηϖ^λ_1·ϕ = ϖ^λ_1·ϕ for any η∈ I_1^+. So the inner sum equals q ϖ^λ_1·ϕ = q ϕ_(0,1). The outer sum then evaluates to q ( ϕ + q ϕ_(1,1)).
Thus ξ_λ_1≡ ( ϕ + ϕ_(1,1) ).
Putting everything together
gives part (a).
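Explicitly, ξ_λ_1 + ξ_λ_2 + 2ξ_λ_3≡ 5(ϕ + ϕ_(1,1)) modulo q - 1, and since z_0·ϕ_(1,1) = ϕ (by the definition of the H_1-action), multiplying by z_0 gives 𝔞_ς_0,*(ϕ) ≡ 5(z_0·ϕ + ϕ) = 5(1+z_0)ϕ.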
(b) Recall that
𝔟_ς_0 = (U ϖ^(2,2,1,1) H _ς_0 ) + (U ϖ^(2,1,2,1) H_ς_0) + (Uϖ^(2,2,0,1) H _ς_0) + (U ϖ^(2,1,1,2) H_ς_0) + 4 (Uϖ^(2,1,1,1)H_ς_0) .
For μ∈{ (2,1,1,2), (2,1,1,1) }, it is easy to see that
[ U ϖ^μ H_ς_0]_*(ϕ) ≡ 2 z_0·ϕ
For μ_1 = (2,2,1,1) and μ_2 = (2,1,2,1), arguments similar to part (a) reveal that
[U ϖ^μ_1
H_ς_0 ]_* ( ϕ ) ≡ 2 ( 1 + z_0^2) ϕ ,
[U ϖ^μ_2 H_ς_0 ]_* ( ϕ ) ≡ 4 z_0·ϕ .
This leaves μ = (2,2,0,1). Denote λ = (4,2,2,2) - μ = ( 2,0,2,1) and let e denote e_(2,0,2). It is easy to see that _1,2(H_ς_0) ∩ P^∘_(2,0,2) is equal to the product of 𝒴_ς_0∩ P_(2,0,2)^∘ with P_ϖ ^∘∩ P_(2,0,2)^∘ and therefore e = [ 𝒴_ς_0∩ P_(2,0,2)^∘ : 𝒴_ς_0∩ P_ϖ∩ P_(2,0,2)^∘ ].
From this, one finds that e = q.
Next we compute that
( P _ ϖ ^∘ϖ^(2,0,2) P^∘ ) ·ϕ = q ( ϕ_(0,1) - ϕ_(1,1) + q ϕ_(1,2) ) .
Since 𝒴_ς_0∩ P_ϖ^∘⊂𝒴_ς_0 is identified with I_1^+∩ I_1^-⊂_2(_F ),
the expression (<ref>) reads
ξ_λ = ∑ _ h ∈ U_1 / I _1 ^ + ∩ I_1 ^-
h ( ϕ_(0,1) - ϕ_(1,1) + q ϕ_(1,2) )
=
∑_ γ∈ U_1 / I_1^+γ∑ _ η∈ I_1^+ / ( I_1^+∩ I_1^-) η ( ϕ_(0,1) - ϕ_(1,1) + q ϕ_(1,2) )
Then the inner sum is just multiplication by q. The outer sum then evaluates to
q ( ϕ + q ϕ_(1,1)
) - q ( q + 1 ) ϕ_(1,1) + q ^ 2 ( ϕ_(1,1) + q ϕ_(2,2) ) = q ϕ + q ( q- 1 ) ϕ_(1,1) + q^3ϕ_(2,2)
So
we see that ξ_λ
= q ϕ + q ( q - 1) ϕ _(1,1) + q^3ϕ_(2,2) and therefore
[ U ϖ^(2,2,0,1) H_ς_0]_*(ϕ) = (q+1) z_0^2·ξ_λ≡ 2 ( 1 +
z_0^2 ) ϕ.
Putting everything together, we find that
𝔟_ς_0,*( ϕ) ≡ 2(1+z_0^2)ϕ + 4 z_0·ϕ + 2 ( 1 + z_0^2) ϕ + 2 z_0·ϕ + 4 ( 2 z_0·ϕ )
= ( 4 + 4z_0^2 + 14 z_0 ) ϕ .
(c)
We have 𝔠_ς_0 =
(Uϖ^(3,2,2,2)H_ς_0)+
(Uϖ^(3,3,1,1)H_ς_0)+ (Uϖ^(3,2,0,1)H_ς_0). For each of the three Hecke operators, the formula (<ref>) applies and we find that
[ U ϖ^(3,2,2,2)H_ς_0 ]_* ( ϕ ) ≡ 2( z_0 + z_0^2 ) ϕ ,
[U ϖ^(3,3,1,1) H_ς_0]_* (ϕ) ≡ ( 1 + z_0^3) ϕ ,
[ U ϖ^(3,2,0,1) H_ς_0 ] _* ( ϕ ) ≡ (z_0 + z_0^2 ) ϕ
from which (c) follows.
Now recall that 𝔥_ς_0 = -(1+ ρ^6) (U H_ς_0) + ( 1+ 2 ρ^2 + ρ^4 ) 𝔞_ς_0 - ( 1 + ρ^2 ) 𝔟_ς_0 + 𝔠_ς_0. It is easy to see that [U H_ς_0]_*(ϕ) = ( q + 1 ) ϕ. So by parts (a)-(c), we see that
𝔥_ς_0, * ( ϕ ) ≡ - 2 ( 1 + z_0 ^3 ) ϕ + ( 1 + z_0)^2 ( 5 ( 1 + z_0) ϕ ) - ( 1 + z_0) ( 4 + 14 z_0 + 4 z_0 ^2 ) ϕ + (1 + z_0)^3ϕ
= ( 1 +z_0 ) ( - 2 + 2z_0 - 2z_0^2 + 5 ( 1+ z_0)^2 - 4 - 14 z_0 - 4z_0^2 + ( 1 + z_0 )^2 ) ϕ
= 0
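Here the vanishing of the bracket can be seen by collecting coefficients: the constant term is -2+5-4+1 = 0, the coefficient of z_0 is 2+10-14+2 = 0 and the coefficient of z_0^2 is -2+5-4+1 = 0.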
Finally since 𝔥_ς_1 = w_2𝔥_ς_0 w_2 (<ref>) and w_2 only swaps the second and third components of H and w_2 normalizes U, we see that 𝔥_ς_1,*(ϕ) = 𝔥_ς_0,*(ϕ).
We let A_ς_2 denote the intersection A ∩ς_2 K ς_2^-1 and J_ς_2⊂ U denote the Iwahori subgroups of triples (h_1, h_2, h_3 ) ∈ U such that h_1 , h_3 reduce modulo ϖ to lower triangular matrices and h_2 reduces to an upper triangular matrix. We denote by M_ς_2 the three parameter additive subgroup of all triples h = (h_1,h_2,h_3) ∈ U such that
h_1 = [ 1; x 1 ], h_2 = [ 1 y; 1 ] , h_3 = [ 1; y - x + ϖ z 1 ]
where x , y , z ∈_F are arbitrary and by N_ς_2 the three parameter subgroup of all triples (h_1, h_2, h_3) of the form
h_1 =
[ 1 x ϖ; 1 ] ,
h_2 =
[ 1; y ϖ 1 ], h_3 = [ 1 z ϖ; 1 ]
where x , y , z ∈_F are arbitrary.
Finally, we let L_ς_2 the one-parameter subgroup of U all triples of the form (1,1, [ 1; z 1 ] ) where z ∈_F.
H_ς_2 is the product of A_ς_2, M_ς_2, N_ς_2 and J_ς_2 is the product of A^∘, H_ς_2, L_ς_2 where these products can be taken in any order.
It is easily verified that ς_2^-1 M_ς_2ς_2, ς_2^-1 N ς_2 are contained in K, so that M_ς_2 , N_ς_2 are subgroups of H_ς_2.
Let h∈ H_ς_2 and write h as in Notation <ref>. Then
ς_2 ^-1 h ς_2 =
[ a -c_1 b-c_1 ϖ a - d_1 ϖ -c_1 ϖ; -c a_1 -c_2 a_1 - dϖ b_1 - c - c_2 ϖ a_1 - d_2 ϖ; -c_1 a_2 -c_1 ϖ a_2 - d _1 ϖ b_2 -c_1 ϖ; c ϖ d c ; c_1 ϖ c_1 d_1 c_1; c_2 ϖ c_2 d_2 ] .
It follows that h ∈ U and c_1, b_2 , b ∈ϖ_F. In particular, H_ς_2⊂ J_ς_2 and a, a_1, a_2, d , d_1, d_2∈_F ^ ×. Let m ∈ M_ς_2 be defined with parameters x = - c/a, y = -b_1/d_1 and z = -(c_2/a_2 +y-x)/ϖ (see Notation <ref>). Write h' = m h as in Notation <ref> and let n ∈ N_ς_2 be defined with paramaters x = -b'/d'ϖ, -c_1'/a_1'ϖ, z = - c_2'/ a_2' ϖ (see Notation <ref>). Then nmh lies in A, and hence in A _ς_2. Thus H_ς_2 = M_ς_2 N_ς_2 A_ς_2. Similarly we can show H _ς_2 = N _ς_2 M_ς_2 A_ς_2. Since A_ς_2 normalizes both M_ς_2, N_ς_2, the product holds in all possible orders. This establishes the first claim. The second is established in completely analogous way.
If λ∈Λ satisfies β_2 (λ) ≤ 0, then
U ϖ^λ H_ς_2 = U ϖ^λ J_ς_2.
This follows by Lemma <ref> since ϖ^λ L_ς_2ϖ^-λ⊂ U if β_2(λ) ≤ 0.
Corollary <ref> reduces the computation of [Uϖ^λ H_ς_2]_*(ϕ) to [Uϖ^λ J_ς_2]_*(ϕ) for almost all Hecke operators appearing in 𝔥_ς_2,*, which can be calculated efficiently using Lemma <ref>. The few exceptions are handled below.
Modulo q - 1, we have
* [U ϖ^(1,1,0,1) H_ς_2 ]_*(ϕ) ≡ ( 1 + z_0 ) ϕ - z_0·ϕ_(1,0) ,
* [U ϖ^(1,0,1,1) H_ς_2]_*(ϕ) ≡ z_0·ϕ_(1,0) ,
* [Uϖ^(2,1,1,2) H_ς_2]_* (ϕ) ≡ z_0·ϕ
* [U ϖ^(3,2,1,2)ψ H_ς_2]_*( ϕ) ≡ 0
For λ∈Λ, we will denote ξ_λ := [H_ς_2ϖ^λ U ](ϕ).
(a) This equals z_0·ξ_λ where λ = (1,0,1,0). Since λ has depth one, we have H_ς_2ϖ^λ U = M_ς_2ϖ^λ U. Now
M_ς_2ϖ^λ U / U = { ( [ 1; x ϖ ]
,
[ ϖ y; 1 ] , [ 1; y - x ϖ ] ) U | x, y ∈_F}.
and it is easy to see that a system of representatives for M_ς_2ϖ^λ U/U is obtained by allowing the parameters x, y in the set above to run over []. Using this system, one calculates that ξ_λ≡ϕ - ϕ_(1,0) + ϕ_(1,1) modulo q - 1.
(b) This equals z_0·ξ_λ where λ = (1,1,0,0). As in part (a), we have H_ς_2ϖ^λ U = M_ς_2ϖ^λ U and it is easy to see that
M _ς_2ϖ^λ U / U = { (1 , 1 , [ 1; t 1 ] ) ϖ^λ U | t ∈_F} .
A set of representatives is obtained by allowing the parameter t to run over elements of []. Thus ξ_λ = q ϕ_(1,0).
(c) This expression equals z_0^2·ξ_λ where λ = (2,1,1,0). As the first two components of ϖ^λ are central and β_2(λ) ≤ 0, we have ϖ^-λ N_ς_2ϖ^λ⊂ U and so H_ς_2ϖ^λ U / U = M _ς_2ϖ^λ U / U. Using the centrality of the first two components again, we see that
M_ς_2ϖ^λ U / U = { ( 1 , 1 , [ 1 ; u 1 ] ) ϖ^λ U | u ∈_F} .
From this, we see that a system of representatives is given by letting the parameter u run over [_2]. Thus ξ_λ = q^2ϕ_(1,1).
(d) It suffices to show that [ H_ς_2ψ^-1ϖ^(1,0,1,0) U ](ϕ) ≡ 0. Let us denote ψ^-1ϖ^(1,0,1,0) by η. It is straightforward to verify that η ^-1 N_ς_2η⊂ U, so that H_ς_2η U / U = A_ς_2 M_ς_2η U / U. Elementary manipulations show that
A_ς_2 M_ς_2η U / U = { ( [ 1; s ϖ ] , [ ϖ t; 1 ] , [ 1 ; u + s - t ϖ ] ) U | s, t ∈_F , u ∈_F^×}
where we used that (a, a_1, a_2, d, d_1 , d_2 ) ∈ A_ς_2 if and only if a , d ∈_F^× with a ≡ d_1≡ a_2 mod ϖ and d ≡ a_1≡ d_2 mod ϖ. Let
C(s,t,u) : = ( [ 1; s ϖ ] , [ ϖ t; 1 ] , [ 1 ; u + t - s ϖ ] )
where s, t ∈_F, u ∈_F^×. Then C(s,t,u) U = C(s',t',u') U if and only if s ≡ s', t ≡ t', u ≡ u' modulo ϖ. Thus a system of representatives for H_ς_2η U / U is given by C(s,t,u) where s, t run over [] and u runs over []^×. Thus for each fixed s , t, there are q - 1 choices of u from which it easily follows that the function [H_ς_2η U ](ϕ) vanishes modulo q - 1.
Modulo q - 1, we have
* 𝔞_ς_2,*(ϕ) ≡ 2(1+z_0) ϕ + 2z_0·ϕ_(1,0),
* 𝔟_ς_2,*(ϕ) ≡ ( 1 + 6z_0 + z_0^2) ϕ + 3z_0(1+z_0)ϕ_(1,0)
* 𝔠_ς_2,* (ϕ) ≡ z_0(1+z_0) ϕ + z_0(1+z_0)^2ϕ _(1,0)
and 𝔥_ς_2,*(ϕ) ≡ 0.
For λ = (a,b,c,d) ∈Λ, let ξ _λ denote [U ϖ^λ H_ς_2]_*(ϕ). If β_2(λ) = 2d - a ≤ 0, then ξ_λ = [U ϖ^λ J_ς_2]_*(ϕ). It is easily seen from Lemma <ref> and the decompositions given therein that
[U ϖ^λ J_ς_2]_*(ϕ) ≡ℐ_-b,a-b^-(ϕ) modulo q-1
This formula in conjunction with Lemma <ref> can be used to calculate all Hecke operators.
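To illustrate, for λ = (1,1,1,0) we have β_2(λ) = -1 ≤ 0, so [Uϖ^(1,1,1,0) H_ς_2]_*(ϕ) ≡ℐ^-_-1,0(ϕ) = q ϕ_(0,0) + ρ_1^-2·(ϕ - ϕ_(1,0)) ≡ (1+z_0)ϕ - z_0·ϕ_(1,0), which is the first value in the list below.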
For instance,
we have 𝔞_ς_2 = ( U ϖ^(1,1,1,0) H_ς_2) + ( U ϖ^(1,0,0,0) H_ς_2) + (U ϖ^(1,1,0,1) H_ς_2) + ( U ϖ^(1,0,1,1) H_ς_2) + 2( U ϖ^(1,0,1,0) H_ς_2) and we compute
* [ U ϖ^(1,1,1,0) H_ς_2]_*(ϕ) ≡ ( 1 + z_0 ) ϕ - z_0·ϕ_(1,0),
* [U ϖ^(1,0,0,0)H_ς_2]_*(ϕ) ≡ z_0·ϕ_(1,0),
* [U ϖ^(1,1,0,1) H_ς_2]_*(ϕ ) ≡ ( 1 + z_0 ) ϕ - z_0·ϕ_(1,0),
* [U ϖ^(1,0,1,1) H_ς_2]_*(ϕ) ≡ z_0·ϕ_(1,0),
* 2 [U ϖ^(1,0,1,0) H_ς_2 ] (ϕ) ≡ 2 z_0·ϕ_(1,0).
Now adding all these retrieves the expression in part (a). Similarly for parts (b) and (c).
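Indeed, for part (a) the five terms add up to ( (1+z_0)ϕ - z_0·ϕ_(1,0) ) + z_0·ϕ_(1,0) + ( (1+z_0)ϕ - z_0·ϕ_(1,0) ) + z_0·ϕ_(1,0) + 2 z_0·ϕ_(1,0) = 2(1+z_0)ϕ + 2z_0·ϕ_(1,0).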
Now 𝔥_ς_2 = - ( 1 + ρ^6) (U H_ς_2) + ( 1 + 2 ρ^2 + ρ^4) 𝔞_ς_2 - ( 1 + ρ^2) 𝔟_ς_2 + 𝔠_ς_2 from (<ref>).
Therefore
𝔥_ς_2,*( ϕ ) ≡ (- 1 - z_0^3) ϕ + (1+z_0)^2 ( 2(1 + z_0 )ϕ + 2z_0·ϕ_(1,0) ) -
(1+z_0) ( (1+6z_0 + z_0^2)ϕ + 3z_0(1+z_0)ϕ_(1,0) ) + z_0(1+z_0) ϕ + z_0(1+z_0)^2ϕ_(1,0)
= ( -1-z_0^3 + 2(1+z_0)^3 - (1+z_0)(1+6z_0 + z_0^2) + z_0(1+z_0) ) ϕ +
( 2 z_0 ( 1+ z_0 ) ^2 - 3z_0 ( 1 + z_0 ) ^2 + z_0 ( 1+ z_0)^2 ) ϕ_(1,0) = 0
As usual, we let A_ς_3 denote the intersection A ∩ς_3 K ς_3^-1.
We denote by M_ς_3 the three parameter additive subgroup of all triples h = (h_1,h_2,h_3) ∈ U such that
h_1 = [ 1; x/ϖ 1 ], h_2 = [ 1 y; 1 ] , h_3 = [ 1; y - x ϖ + zϖ^2 1 ]
where x , y , z ∈_F are arbitrary and by N_ς_3 the three parameter subgroup of all triples (h_1, h_2, h_3) of the form
h_1 =
[ 1 x ϖ^2; 1 ] ,
h_2 =
[ 1 ; y ϖ 1 ], h_3 = [ 1 y ϖ + z ϖ^2; 1 ]
where x , y , z ∈_F are arbitrary.
H_ς_3 = M_ς_3 N_ς_3 A_ς_3 = N_ς_3 M_ς_3 A_ς_3.
Writing h ∈ H as in Notation <ref>, we find that
ς_3 ^-1 h ς_3 =
[ a -c_1 ϖ bϖ^2 -c_1 a - d_1 ϖ -c_1 /ϖ; -c ϖ a_1 -c_2 a_1 - dϖ b_1 - c_2 - c ϖ^2ϖ^2 a_1 - d_2 ϖ^2; -c_1 a_2 -c_1 ϖ a_2- d_1 ϖ^2 b_2
- c_1 ϖ^2; c ϖ^2 d c ϖ ; c_1 ϖ^2 c_1 ϖ d_1 c_1; c_2 ϖ^2 c_2 d_2 ] .
From this matrix, we easily see that H_ς_3 contains M_ς_3 and N_ς_3. We also see that if h ∈ H_ς_3, then all entries of h except for c are integral. Moreover, c_1, b_2∈ϖ_F, b ∈ϖ^2_F and c ∈ϖ^-1_F. Thus a, a_1 ,a_2, d, d_1, d_2∈_F^×. An argument analogous to Lemma <ref> applies to yield the desired decompositions.
Modulo q - 1, we have
* 𝔞_ς_3,*(ϕ) = z_0·ϕ_(2,0)
* 𝔟_ς_3,*(ϕ) ≡ ( z_0^2 + z_0 )·ϕ_(2,0) + z_0·ϕ_(1,0)
* 𝔠_ς_3,* (ϕ) ≡ (z_0^2 + z_0) ·ϕ_(1,0)
and 𝔥_ς_3,* (ϕ) ≡ 0
For λ∈Λ, let ξ_λ denote [H_ς_3ϖ^λ U ] ( ϕ ).
(a) This equals ξ_f_1. From Lemma <ref>, we find that H_ς_3ϖ^f_1 U / U = {ϖ^f_1 U }, so that ξ_f_1 = ϖ^f_1·ϕ = ϕ_(1,-1).
(b) Recall that 𝔟_ς_3 = (U ϖ^(1,0,1,0) H_ς_3) + (U ϖ^(1,-1,1,0)H_ς_3 ) + (U ϖ^(1,0,0,1)H_ς_3). Let λ_1 = (1,1,0,1), λ_2 = (3,3,1,2) and λ_3 = (1,1,1,0). Thus 𝔟_ς_3,*(ϕ) = z_0·ξ_λ_1 + z_0^2·ξ_λ_2 + z_0·ξ_λ_3. From Lemma <ref>, we find that
H_ς_3ϖ^λ_1 U / U = {ϖ^λ_1 U }
H_ς_3ϖ^λ_2 U / U = { ( [ ϖ^2 x ϖ; 1 ] , [ 1 ; ϖ ] , [ ϖ ; 1 ] ) U | x ∈_F}
H_ς_3ϖ^λ_3 U / U = { ( [ ϖ ; 1 ] ,
[ ϖ y; 1 ] ,
[ 1 ; y ϖ ] ) U | y ∈_F} .
So H_ς_3ϖ^λ_i U / U is a singleton for i = 1 and a complete system of representatives for i = 2 (resp., i = 3) is given by letting the parameter x (resp., y) run over []. One then easily finds that ξ_λ_1 = ϕ_(1,0), ξ_λ_2 = ϕ_(2,0) - ϕ_(2,1) + ϕ_(3,1) and ξ_λ_3 = q ϕ_(1,0).
(c) Recall that 𝔠_ς_3 =
(U ϖ^(2,1,1,1) H_ς_3 ) + (U
ϖ^(2,0,2,0) H_ς_3). Let λ_1 = (0,0,0,0) and λ_2 = (2,2,0,2). Then 𝔠_ς_3,*(ϕ) = z_0·ξ_λ_1 + z_0 ^2·ξ_λ_2. Using Lemma <ref>, we find that
H_ς_3ϖ^λ_1 U / U = { ( [ 1; x / ϖ 1 ] , 1 , 1 ) U | x ∈_F}
H_ς_3ϖ^λ_2 U / U = { ( [ ϖ^2 ; 1 ] , [ 1 ; y ϖ ϖ ^2 ] , [ ϖ ^2 y ϖ; 1 ] ) U | y ∈_F}
So a system for representative cosets for H_ς_3ϖ^λ_1 U/ U (resp., H_ς_3ϖ^λ_2U/U) is obtained by letting x (resp., y) run over []. Using this, we compute that ξ_λ_1 = ϕ_(0,-1) - ϕ_(1,-1) + ϕ_(1,0) and ξ_λ_2 = ϕ_(2,0).
Finally, we have 𝔥_ς_3 = ( 1 + 2 ρ^2 + ρ^4 ) 𝔞_ς_3 - ( 1 + ρ^2 ) 𝔟_ς_3 + 𝔠_ς_3, so
𝔥_ς_3, *(ϕ) ≡ ( 1 + z_0)^2 ( z_0·ϕ_(2,0)) ) - ( 1 + z_0 ) ( z_0^2 + z_0 ) ϕ_(2,0) + z_0·ϕ_(1,0) ) + (z_0^2+ z_0) ϕ_(1,0)
= ( z_0 ( 1+ z_0)^2 - (1+z_0) (z_0^2 + z_0 ) ) ϕ_(2,0) + ( z_0^2 + z_0 - (1+z_0) z_0 ) ϕ_(1,0)
= 0
§.§ Convolutions with restrictions of 𝔥_2
In this subsection, we compute the convolution 𝔥_ϑ,*(ϕ) for ϑ∈{ϑ_0, ϑ_1,
ϑ_2, ϑ_3} ∪{ϑ̃_k | k ∈ []^∘}. These matrices are as follows:
ϑ_0 =
0.85(ϖ 1ϖ
ϖ 1ϖ
1
1ϖ
1ϖ
1
)
, ϑ_1 = 0.85(ϖ 1ϖ
1
ϖ 1ϖ
1ϖ
1
1ϖ
), ϑ_2 = 0.85[ ϖ 1ϖ; ϖ 1ϖ 1; 1 1ϖ; 1ϖ; 1ϖ; 1 ]
ϑ_3 = 0.85[ ϖ 1ϖ; ϖ 1ϖ 1ϖ; 1 1ϖ^2; 1ϖ; 1ϖ; 1 ] , ϑ̃_k = 0.85[ ϖ 1ϖ; kϖ 1 kϖ ; (k+1)ϖ 1 k+1ϖ; 1ϖ ; - 1ϖ k + 1; 1ϖ -k ]
where k ∈ []^∘ = [] ∖{ -1 }. Recall that H _ϑ denotes the intersection H ∩ϑ K ϑ ^-1.
H_ϑ is a subgroup of U for ϑ∈{ϑ_0, ϑ_1, ϑ_2, ϑ̃_k | k ∈ []^∘}.
Since θ = ϑτ_2^-1∈ U and H_τ_2' ⊂ U by Lemma <ref>, we see that H_ϑ = H ∩θ H_τ_2' θ^-1⊂ U.
Let 𝒳_ϑ_0⊂ U denote the subgroup of all triples (h_1, h_2, h_3 ) where h_2= [ 1; 1 ] h_1[ 1; 1 ].
H_ϑ_0 equals the product 𝒳_ϑ_0 U_ϖ^2.
Let h ∈ U and write h as in Notation <ref>. Then h∈ H_ϑ_0 if and only if
ϑ_0^-1 h ϑ_0 = [ a -c_1 b - c_1 ϖ^2 a - d_1 ϖ^2 ; -c a_1 a_1 - dϖ^2 b_1 - cϖ^2 ; a_2 b_2; c ϖ^2 d c ; c_1 ϖ^2 c_1 d_1 ; c_2 d_2 ]∈ K.
It follows that 𝒳_ϑ_0, U_ϖ^2 are both contained in H_ϑ_0 and hence so is their product. If h = (h_1, h_2, h_3 ) ∈ H_ϑ_0 is arbitrary, let γ = ( h_1^-1, h_2' , h_3^-1) where h_2' = [ 1; 1 ] h_1^-1[ 1; 1 ]. Then γ∈𝒳_ϑ_0 and γ h = ( 1, h_2'h_2 , 1 ) ∈ H_ϑ_0 and it is easily seen from the matrix formula above (applied to γ h in place of h) that γ h ∈ U_ϖ^2. Thus h = γ^-1·γ h ∈𝒳_ϑ_0 U_ϖ^2 which establishes the reverse inclusion.
Modulo q - 1, we have
* [U H_ϑ_0]_*(ϕ) = ϕ,
* [U ϖ^(3,2,1,2) H_ϑ_0]_* (ϕ)
≡ 2 (z_0^2 + z_0) ϕ ,
* [U ϖ^(4,2,2,3) H_ϑ_0]_* (ϕ) ≡ 2 z_0^2·ϕ,
* [U ϖ^(4,3,1,2) H_ϑ_0]_* (ϕ) ≡ (z_0 ^3 + z_0 ) ·ϕ.
and 𝔥_ϑ_0, *(ϕ) = 𝔥_ϑ_1,*(ϕ) ≡ 0.
Part (a) is clear since H_ϑ_0⊂ U. Let λ∈Λ be such that dep(λ) ≤ 2. Then Lemma <ref> implies that H_ϑ_0ϖ^λ U = 𝒳_ϑ_0ϖ^λ U. Let us denote 𝐏 : = _2×__m_2 and let P, P^∘ denote the groups of F- and _F-points of 𝐏 respectively. Consider the embedding
: 𝐏↪𝐇 , (h_1, h_2) ↦ (h_1, h_1, h_2 )
where = [ 1; 1 ]. Then identifies P^∘ with 𝒳_ϑ_0. If λ = (a,b,c,d) satisfies b = a - c, then we also have ϖ^λ∈(P) and we write ϖ^(a,b,d)∈ P for the pre-image. Then
P^∘ϖ^(a,b,d) P^∘ / P^∘→𝒳_ϑ_0ϖ^λ U / U , γ P ↦(γ)
U
is a bijection. It follows λ = (a,b,c,d) satisfying b = a - c and with dep(λ) ≤ 2, we have
[ H_ϑ_0ϖ^λ U ] (ϕ) = | U_1\ U_1ϖ^(a,d) U_1 | ·𝒯_b,a-b,*(ϕ) .
Parts (b), (c), (d) are then easily obtained using Corollary <ref> and the formula above.
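For instance, for λ = (4,3,1,2) we have b = a - c = 3 and 2d - a = 0, so the formula gives | U_1\ U_1ϖ^(4,2) U_1 | ·𝒯_3,1,*(ϕ) ≡ 1 · (z_0^3 + z_0)ϕ, which is the value in part (d); parts (b) and (c) are obtained in the same way.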
Now recall that
𝔥_ϑ_0 = ρ^2(1 + 2ρ^2 + ρ^4) (U H_ϑ_0 ) - ( 1 + ρ^2) (U ϖ^(3,2,1,2) H_ϑ_0 ) + ( U ϖ^(4,2,2,3) H_ϑ_0) + ( U ϖ^(4,3,1,2)H_ϑ_0) .
So putting everything together, we have
𝔥_ϑ_0,*(ϕ) ≡ ( z_0(1+2z_0+z_0^2) - (1 + z_0)(2z_0^2+ 2z_0) + 2z_0^2 + ( z_0^3 + z_0) ) ϕ = 0
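Indeed, the coefficients of z_0, z_0^2 and z_0^3 in the bracket are 1-2+1 = 0, 2-4+2 = 0 and 1-2+1 = 0 respectively.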
Since 𝔥_ϑ_1 = w_2𝔥_ϑ_0 w_2 and conjugation by w_2 only swaps the second and third components of H, we obtain the equality 𝔥_ϑ_0, * (ϕ) = 𝔥_ϑ_1,*(ϕ).
Let A_ϑ_2 = A ∩ϑ_2 K ϑ_2^-1 and U_ϖ^2 the subgroup of all elements in U that reduce to identity modulo ϖ^2. We let M_ϑ_2 be the subgroup of all triples h = (h_1,h_2,h_3) ∈ U such that
h_1 = [ 1; x 1 ], h_2 = [ 1 y; 1 ] , h_3 = [ 1; y - x 1 ]
where x , y ∈_F satisfy x - y ∈ϖ_F. We define N_ϑ_2 to be the two parameter subgroup of triples (h_1, h_2, h_3) given by
h_1 = [ 1 x ϖ; 1 ] , h_2 =
[ 1; x ϖ 1 ], h_3 = [ 1 y; 1 ]
where x , y ∈_F are arbitrary.
H_ϑ_2 =M_ϑ_2 N_ϑ_2 A _ϑ_2 U_ϖ^2 = N_ϑ_2 M_ϑ_2 A _ϑ_2 U_ϖ^2.
That M_ϑ_2, N_ϑ_2 , U_ϖ^2 are subgroups of H_ϑ_2 is easily verified by checking that their conjugates by ϑ_2^-1 are in K , so H_ϑ_2 contains the product. If h ∈ H_ϑ_2⊂ U is arbitrary, then
ϑ^-1_2 h ϑ_2 = [ a - c_1 b-c_1ϖ^2 a-d_1 ϖ^2 -c_1 ϖ; -c a_1 -c_2 ϖ a_1-d ϖ^2 b_1 - c - c_2 ϖ^2 a_1 - d_2 ϖ; - c_1ϖ a_2 - c_1ϖ a_2 - d_1 ϖ b_2 -c_1; c ϖ^2 d c ; c_1 ϖ^2 c_1 d_1 c_1ϖ; c_2 c_2ϖ d_2 ]∈ K.
From the matrix, we see that b, c_1, c_2, b_1 - c, a - d_1∈ϖ_F. In particular, a, a_1, a_2, d , d_1, d_2∈_F^×. Let m ∈ M_ϑ_2 be defined with x = -c/a, y = -b_1/d_1 (see Notation <ref>). Then h' = mh satisfies b_1' = c' = 0. Then c_2' ∈ϖ^2_F. If we define n ∈ N_ϑ_2
with x = -b'/d'ϖ, y = -b'_2/d' and set h” = n h', we find that h” satisfies b_1” = c” = 0 (inherited from h') and b” = b_2” = 0. The latter condition forces c_1”∈ϖ^2_F. Now h” clearly lies in the product A_ϑ_2 U_ϖ^2 which proves the first equality. The second follows similarly by first using N_ϑ_2 to make the entries b, b_2 in h zero.
Modulo q - 1, we have
* [ U H _ ϑ_2 ] _*( ϕ ) = ϕ,
* [U ϖ^(3,2,1,2) H_ϑ_2]_* (ϕ) ≡ (z_0^2 + z_0 ) ϕ - ϕ̅_(1,2),
* [U ϖ^(3,1,2,1) H_ϑ_2]_* (ϕ) ≡ϕ̅_(1, 2 ),
* [U ϖ^(3,1,2,2) H_ϑ_2]_* (ϕ) = ϕ̅ _ ( 1, 2 ),
* [U ϖ^(4,2,2,3) H_ϑ_2]_* (ϕ) ≡ z_0^2·ϕ,
* [U ϖ^(4,1,3,2) H_ϑ_2]_* (ϕ) ≡ ( z_0 + 1 )·ϕ_(1, 2) - z_0^2·ϕ
and 𝔥_ϑ_2, * (ϕ) = 𝔥_ϑ̃_0, * ( ϕ ) ≡ 0.
Part (a) is immediate since H_ϑ_2⊂ U. For λ∈Λ, let ξ_λ = [H_ϑ_2ϖ^λ U](ϕ). If λ has depth at most 2, then H _ϑ_2ϖ^λ U / U = M_ϑ_2 N_ϑ_2ϖ^λ U/ U by Lemma <ref>. If moreover λ has depth one and β_2(λ) ≤ 0, then we also have M_ϑ_2 N_ϑ_2ϖ^λ U = M _ϑ_2ϖ^λ U. Similarly if α_0(λ), β_2(λ) ≥ 0 and β_0(λ) ≤ 0, then H_ϑ_2ϖ^λ U / U = N_ϑ_2ϖ^λ U / U.
(b) We need to compute z_0^2·ξ_λ where λ = (1,0,1,0). Then dep(λ) = 1 and β_2(λ) = -1, so H_ϑ_2ϖ^λ U/U = M_ϑ_2ϖ^λ U / U. It is then easily seen that the quotient M_ϑ_2 / M_ϑ_2∩ϖ^λ U ϖ^-λ has cardinality q with representatives given by elements with parameters x = y running over [] (see Notation <ref>). From this, one finds that ξ_λ = ϕ - ϕ_(1,0) + q ϕ_(1,1).
(c) We need to compute z_0^2·ξ_λ where λ = (1,1,0,1). Here α_0(λ) = β_2(λ) = 1 and β_0(λ) = - 1, so H_ϑ_2ϖ^λ U / U = N_ϑ_2ϖ^λ U / U. This coset space has cardinality q and a set of representatives is γϖ^λ where γ∈ N_ϑ_2 runs over elements defined with x = 0 and y ∈ [] (see Notation <ref>). So ξ_λ = q ϖ^λ·ϕ = q ϕ_(1,0).
(d) If λ = -(3,1,2,2), then H_ϑ_2ϖ^λ U / U = M_ϑ_2ϖ^λ U / U as in part (b) and it is easy to see that this equals ϖ^λ U/U. So ξ_λ = ϖ^λ·ϕ = ϕ̅_(1,2).
(e) We need to compute z_0^3·ξ_λ where λ = (2,1,1,0). As the first and second components of ϖ^λ are central and β_2(λ) = - 2 < 0, we see that H_ϑ_2ϖ^λ U / U = M_ϑ_2ϖ^λ U / U. From the structure of M, we see that a set of
representatives is given by γϖ^λ where γ = (1,1, [ 1; ϖ z 1 ] ) and z running over []. So ξ_λ = q ϖ^λ·ϕ = z_0^-1·ϕ.
(f) This equals z_0^3·ξ_λ where λ = ( 2,2,0,1). Then H _ϑ_2ϖ^λ U / U = N_ϑ_2ϖ^λ U / U. A set of representatives for this quotient is γϖ^λ where γ runs over elements of N_ϑ_2 defined with y = 0 and x ∈ []. From this, one calculates that ξ_λ vanishes on ( X ∖ X_1,0 ) ∪ ( X_1,1∖ X_2,1), takes value one on X_1,0∖ X_1,1 and q on X_2,1. So ξ_λ = ϕ_(1,0) - ϕ_(1,1) + q ϕ_(2,1) and z_0^3·ξ_λ = ϕ̅_(2,3) - z_0^2ϕ + q ϕ̅_(1,2).
Now recall that
𝔥_ϑ_2 = ρ^2( 1 + 2 ρ^2 + ρ^4 ) ( U H_ϑ_2 ) - ( 1 + ρ^2 ) ( ( U ϖ^(3,2,1,2) H _ϑ_2 ) + ( U ϖ^(3,1,2,1) H _ϑ_2 ) + ( U ϖ^(3,1,2,2) H _ϑ_2 ) )
+ ( U ϖ^(4,2,2,3) H_ϑ_2 ) + ( U ϖ^(4,1,3,2) H _ ϑ_2 )
By parts (a)-(f), we see that
𝔥_ϑ_2,*(ϕ) ≡ z_0(1+z_0)^2·ϕ - (1+z_0) ( (z_0^2 + z_0) ·ϕ - ϕ̅_(1,2) + ϕ̅_(1,2) + ϕ̅_(1,2) ) +
z_0^2·ϕ + ( z_0+1) ·ϕ_(1,2) - z_0^2·ϕ
= ( z_0(1 + z_0)^2 - ( 1 + z_0) (z_0^2 + z_0 ) ) ·ϕ - ( 1 + z_0) ϕ̅_(1,2) + (1 + z_0) ·ϕ̅_(1,2)
= 0
modulo q - 1. Since 𝔥_ϑ̃_0 is the conjugate of 𝔥_ϑ_2 by w_2 w_3 and this only affects the second and third components of H, we see that 𝔥_ϑ̃_0 (ϕ) = 𝔥_ϑ_2 (ϕ). This completes the proof.
Let I_ϑ_3⊂ U denote the subgroup of triples (h_1, h_2, h_3 ) such that modulo ϖ^2, h_1 reduces to a lower triangular matrix and h_2, h_3 reduce to upper triangular matrices. Then H_ϑ_3⊂ I_ϑ_3.
Write h ∈ H_ϑ_3 as in Notation <ref>. Then
ϑ_3 ^-1 hϑ_3 =
[ a * -b - c_1ϖ^2 * -c_1ϖ^2; * a_1 -c_2ϖ^2 * b_1- cϖ^2 - c_2ϖ^4 *; * a_2 * * b_2 - c_1ϖ^2; * d c ; * * d_1 *; * * d_2 ]∈ K
Since all entries of this matrix must be integral, it is easily seen that h ∈ U and that b , c_1, c_2∈ϖ^2_F.
We have
* [U ϖ^(3,1,2,2) H_ϑ_3]_* (ϕ) = ϕ̅_(1,2)
* [U ϖ^(4,2,2,3) H_ϑ_3]_* (ϕ) = ϕ̅_(2, 2)
* [U ϖ^(4,1,3,2) H_ϑ_3]_* (ϕ) = ϕ̅_(1, 3)
and 𝔥_ϑ_3, * ( ϕ ) = - [ ϖ ^-1_F^×; ϖ^-2_F^× ].
For λ∈Λ, let ξ_λ denote [ H _ ϑ_3ϖ^λ U ] ( ϕ ). If each of α_0(λ) , -β_0(λ) , -β_2(λ) lies in { 0 ,1 ,2 }, then ϖ^-λ I_ϑ_3ϖ^λ⊂ U. So for such λ, H _ ϑ_3ϖ^λ U = ϖ^λ U and so ξ_λ = ϖ^λ·ϕ. Parts (a), (b), (c) then follow immediately. Now recall that
𝔥_ϑ_3 = - ( 1 + ρ^2 ) (U ϖ^(3,1,2,2) H_ϑ_3 ) + ( U ϖ^(4,2,2,3) H _ϑ_3 ) + (U ϖ^(4,1,3,2) H _ϑ_3 ) .
Using parts (a)-(c), we therefore find that
𝔥_ϑ_3, * ( ϕ )
= - ( 1 + z_0 ) ϕ̅ _(1,2)
+ ϕ̅_(2,2) + ϕ̅_(1,3)
= ϕ̅_(2,2) - ϕ̅_(1,2) + ϕ̅_(1,3) - ϕ̅_(2,3)
= - [ ϖ ^-1_F^×; ϖ^-2_F ] + [ ϖ ^-1_F^×; ϖ^-3_F ]
= - [ ϖ ^-1_F^×; ϖ^-2_F^× ]
For k ∈ [] ∖{ 0 , - 1 }, let 𝒳̃_k⊂ U denote the subgroup of all triples (h_1, h_2, h_3 ) where
h_1 = [ a b; c d ] , h_2 = [ d - c k; - b / k a ] , h_3 = [ d c ( k + 1 ); b / ( k + 1 ) a ] .
That is, h_1∈_2(_F) is arbitrary and h_2, h_3 are certain conjugates of h_1 by anti-diagonal matrices. Recall that U_ϖ denotes the subgroup of U which reduces to the trivial group modulo ϖ.
For k ∈ [] ∖{ 0 , -1 }, H_ϑ̃_k is equal to the product of 𝒳̃_k with U _ϖ∩ H_ϑ̃_k.
It is straightforward to verify that 𝒳̃_k⊂ H_ϑ̃_k by checking that the matrix ϑ̃_k^-1𝒳̃_kϑ̃_k has all its entries integral. This implies that the reduction of H_ϑ̃_k modulo ϖ contains the reduction of 𝒳̃_k modulo ϖ. Thus H_ϑ̃_k contains the product 𝒳̃_k· ( U _ϖ∩ H_ϑ̃_k). For the reverse inclusion, write h ∈ H_ϑ̃_k as in Notation <ref>.
Then
ϑ̃_k^-1 h ϑ̃_k =
[ a * * b - c_1 k^2 - c_2(k+1)^2ϖ^2 a + d_1 k - d_2 ( k+1) ϖ^2 *; -c * a_2-a_1ϖ a_2 ( k + 1 ) - a_1 k -d ϖ^2 b_1 + b_2 - c ϖ - b_2 k + b_1 ( k+1) ϖ; * * * * *; d c ; * * * * *; * * c_1k + c_2 (k+1)ϖ d_2 - d_1ϖ * ]∈ K
As the displayed entries must be integral (and the entries of h are also integral by Lemma <ref>), one easily deduces all the congruence conditions on entries of h for its reduction to lie in the reduction of 𝒳̃_k. For instance, we have b_2 k ≡ -b_1 (k+1) and b_1 + b_2≡ c modulo ϖ, which implies that b_1≡ - ck.
Modulo q - 1,
* [U H_ϑ̃ _k ]_*(ϕ ) = ϕ,
* [ U ϖ^(3,2,1,1) H_ϑ̃_k]_*(ϕ) ≡ (z_0^2 + z_0 ) ·ϕ
and
𝔥_ϑ̃_k,*(ϕ) ≡ 0 for all k ∈ [] ∖{ 0 , -1 }.
Part (a) is trivial since H_ϑ̃_k⊂ U. For part (b), let λ = -(3,2,1,1). Then dep(λ) = 1 and so H_ϑ̃_kϖ^λ U = 𝒳̃_kϖ^λ U. An argument analogous to Proposition <ref> shows that there is a bijection
U_1ϖ^-(3,2) U_1 / U_1→𝒳̃_kϖ^λ U / U
(where U_1 = _2(_F)) using which one obtains the equality [H_ϑ̃_kϖ^λ U](ϕ) = 𝒯_2,1,*(ϕ). Corollary <ref> then implies the claim. Now recall that
𝔥_ϑ̃_k = ρ^2 (1 + 2ρ^2 + ρ^4 ) ( U H_ϑ̃_k ) - ( 1 + ρ^2 ) ( U ϖ^(3,2,1,1) H_ϑ̃_k) .
So
𝔥_ϑ̃_k,*(ϕ) ≡ ( z_0(1+z_0)^2 - (1+z_0) ( z_0^2+ z_0) ) ϕ = 0.
|
http://arxiv.org/abs/2409.02222v1 | 20240903184749 | A Digital signature scheme based on Module-LWE and Module-SIS | [
"Huda Naeem Hleeb Al-Jabbari",
"Ali Rajaei",
"Abbas Maarefparvar"
] | cs.CR | [
"cs.CR",
"94A60, 11T71, 68P25, 06B10, 68T05"
] |
Department of Mathematics, Tarbiat Modares University, 14115-134, Tehran, Iran
[email protected]
Department of Mathematics, Tarbiat Modares University, 14115-134, Tehran, Iran
[email protected]
^*Corresponding author
Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Canada
[email protected]
[2010]Primary: 94A60, 11T71, 68P25, 06B10 – Secondary: 68T05
§ ABSTRACT
In this paper, we present an improved version of the digital signature scheme proposed by Sharafi and Daghigh <cit.> based on the Module-LWE and Module-SIS problems. Our proposed signature scheme has a notably higher security level and a smaller decoding failure probability than the Sharafi-Daghigh scheme, at the expense of enlarging the modulus of the underlying basic ring.
A Digital signature scheme based on Module-LWE and Module-SIS
Abbas Maarefparvar
=============================================================
§ NOTATIONS
We use bold lower-case letters to denote vectors, e.g., a, and use bold upper-case letters like A to denote matrices. The concatenation of two vectors a and b is denoted by
a || b.
The uniform probability distribution over some finite set S will be denoted by U(S). If s is sampled from a distribution D, we write s ← D. Logarithms are base 2 if not stated otherwise.
By default, all vectors will be column vectors, and for a vector v, we denote by v^T its transpose. The boolean operator [statement] evaluates to 1 if statement is true, and to 0 otherwise.
§ INTRODUCTION
The advent of quantum computing poses significant challenges to the security of classical digital signature schemes. In anticipation of this paradigm shift, the cryptographic community has turned its focus towards post-quantum cryptography, seeking to develop algorithms that can withstand the computational prowess of quantum adversaries.
In 2016, the National Institute of Standards and Technology (NIST) initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms <cit.>. The submitted algorithms for the NIST PQC standardization are designed based on various hard computational problems, including lattices, codes, and hash functions, which are currently believed to resist quantum algorithm attacks. Due to its rich number-theoretic structure, Lattice-Based Cryptography (LBC) is one of the most promising alternatives among all the candidates. In particular, most of the algorithms selected for NIST standardization, are lattice-based ones, namely Crystals-Kyber <cit.> (in the part “Public-key Encryption and Key-establishment Algorithms”) and Crystals-Dilithium <cit.> and FALCON <cit.> (in the part “Digital Signature Algorithms”), see <cit.>.
§.§ Post-Quantum Alternatives
Investigating alternative (quantum-resistant) fundamental problems, as in the NIST Post-Quantum Cryptography Competition <cit.>, has led to efficient implementations of various schemes based on the following categories <cit.>.
(1)
Hash-based cryptography includes cryptographic systems based on hash functions in contrast to number-theoretic schemes. SPHINCS+ <cit.> signature scheme is a representative
of that family.
(2)
Code-based cryptography involves cryptographic schemes based on error-correcting
codes. Classic McEliece <cit.> encryption schemes belong to this type.
(3)
Lattice-based cryptography. The example that has perhaps attracted the most interest, though not the first example historically, is the Hoffstein–Pipher–Silverman “NTRU” public-key encryption system <cit.>.
(4)
Multivariate cryptography is a type of quantum-safe algorithms based on multivariate
polynomials. This includes cryptographic schemes which are based on the difficulty of
solving systems of multivariate equations. An example of this family is Rainbow signature scheme <cit.>.
(5)
Zero-knowledge proof systems are based on zero-knowledge proofs and symmetric key
primitives such as hash functions and block ciphers. Picnic signature <cit.> is one scheme of
this type.
(6) Supersingular elliptic curve isogeny cryptography is a family of schemes based on
the properties of supersingular elliptic curves and supersingular isogeny graphs. Isogeny-based
schemes use the mathematics of supersingular elliptic curves to create a Diffie–
Hellman-like key exchange, see e.g. SIKE <cit.>.
Recently, inspired by the Lindner-Peikert cryptosystem <cit.>, Sharafi and Daghigh <cit.> designed a lattice-based digital signature whose security is based on the hardness assumption of the Ring Learning With Errors (Ring-LWE) and the Ring Short Integer Solution (Ring-SIS) problems, see Section <ref> for their definitions. In this paper, using the Module Learning With Errors problem (Module-LWE) and the Module Short Integer Solution problem (Module-SIS), we give some improvements to the Sharafi-Daghigh signature scheme. In particular, we show that using the Module-LWE/SIS would significantly increase the security of the algorithm. In addition, by applying some implementation considerations, we achieve much shorter public key and signature sizes than the ones in <cit.>.
§ LATTICE-BASED CRYPTOGRAPHY
The study of lattices, specifically from a computational point of view,
was marked by two major breakthroughs: the development of the LLL
lattice reduction algorithm by Lenstra, Lenstra, and Lovász in the early
80s <cit.>, and Ajtai's discovery of a connection between the worst-case and
average-case hardness of certain lattice problems in the late 90's <cit.>. Recently, lattice-based cryptography has received much attention from cryptographers and a comprehensive
background on hard problems and security reductions exists. In this section, we introduce some basic notions in lattice-based cryptography and present some fundamental computational hard problems in this area.
For d ≥ 1 integer, a d-dimensional lattice ℒ is a discrete subgroup of ℝ^d. A basis of the lattice ℒ is a set B={b_1,…,b_n}⊆ℝ^d such that b_i's are linearly independent vectors in ℝ^d and all their integer combinations form ℒ:
ℒ(B)=ℒ(b_1,…,b_n)={∑_i=1^n c_i b_i : c_i ∈ℤ, ∀ i=1,…,n }.
The integers d and n are called dimension and rank of the lattice ℒ, respectively (Note that n ≤ d). If n=d, the lattice ℒ is called full rank. Throughout the paper, we only
consider the full-rank (n-dimensional) lattices in ℝ^n.
A lattice basis B is not unique. For a lattice ℒ with basis B, and for every unimodular
matrix U∈ℤ^n × n (i.e., one having determinant ± 1), B.U is also a basis of ℒ(B).
The determinant det(ℒ) of a lattice ℒ is |det(B)| for any basis B of ℒ.
The dual (sometimes called reciprocal) of a lattice ℒ⊆ℝ^n is defined as
ℒ^⊥:={w : <w,ℒ> ⊆ℤ},
i.e., the set of points whose inner products with the vectors in ℒ are all integers. It is straightforward to
verify that ℒ^⊥ is a lattice.
For a lattice , the minimum distance of is the length of a shortest lattice element:
λ_1():= min_v ∈∖{ 0 } ||v||,
where ||.|| is the Euclidean norm. In general, we define λ_i()=r if r is the smallest value such that the lattice contains i linearly independent vectors of norm at most r.
Since a lattice is an additive group, we have a quotient group ℝ^n/ with cosets as follows.
c+={ c+v :v ∈} .
§.§ Computational hard problems on lattices
The shortest vector problem (SVP) and the closest vector problem (CVP) are two fundamental problems in lattices and their conjectured intractability is the foundation for a large number of cryptographic applications of lattices.
For a lattice ℒ with basis B, the Shortest Vector Problem (SVP) asks to find a shortest nonzero lattice vector, i.e., a vector v∈ℒ(B) with ||v||=λ_1((B)). In the γ-approximate SVP_γ, for γ≥ 1, the goal is to find a shortest nonzero lattice vector v∈ℒ(B) \{0} of norm at most ||v|| ≤γ. λ_1(ℒ(B)).
For a lattice ℒ with basis B and a target vector t∈ℝ^n, the Closest Vector Problem (CVP) asks to find a vector v∈ℒ(B) such that || v-t || is minimized. In the γ-approximate CVP_γ, for γ≥ 1, the goal is to find a lattice vector v∈ℒ such that || v-t || ≤γ . dist(t,ℒ(B)) where
dist(t,ℒ(B))=inf{|| w-t || : w∈ℒ(B) }.
One can show that SVP and CVP and their γ-approximate versions are NP-hard problems, see <cit.> for a survey on this subject.
Today, the approximation problems for lattice cryptography are more important. Hence, we first pose the Approximate Shortest Vector Problem (SVP_ γ).
For a n-dimensional lattice =(B), find a nonzero vector v ∈ such that ||v|| ≤γ(n) λ_1().
For several cryptosystems, there exist security proofs assuming the hardness of particular lattice problems. However, these proofs are typically not based on the search version of SVP_γ; instead, they rely on the decision version of approximate SVP, defined next.
For a basis B with an n-dimensional lattice
= (B) such that either λ_1() ≤ 1 or λ_1()
> γ(n), find out which is the case.
Now we define Approximate Shortest Independent Vectors Problem (SIVP_γ).
Let = (B) be a full-rank n-dimensional lattice, where B is its basis. Then output a set S={ s_i }⊆ of n linearly independent lattice elements such that for all i, ||s_i|| ≤γ(n) λ_n().
Given a lattice and a target point x, the Closest Vector Problem (CVP) asks for the lattice point closest to x.
§.§ Learning With Errors (LWE)
A fundamental problem in lattice-based cryptography is the Learning with
Errors problem (LWE).
The seminal work of Regev <cit.> establishes reductions from standard
problems such as SVP in general lattices to LWE, suggesting that
LWE is indeed a difficult problem to solve. In particular, the ability to solve LWE
in dimension n implies an efficient algorithm to find somewhat short vectors in
any n-dimensional lattice. LWE is parameterized by positive integers n and q and an error distribution χ over ℤ, which is usually taken to be a discrete Gaussian of width αq for some error rate α∈ (0,1).
For a vector s∈ℤ_q^n called the secret, the LWE distribution A_s,χ over ℤ_q^n ×ℤ_q is sampled by choosing a∈ℤ_q^n uniformly at random, choosing e ←χ, and outputting (a,b=<s,a>+e mod q), where <s,a> denotes the inner product of the vectors s and a.
Search and decision are two main types of the LWE problem. The first case is to find the secret given LWE samples, and the second case is to distinguish between LWE samples and uniformly random ones.
(Search-LWE_n,q,χ,m).
Suppose m independent samples (a_i,b_i)∈ℤ_q^n ×ℤ_q are drawn by using A_s,χ for a uniformly random s∈ℤ_q^n (fixed for all samples), find s.
(Decision -LWE_n,q,χ,m).
Given m independent samples (a_i,b_i) ∈ℤ_q^n ×ℤ_q where every sample is distributed according to either A_s,χ for a uniformly random s∈ℤ_q^n (fixed for all samples), or the uniform distribution, distinguish which is the case (with non-negligible advantage).
The concrete and asymptotic hardness of the LWE problem has recently been surveyed in <cit.>. In particular, the worst-case approximate SVP can be reduced to LWE, so LWE is at least as hard as approximate SVP in the worst case.
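To make the sampling process concrete, the following Python sketch generates m LWE samples for a toy parameter set. It is illustrative only: a rounded continuous Gaussian stands in for the discrete error distribution χ, and all names and parameter values below are our own choices rather than part of any standard.

import numpy as np

def lwe_samples(n=64, q=3329, m=128, alpha=0.01, seed=None):
    """Generate m LWE samples (A, b = A s + e mod q) for one random secret s."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=n)                      # uniform secret in Z_q^n
    A = rng.integers(0, q, size=(m, n))                 # uniform sample matrix
    e = np.rint(rng.normal(0.0, alpha * q, size=m)).astype(np.int64)   # small error
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples()
# Search-LWE: recover s from (A, b).
# Decision-LWE: distinguish (A, b) from (A, u) with u uniform in Z_q^m.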
§.§ Ring-LWE
The Ring Learning With Errors problem (Ring-LWE) is a variant of the Learning With Errors problem (LWE) that was introduced by Lyubashevsky et al. in <cit.> to achieve faster and more efficient cryptographic schemes. Like LWE, the security of Ring-LWE is based on the hardness of solving certain lattice problems in the worst case, even with quantum computers. The specific lattice problem that underlies Ring-LWE is the approximate Shortest Vector Problem (SVP) on ideal lattices, which are special types of lattices that have an algebraic structure related to the ring of integers of a number field.
Let K be a number field, 𝒪_K its ring of integers, and q ≥ 2 a rational integer. The search variant of Ring-LWE with parameters K and q consists in recovering a secret s ∈𝒪_K^∨/q𝒪_K^∨ from arbitrarily many samples (a_i,a_i.s+e_i), where
𝒪_K^∨={x ∈ K | Tr_K/ℚ(xy) ∈ℤ, ∀ y ∈𝒪_K}
denotes the dual of 𝒪_K (by Tr_K/ℚ(a) we mean the trace of a ∈ K over ℚ), each a_i is uniformly sampled in 𝒪_K/q 𝒪_K and each e_i is a small random element of K_ℝ:=K ⊗_ℚℝ. The decision variant of Ring-LWE consists in distinguishing arbitrarily many such pairs for a common secret s chosen uniformly random in 𝒪_K^∨/q 𝒪_K^∨, from uniform samples in 𝒪_K^∨/q𝒪_K^∨× K_ℝ/q 𝒪_K^∨ <cit.>.
The ring R=𝒪_K is typically taken to be a power-of-two cyclotomic ring, i.e., R=ℤ[ζ_n] where ζ_n is a primitive n^th-root of unity for n=2^k a power of 2. This has led to practical implementation advantages and a simpler interpretation of formally defined Ring-LWE, see <cit.>
for more details on this subject.
Ring-LWE is characterized by three items: a ring R of degree n over ℤ, a positive integer modulus q defining the quotient ring R_q:=R/qR, and an error distribution χ over R. An “error rate" α < 1 relative to q appears if R is a cyclotomic ring, and χ is a kind of discretized Gaussian in the canonical embedding of R.
(Ring-LWE distribution).
For an s ∈ R_q called the secret, the Ring-LWE distribution A_s,χ over R_q × R_q is sampled by choosing a ∈ R_q uniformly at random, choosing e ←χ, and outputting (a,b=s.a+e mod q).
(The Ring-Learning With Errors Problem, decisional version). Let R denote the ring ℤ[X]/<X^n+1> for n a power of 2, and R_q be the residue ring R/qR. The decisional Ring-Learning With Errors problem, DRLWE_m,q,χ, is as follows: for a uniform random secret s ←𝒰(R_q), and given m samples either all of the form (a,b=s.a+e mod q) where the coefficients of e are independently sampled from the distribution χ, or from the unifrom distribution (a,b) ←𝒰(R_q × R_q), distinguish which is the case (with non-negligible advantage).
§.§ Short Integer Solution (SIS) Problem
The short integer solution (SIS) problem, starting with Ajtai's seminal work <cit.>, has served as the foundation for collision-resistant hash functions, identification schemes, and digital
signatures.
Given many uniformly random elements of a certain large finite additive
group, the SIS problem asks to find a sufficiently “short” nontrivial integer combination of them that sums to zero. Indeed, SIS is parameterized by positive integers n and q defining the group ℤ_q^n, a positive real β, and a number m of group elements.
(SIS problem).
For a matrix A ∈ℤ_q^n × m whose columns are m uniformly random vectors a_i ∈ℤ_q^n,
find a nonzero integer vector z ∈ℤ^m of norm ||z|| ≤β such that Az=∑_i a_i.z_i=0 ∈ℤ_q^n.
In a different way, we can see
the SIS problem as an “average-case" short-vector problem on a certain family of so-called
“q-ary" m-dimensional integer lattices, namely, the lattices
ℒ^(A)={z ∈ℤ^m : Az=0 ∈ℤ_q^n }⊇ q ℤ^m.
One can also consider an inhomogeneous version of the SIS problem, which is to find a short integer
solution to
Ax=u ∈ℤ_q^n, where A and u are uniformly random and independent. Notice that, disregarding
the norm constraint, the set of all solutions is the lattice coset
ℒ_u^(A):= c+ ℒ^(A), where c ∈ℤ^m is an
arbitrary (not necessarily short) solution. It is not hard to show that the homogeneous and inhomogeneous
problems are essentially equivalent for typical parameters.
§.§ Ideal Lattices and Hardness of Ring-SIS
It is proved that R-SIS and its associated cryptographic functions are as hard as certain lattice
problems in the worst case, similarly to SIS. However, the underlying lattice problems are specialized to
algebraically structured lattices, called ideal lattices, arising from the ring R. Moreover, the algebraic and geometric properties of R play a major role in determining what kinds of security properties R-SIS can be expected to have, and consequently in the quantitative strength of the underlying worst-case guarantee.
The coefficient embedding is the map that associates with each z ∈ℤ[X]/<f(X)> the n-dimensional integer vector of coefficients of its canonical representative in ℤ[X].
An ideal of a commutative ring R is an additive subgroup I ⊆ R that is also closed under multiplication by elements of R. This multiplicative closure means that ideal lattices have geometric symmetries that general lattices do not have.
For example, under the coefficient embedding of ℤ[X]/<X^n-1> an ideal corresponds to a cyclic lattice in ℤ^n.
(Ideal lattices).
An ideal lattice is simply a lattice corresponding to an ideal in R under some fixed choice of
geometric embedding such as the coefficient or canonical embedding.
For the ring R=ℤ[X]/<X^n-1> and other appropriate parameters, Micciancio proved that
the function Az in definition 2.18 is one-way. The concurrent and independent works of Peikert and Rosen <cit.> and Lyubashevsky
and Micciancio <cit.>, published in 2006, showed that the function Az over
R=ℤ[X]/<X^n-1>
turns out not to be collision resistant. The same works in <cit.> showed that over appropriate integral
domains R, the function Az is indeed collision resistant.
(Tighter approximation factors from number fields).
Subsequent work by Peikert and Rosen <cit.>
generalized the above results, demonstrating that R-SIS is at least as hard as worst-case SVP
on ideal lattices
in R, where R= O_K is the ring of algebraic integers in any number field K . Notice that the approximation
factor for the underlying SVP
problem can be as small as
γ=O(√(log n)) in certain families of number
fields. This work revealed how the discriminant of the number field (essentially, the determinant of R under the canonical embedding) controls the worst-case approximation factors.
§.§ Discrete Gaussians
Many modern works on lattices in complexity and cryptography rely on Gaussian-like probability distributions
over lattices, called discrete Gaussians. Here we recall the relevant definitions.
For any positive integer n and real s>0, which is taken to be s=1 when omitted, define the
Gaussian function ρ_s:ℝ^n →ℝ^+ of parameter (or width) s as
ρ_s(𝐱):=exp(-π ||𝐱||^2/s^2)=ρ(𝐱/s).
Note that ρ_s is invariant under rotations of ℝ^n, and that ρ_s(𝐱)=∏_i=1^n ρ_s(x_i) for any vector 𝐱=(x_1,x_2,…,x_n) ∈ℝ^n.
The (continuous) Gaussian distribution D_s of parameter s over ℝ^n is defined to have probability density
function proportional to ρ_s, i.e.,
f(𝐱):=ρ_s(𝐱)/∫_ℝ^nρ_s(𝐳)d𝐳=ρ_s(𝐱)/s^n.
For a lattice ℒ and a positive real s >0, the discrete Gaussian distribution D_ℒ,s over ℒ with parameter s
is the probability distribution having support ℒ that assigns a probability proportional to ρ_s(𝐱)
to each 𝐱∈ℒ. For ℒ=ℤ^n, it is easy to see (by orthonormality of its standard basis) that the discrete
Gaussian D_ℤ^n,s is simply the product distribution of n independent copies of D_ℤ,s. There are efficient
algorithms for sampling from a distribution within negligible statistical distance of D_ℤ,s, given any s>0, see e.g. <cit.>; for arbitrary s there is a rejection sampling algorithm, and for small s one can compute
a close approximation to the cumulative distribution function. See <cit.> for more details.
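As a concrete illustration of the rejection sampling approach mentioned above, the sketch below draws samples from a distribution statistically close to D_ℤ,s. The uniform proposal and the tail-cut parameter are our own illustrative choices; this is not an optimized or constant-time sampler.

import math, random

def sample_discrete_gaussian(s, tail=10):
    """Rejection sampler for the discrete Gaussian D_{Z,s}.

    A proposal x is drawn uniformly from [-tail*s, tail*s] and accepted with
    probability rho_s(x) = exp(-pi x^2 / s^2), which is proportional to the
    target weight on the integers."""
    bound = int(math.ceil(tail * s))
    while True:
        x = random.randint(-bound, bound)
        if random.random() <= math.exp(-math.pi * x * x / (s * s)):
            return x

samples = [sample_discrete_gaussian(3.2) for _ in range(1000)]
print(sum(samples) / len(samples))      # empirical mean, close to 0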
§.§ Polynomials and NTT
In general, various algorithms exist for polynomial
multiplication, like Nussbaumer, Karatsuba, or schoolbook
multiplication; but past research shows that the Number Theoretic Transform (NTT) is a
very suitable way to implement polynomial multiplication on various platforms, especially for large
dimensions n; the implicit usage of the NTT allows for memory efficient in place computation, and no big temporary data structures are required. Here we just describe
the basic definition and refer to <cit.> for details on the efficient implementation of the NTT. Also, basic mechanisms to protect the NTT and the arithmetic of lattice-based schemes can be
found in <cit.>.
The main mathematical objects that are manipulated in our digital signature scheme are polynomials in R_q:=ℤ_q[X]/<X^n+1>. For a polynomial f ∈ R_q, where f:=∑_i=0^n-1 f_i X^i, we denote by f_i the i-th coefficient of f for all i ∈{0,1,…, n-1 }. Addition or subtraction of
polynomials in R_q (denoted as + or -, respectively) is the usual coefficient-wise addition or subtraction, such that for f=∑_i=0^n-1 f_i X^i ∈ R_q and g=∑_i=0^n-1 g_i X^i ∈ R_q we get f± g=∑_i=0^n-1( f_i ± g_i mod q ) X^i.
With the NTT, a polynomial multiplication for elements in R_q can be performed
by computing h=NTT^-1( NTT(f) o NTT(g) ) for f,g,h ∈ R_q. The o operator denotes coefficient-wise
multiplication of two polynomials f,g ∈ R_q such that fog=∑_i=0^n-1(f_i.g_i mod q ) X^i. The NTT defined in R_q can be implemented very efficiently if n is a power of two and q is a prime for which it holds that q ≡ 1 (mod 2n). This way a primitive n-th root of unity ω and its square root γ=√(ω) mod q exist. By multiplying
coefficient-wise by powers of γ before the NTT computation and after the reverse transformation by powers
of γ^-1 mod q, no zero padding is required and an n-point NTT can be used to transform a polynomial with
n coefficients.
For a polynomial g=∑_i=0^n-1 g_i X^i ∈ R_q we define
NTT(g):=ĝ=∑_i=0^n-1ĝ_i X^i,
where
ĝ_i=∑_j=0^n-1γ^j g_j ω^ij mod q,
where ω is an n-th primitive root of unity and γ=√(ω) mod q. The computation of NTT^-1 is essentially the same as the computation of NTT, except that it uses ω^-1 mod q, multiplies by powers of γ^-1 mod q after the summation, and also multiplies each coefficient by the scalar n^-1 mod q so that
NTT^-1(ĝ)=g=∑_i=0^n-1 g_i X^i,
where
g_i=( n^-1γ^-i∑_j=0^n-1ĝ_j ω^-ij) mod q.
Note that we define the x mod q operation for integers x,q to always produce an output in the range [0,q-1].
Unless otherwise stated, when we access an element f_i of a polynomial f ∈ R_q we always assume that f_i is
reduced modulo q and in the range [0,q-1].
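The following Python sketch implements the forward and inverse transforms defined above directly, in O(n^2) time and without the butterfly optimizations used in real implementations, and uses them to multiply two elements of R_q. The root-finding helper and the toy parameters n = 8, q = 12289 are our own illustrative choices.

import random

def find_root_of_unity(order, q):
    """Return an element of exact multiplicative order `order` in Z_q*.

    Assumes q is prime and order is a power of two dividing q - 1, so it
    suffices to rule out the order order/2."""
    assert (q - 1) % order == 0
    while True:
        cand = pow(random.randrange(2, q), (q - 1) // order, q)
        if pow(cand, order // 2, q) != 1:
            return cand

def ntt(g, q, omega, gamma):
    """Forward transform: g_hat_i = sum_j gamma^j g_j omega^(ij) mod q."""
    n = len(g)
    return [sum(pow(gamma, j, q) * g[j] * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(ghat, q, omega, gamma):
    """Inverse transform: g_i = n^(-1) gamma^(-i) sum_j g_hat_j omega^(-ij) mod q."""
    n = len(ghat)
    n_inv = pow(n, q - 2, q)              # inverses via Fermat's little theorem (q prime)
    omega_inv = pow(omega, q - 2, q)
    gamma_inv = pow(gamma, q - 2, q)
    return [n_inv * pow(gamma_inv, i, q)
            * sum(ghat[j] * pow(omega_inv, i * j, q) for j in range(n)) % q
            for i in range(n)]

def poly_mul(f, g, q, omega, gamma):
    """Multiply f and g in Z_q[X]/(X^n + 1) via coefficient-wise products in the NTT domain."""
    fh, gh = ntt(f, q, omega, gamma), ntt(g, q, omega, gamma)
    return intt([a * b % q for a, b in zip(fh, gh)], q, omega, gamma)

n, q = 8, 12289                        # toy dimension; q - 1 is divisible by 2n
gamma = find_root_of_unity(2 * n, q)   # primitive 2n-th root of unity
omega = pow(gamma, 2, q)               # primitive n-th root of unity
f = [0, 1] + [0] * (n - 2)             # f = X
g = [0] * (n - 1) + [1]                # g = X^(n-1)
assert poly_mul(f, g, q, omega, gamma) == [q - 1] + [0] * (n - 1)   # X^n = -1 in R_q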
§.§ Module-LWE
The Module Learning With Errors problem (Module-LWE) is a generalization of the Ring Learning With Errors problem (Ring-LWE) that allows for more flexibility and efficiency in lattice-based cryptography <cit.>. Module-LWE is based on the hardness of finding small errors in linear equations over modules, which are collections of ring elements with a common structure. Module-LWE can be seen as a way of interpolating between LWE and Ring-LWE, where the module rank (the number of ring elements in each module) determines the level of security and performance. In particular, Ring-LWE is a special case of Module-LWE with module rank 1 <cit.>.
In order to give the formal definition of the Module-LWE, we have to fix some notations. Similar to the Ring-LWE, let K be a number field of degree n, R be the ring of integers of K, R^∨ be the dual of R, as defined in (<ref>), and q ≥ 2 be integer. Also let K_ℝ:=K ⊗_ℚℝ, 𝕋_R:=K_ℝ/R^∨, R_q:=R/(qR) and R_q^∨:=R^∨/(qR^∨).
(Module-LWE distribution, statement from <cit.>) Let M:=R^d. For s∈ (R_q^∨)^d and an error distribution ψ over K_ℝ, we sample the module learning with error distribution A_d,q,s,ψ^M over (R_q)^d ×𝕋_R^∨ by outputting (a, 1/q <a,s>+e mod R^∨) where a ← U((R_q)^d) and e ←ψ.
(Decision/Search Module-LWE problem, statement from <cit.>) Let Ψ be a family of distributions over K_ℝ and D be distribution over R_q^∨. For
M=R^d, the decision module learning with errors problem Module-LWE_m,q,Ψ^(M) entails distinguishing m samples of U((R_q)^d) ×𝕋_R^∨ from A_q,s,ψ^(M) where s← D^d and ψ is an arbitrary distribution in Ψ.
Let [K:ℚ]=r_1+2r_2, where r_1 and r_2 denote the number of real and complex (non-real) embedding of K, respectively. One can show that
K_ℝ≃ H:={x∈ℝ^r_1×ℂ^2r_2 : x_i=x_i+r_2 , i=r_1+1, …, r_1+r_2},
where x_i+r_2 denotes the complex conjugate of x_i+r_2. Hence the distribution over K_ℝ are sampled by choosing an element of the space H according to the distribution and mapping back to K_ℝ via the isomorphism (<ref>), see <cit.>.
Module-LWE comes with hardness guarantees given by lattice problems based on a certain class of lattices, called module lattices.
In this case, an efficient algorithm for Module-LWE (and thus for Ring-LWE) would permit solving the approximate SVP on module lattices in the worst case for polynomial approximation factors <cit.>.
Module-LWE is a promising alternative to Ring-LWE that can resist potential attacks that exploit the algebraic structure of the rings <cit.>. Therefore, Module-LWE may provide a higher level of security than Ring-LWE, while still being more efficient than plain LWE. Moreover, Module-LWE allows for more fine-grained control over the security level by adjusting the module rank, which is not possible with efficient Ring-LWE schemes. Furthermore, Module-LWE implementations can be easily adapted to different security levels by reusing the same code and parameters <cit.>.
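To illustrate the shape of a Module-LWE sample, the sketch below forms (A, t = A s + e) with a rank-k module over R_q = ℤ_q[X]/(X^n+1), which is the integer form used in practical schemes rather than the dual/torus formulation of the formal definition above. Schoolbook negacyclic arithmetic is used for readability (an NTT would be used in practice), the small coefficients are drawn from a centered binomial distribution, and all parameter values are illustrative.

import random

def polymul_negacyclic(a, b, q):
    """Schoolbook product of a and b in Z_q[X]/(X^n + 1)."""
    n, res = len(a), [0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:
                res[i + j - n] = (res[i + j - n] - ai * bj) % q   # X^n = -1
    return res

def sample_binomial(n, eta):
    """Centered binomial psi_eta: difference of two sums of eta fair bits."""
    return [sum(random.getrandbits(1) for _ in range(eta)) -
            sum(random.getrandbits(1) for _ in range(eta)) for _ in range(n)]

def mlwe_sample(n=64, k=2, q=12289, eta=16):
    """One Module-LWE sample (A, t = A s + e) over R_q^k; k = 1 recovers Ring-LWE."""
    A = [[[random.randrange(q) for _ in range(n)] for _ in range(k)] for _ in range(k)]
    s = [sample_binomial(n, eta) for _ in range(k)]
    e = [sample_binomial(n, eta) for _ in range(k)]
    t = []
    for i in range(k):
        acc = [c % q for c in e[i]]
        for j in range(k):
            acc = [(x + y) % q for x, y in zip(acc, polymul_negacyclic(A[i][j], s[j], q))]
        t.append(acc)
    return A, t, s

A, t, s = mlwe_sample()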
§.§ SIS, Ring-SIS, and Module-SIS problems
The Short Integer Solution (SIS) problem, introduced by Ajtai <cit.>, serves as one of the foundations of numerous lattice-based cryptographic protocols. The SIS problem is to find a short nonzero solution z∈ℤ^m, with 0 < ||z || ≤β, to the homogeneous linear system Az≡0 mod q for uniformly random A∈ℤ_q^n × m, where m,n,q denote positive integers and β∈ℝ. Inspired by the efficient NTRU encryption scheme <cit.>, Micciancio <cit.> initiated an approach that consists of changing the SIS problem to variants involving structured matrices. This approach was later replaced by a more powerful variant referred to as the Ring Short Integer Solution (Ring-SIS) problem <cit.>. There are several reductions from some hard lattice problems to SIS and Ring-SIS, see <cit.>.
The Module Short Integer Solution (Module-SIS) problem is a generalization of the Ring Short Integer Solution (Ring-SIS) problem that involves finding short vectors in module lattices, which are lattices with a module structure.
By viewing the rings as modules of rank 1, we can extend the definition of Ring-SIS to Module-SIS and study its hardness and applications.
Let K be a number field with ring of integers R, and R_q:=R/qR where q ≥ 2 is an integer. For positive numbers m,d ∈ℤ and β∈ℝ, the problem Module-SIS_q,n,m,β is to find z_1,…,z_m ∈ R such that 0< ||(z_1,…,z_m) || ≤β and ∑_i=1^ma_i.z_i ≡0 mod q, where a_1,…, a_m ∈ R_q^d are chosen independently from the uniform distribution.
Note that N=nd denotes the dimension of the corresponding module lattice, and complexity statements are given for N growing to infinity, where n denotes the extension degree of K over ℚ <cit.>. For the worst-case to average-case reduction from computationally hard lattice problems to Module-SIS problems, see <cit.>.
§ OUR CONTRIBUTION
The worst-case to average-case reductions for the module problems
are (qualitatively) sharp, in the sense that there exist converse reductions. This property is
not known to hold in the context of Ring-SIS/Ring-LWE <cit.>. In addition, using these module problems, one could design cryptographic algorithms whose key sizes are notably smaller than similar ring-based schemes, see e.g. <cit.>.
In this section, we present an improved version of the digital signature proposed in <cit.> based on Module-LWE and Module-SIS problems. Similar to the Sharafi-Daghigh scheme <cit.>, the structure of our digital signature algorithm is based on the “hash-and-sign” approach and “Fiat-Shamir paradigm” <cit.>. Our contribution to the proposed signature scheme can be summarized as follows:
* Using Module-LWE and Module-SIS problems, instead of Ring-LWE and Ring-SIS respectively, would increase the security levels at the cost of slightly increasing the key sizes, see Section <ref>.
* Inspired by some celebrated lattice-based algorithms <cit.>, we generate the public key A from a 256-bit seed ζ used as the input of a hash function G. Based on this standard technique, instead of n log q bits as in <cit.>, one needs only 256 bits to store the public key 𝐀.
* The encode and decode functions (for encoding a bit string into a polynomial and decoding it back) used in <cit.> are the straightforward methods introduced in the Lindner-Peikert scheme <cit.>. In this paper, as a modification, we use the NHSEncode and NHSDecode functions given in NewHope-Simple <cit.>. This method decreases the decoding failure rate significantly; see Sections <ref> and <ref>.
* Following Crystals-Kyber <cit.> and NewHope <cit.>, we use the centered binomial distribution, instead of the Gaussian one as used in <cit.>, to sample noise and secret vectors in the proposed signature scheme. This gives a more efficient implementation than <cit.> and increases the security against side-channel attacks.
§ PROPOSED DIGITAL SIGNATURE
In this section, we present a Module-LWE based version of the Sharafi-Daghigh digital signature <cit.>. It must be noted that we only provide a module version of the signature scheme; its efficient implementation would be an interesting challenge for future work. We therefore describe each step by pseudocode; however, one can apply the corresponding auxiliary functions of any Module-LWE based algorithm, e.g. the Crystals schemes <cit.>, to obtain a test implementation.
§.§ Encoding and Decoding functions
In <cit.> in order to encode a bit array ν=(ν_0,…,ν_n-1) ∈{0,1}^n into a polynomial v∈ R_q and its inverse, the authors use the following methods as proposed by Lindner and Peikert <cit.>:
v=Encode(ν_0,…,ν_n-1)=∑_i=0^n-1ν_i.⌊q/2⌋ . X^i,
μ=(μ_0,…,μ_n-1)=Decode(∑_i=0^n-1 v_i . X^i), where μ_i={[ 1 if v_i ∈[- ⌊q/4⌋,⌊q/4⌋); 0 otherwise.; ].
As mentioned before, one of our contributions is to use different encoding-decoding functions leading to a much lower error rate. We use the NHSEncode and NHSDecode functions introduced in NewHope-Simple <cit.>. Moreover, sampling secret and error polynomials from a binomial distribution enables us to apply the same analysis as NewHope <cit.> to obtain an error rate bounded by 2^-60. In NHSEncode, each bit of ν∈{0,1}^256 is encoded into four coefficients. The decoding function NHSDecode maps from four coefficients back
to the original key bit;
Take four coefficients (each in the range {0,…,q-1}), subtract
⌊q/2⌋ from each of them, accumulate their absolute values and set the key bit to 0 if the sum is larger
than q or to 1 otherwise. See Algorithms 1 and 2 in NewHope-Simple <cit.> for more details.
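A direct transcription of this description into Python reads as follows. It is a sketch of the NewHope-Simple encoder and decoder rather than a bit-exact reimplementation, and the toy message and noise values are only meant to show that small per-coefficient errors are absorbed by the decoder.

Q = 12289

def nhs_encode(bits, q=Q):
    """Encode each bit into four coefficients equal to bit * floor(q/2)."""
    half = q // 2
    out = []
    for b in bits:
        out.extend([b * half] * 4)
    return out

def nhs_decode(coeffs, q=Q):
    """Map each chunk of four coefficients back to one bit: subtract floor(q/2),
    accumulate the absolute values, and output 1 unless the sum exceeds q."""
    half = q // 2
    bits = []
    for i in range(0, len(coeffs), 4):
        t = sum(abs(c - half) for c in coeffs[i:i + 4])
        bits.append(0 if t > q else 1)
    return bits

noise = [3, -5, 7, 2, -1, 4, 0, 6, 2, -3, 1, 5, -2, 0, 3, 1]
msg = [1, 0, 1, 1]
noisy = [(c + d) % Q for c, d in zip(nhs_encode(msg), noise)]
assert nhs_decode(noisy) == msg          # small errors are absorbed by the decoder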
§.§ Key Generation
In order to decrease the size of the public key in comparison to <cit.>, one can use a seed (as the input parameter of the informal function GenA) to generate a uniformly random public matrix Â∈ R_q^k × k in the NTT domain. To accomplish this, for instance, the key generation algorithms of the Crystals schemes <cit.> could be applied. Likewise, the secret and error vectors s, e∈ R_q^k could be sampled from a seed. Moreover, in order to ease implementation, s and e are drawn from the centered binomial distribution with parameter η instead of the Gaussian one. Following Crystals-Dilithium <cit.>, we use the notation S_η to denote the subset of R_q consisting of all polynomials whose coefficients have size at most η. Also, we denote the corresponding informal function by GenSE. The pseudocode for the key generation is described in Algorithm <ref>.
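The following sketch mirrors the key generation just described: the public matrix is expanded from a 256-bit seed and the secret and error vectors are drawn from the centered binomial distribution. SHAKE-128 is used here only as an illustrative stand-in for GenA, schoolbook ring arithmetic replaces the NTT, and none of the concrete choices below are prescribed by the scheme.

import hashlib, os, random

N, K, Q, ETA = 256, 2, 12289, 16

def gen_a(seed, n=N, k=K, q=Q):
    """Expand a 32-byte seed into a roughly uniform k x k matrix over R_q.

    The simple 'two bytes mod q' reduction is slightly biased; a real
    implementation would use rejection sampling."""
    A = []
    for i in range(k):
        row = []
        for j in range(k):
            stream = hashlib.shake_128(seed + bytes([i, j])).digest(2 * n)
            row.append([int.from_bytes(stream[2 * t:2 * t + 2], "little") % q
                        for t in range(n)])
        A.append(row)
    return A

def gen_se(n=N, eta=ETA):
    """Sample one polynomial with coefficients from the centered binomial psi_eta."""
    return [sum(random.getrandbits(1) for _ in range(eta)) -
            sum(random.getrandbits(1) for _ in range(eta)) for _ in range(n)]

def polymul(a, b, q=Q):
    """Schoolbook product in Z_q[X]/(X^n + 1); an NTT would be used in practice."""
    n, res = len(a), [0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:
                res[i + j - n] = (res[i + j - n] - ai * bj) % q   # X^n = -1
    return res

def keygen():
    """Toy key generation: public key (seed, P = A s + e), secret key s."""
    seed = os.urandom(32)
    A = gen_a(seed)
    s = [gen_se() for _ in range(K)]
    e = [gen_se() for _ in range(K)]
    P = []
    for i in range(K):
        acc = [c % Q for c in e[i]]
        for j in range(K):
            acc = [(x + y) % Q for x, y in zip(acc, polymul(A[i][j], s[j]))]
        P.append(acc)
    return (seed, P), s

pk, sk = keygen()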
§.§ Signing
In order to generate a signature σ assigned to a message M, the signer first uses a collision-resistant hash function, denoted by CRH, and computes the hash of M, say μ. Then using a random 256-bits coin r, as input value of the function GenSE, she generates the vectors e_1,e_2 ∈ R_q^k and e_3,e_4 ∈ R_q from a binomial distribution of parameter η. The signature σ would contain the following four components
z_1 =Ae_1+e_2 ∈ R_q^k,
z_2 =Pe_2+e_4 ∈ R_q,
z_3 = APe_1+e_3+NHSEncode(μ) ∈ R_q,
h =CRH(μ || CRH(NHSDecode(Ase_2))).
(Recall that the functions NHSEncode and NHSDecode are introduced in Section <ref> and s denotes the secret key). The pseudocode for the signing step is described in Algorithm <ref>.
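The sketch below illustrates two ingredients of the signing step: the deterministic expansion of the 256-bit coin r into the noise polynomials e_1, ..., e_4 (here via SHAKE-256, an illustrative stand-in for GenSE), and the hash chain h = CRH(μ || CRH(·)) (here instantiated with SHA3-256). The ring arithmetic producing z_1, z_2 and z_3 is omitted; it is the same matrix-vector arithmetic over R_q as in the key-generation sketch.

import hashlib, os

K, N, ETA = 2, 256, 16

def crh(data: bytes) -> bytes:
    """Collision-resistant hash; SHA3-256 is only an illustrative choice for CRH."""
    return hashlib.sha3_256(data).digest()

def expand_noise(r: bytes, label: int, n: int = N, eta: int = ETA):
    """Deterministically expand the 256-bit coin r into one psi_eta polynomial.

    Each coefficient is the difference of the Hamming weights of two eta-bit
    strings squeezed from SHAKE-256; the byte `label` domain-separates the
    different noise polynomials. This is only one possible instantiation of GenSE."""
    stream = hashlib.shake_256(r + bytes([label])).digest((2 * eta * n + 7) // 8)
    bits = [(stream[t // 8] >> (t % 8)) & 1 for t in range(2 * eta * n)]
    return [sum(bits[2 * eta * i: 2 * eta * i + eta]) -
            sum(bits[2 * eta * i + eta: 2 * eta * (i + 1)]) for i in range(n)]

message = b"example message"
mu = crh(message)                                  # mu = CRH(M)
r = os.urandom(32)                                 # fresh 256-bit coin
e1 = [expand_noise(r, i) for i in range(K)]        # e_1 in R_q^k
e2 = [expand_noise(r, K + i) for i in range(K)]    # e_2 in R_q^k
e3 = expand_noise(r, 2 * K)                        # e_3 in R_q
e4 = expand_noise(r, 2 * K + 1)                    # e_4 in R_q

# z_1, z_2, z_3 are then formed as in the displayed equations using ring arithmetic.
decoded = bytes(32)                                # placeholder for NHSDecode(A s e_2)
h = crh(mu + crh(decoded))                         # h = CRH(mu || CRH(NHSDecode(A s e_2)))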
§.§ Verification
As described in Algorithm <ref>, using the public key P, the verifier accepts σ=(z_1,z_2,z_3,h) as a valid signature of the message M if and only if both of the following conditions hold:
(1)
NHSDecode(z_2+z_3-Pz_1)=CRH(M),
(2)
h=CRH(CRH(M) || CRH(NHSDecode(z_2)).
Note that the public key (A,P) has size 256+knlog q bits or equivalently its size would be 32+knlog q/8 bytes. Likewise, the secret key s∈ R_q^k has size knlog q/8 bytes, and the size of the signature σ is (k+2)nlog q/8+32 bytes. The corresponding sizes for each parameter set are given in Table <ref>.
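These size formulas can be checked mechanically; the small sketch below evaluates them for the parameters used later (n = 256, k = 2, q = 12289), assuming every coefficient is packed in ⌈log_2 q⌉ = 14 bits and h is a 32-byte hash value. Actual tabulated sizes may differ slightly if a different packing or compression is applied.

import math

def sizes_in_bytes(n, k, q):
    """Evaluate the size formulas stated above with ceil(log2 q)-bit coefficients."""
    bits = math.ceil(math.log2(q))
    pk = 32 + k * n * bits // 8           # 32-byte seed for A plus P in R_q^k
    sk = k * n * bits // 8                # s in R_q^k
    sig = (k + 2) * n * bits // 8 + 32    # z_1 in R_q^k, z_2 and z_3 in R_q, plus h
    return pk, sk, sig

print(sizes_in_bytes(n=256, k=2, q=12289))    # (928, 896, 1824) at 14 bits per coefficient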
§.§ Decoding failure probability
Similar to the computations in <cit.>, we have
z_2+z_3-Pz_1 =Pe_2+e_4+APe_1+e_3+NHSEncode(μ)-P(Ae_1+e_2)
=NHSEncode(μ)+(e_3+e_4),
z_2 =(As+e)e_2+e_4=Ase_2+(ee_2+e_4).
Hence, applying NHSDecode to the above terms, the verifier obtains respectively μ and Ase_2, as long as the error terms e_3+e_4 and ee_2+e_4 are sufficiently small. Following NewHope <cit.>, all the polynomials e,e_1,e_2,e_3,e_4 are drawn from the centered binomial distribution ψ_η of parameter η=16. Since the decoding function NHSDecode maps from four coefficients to one bit, the rounding error for every 4-dim chunk has size at most q/4+4. As mentioned in NewHope-Simple <cit.>, one can use the same analysis as in NewHope-USENIX <cit.> to conclude that the total failure rate in the encoding-decoding process is at most 2^-60. In Section <ref>, we will choose the parameter sets for the proposed scheme to achieve this failure probability.
(Note that the upper bound of the failure rate in Sharafi-Daghigh scheme <cit.> is 2^-40).
§ UNFORGEABILITY IN RANDOM ORACLE MODEL
The proposed digital signature scheme is a module-LWE-based version of the Sharafi-Daghigh algorithm <cit.>. Hence, by replacing ring-based hard problems with the corresponding module-based ones, one can transfer the security proof in <cit.> to the module-based setting. Similar to <cit.>, the security of our proposed digital signature in the UF-CMA model (Unforgeability under Chosen Message Attack[This means that an adversary who is given a signature for a few messages of his choice should not be able to produce a valid signature for a new message <cit.>.]) arises from two hard assumptions, namely the decision Module LWE and the combined hardness of Module-SIS with the hash function H. By the decision Module LWE assumption, which is denoted as Adv_k,η^MLWE in Theorem <ref> below, the public key (A,P) (a module-LWE sample) is indistinguishable from the pair (A^',P^') chosen uniformly at random. This guarantees security against key recovery attacks. In order to prove the security against chosen message attacks (UF-CMA), let the adversary get the public key (A,P) and successfully produce a signature σ^'=(z_1^',z_2^',z_3^',h^') of a new message M which is valid according to the Verification Algorithm <ref>.
In particular, the adversary would be able to find z_2^'∈ S_η such that
H(NHSDecode(z_2^'))=H(NHSDecode(Ase_2)).
Therefore to create a forged signature of the message M, the adversary will encounter two challenges; he has to either break the hash function H or find a nonzero z ∈ S_η^1 for which A.(z-se_2)=0 mod q, i.e., find a solution of the Module-SIS problem given in Definition <ref> (with m=1 and d=k^2). This challenge is the aforementioned combined hard problem denoted as Adv_k,η^H-MSIS (Note that as in Crystals-Dilithium, the infinity norm, rather than the Euclidean norm, has been considered to avoid the trivial solution (q,0,…,0)^T for the underlying Module-SIS problem). More formally, following Crystals-dilithium <cit.>, the UF-CMA security of the proposed signature scheme in the Random Oracle Model (ROM) can be stated as follows.
Assume that H:{0,1}^* → R_q is a cryptographic hash function modeled as a random oracle. If there exists an adversary 𝒜 (who has classical access to H) that can break the UF-CMA security of the proposed signature, then there exist also adversaries ℬ and 𝒞 such that
Adv^UF-CMA(𝒜) ≤Adv_k,η^MLWE(ℬ) + Adv_k,η^H-MSIS(𝒞),
where
Adv_k,η^MLWE :=|Pr[b=1| A← R_q^k × k; P← R_q^k; b ←𝒜(A,P)]
-Pr[b=1 | A← R_q^k; s← S_η^1; e← S_η^1; b ←𝒜(A,As+e)]|,
and
Adv_k,η^H-MSIS=
Pr[0<||z||_∞≤η ∧ H(M||Az)=h^' | A← R_q^k × k; (z,h^',M) ←𝒜^|H(.)>(A) ].
Although the above reduction is non-tight, it can still be used to set the parameters; see <cit.>. In addition, we have to consider the security in the quantum random oracle model (QROM). As stated in Crystals-Dilithium, although the above reduction does not transfer to the quantum setting, the assumption Adv_k,η^H-MSIS is tightly equivalent, even under quantum reductions, to the UF-CMA security of the proposed signature. Hence, in the above theorem, one can assume that the adversary 𝒜 has quantum access to the hash function H; see <cit.> for more details.
§ CHOOSING PARAMETERS AND SECURITY ESTIMATION
We use the ring R_q=ℤ_q[x]/(x^n+1) with n a power of two, a favorable choice in many Ring/Module-LWE based schemes. We use the polynomial ring of degree n=256, for every parameter set, as the basic block of the Module-LWE problem. Similar to the NewHope scheme <cit.>, we choose the modulus q=12289 and sample the Module-LWE secret and error terms from the centered binomial distribution ψ_η of parameter η=16. Note that q ≡ 1 (mod 2n), which enables the use of the NTT over the ring R_q.
Also it must be noted that due to replacing the Ring-LWE with Module-LWE, we have decreased the ring dimension 1024 in NewHope <cit.> to 256, yet we use the same modulus and the same sampling distribution as in NewHope to achieve the decoding failure probability 2^-60. In other words, since we have used the encoding-decoding functions introduced in the NewHope-Simple algorithm, using the same modulus q=12289, the same binomial distribution ψ_16 and the ring dimensions 256k^2 (with k=2), ensures us that the upper bound 2^-60, as in NewHope <cit.>, works also for the failure probability in the proposed scheme, see Section <ref>. Moreover, using these parameters, we can use the same security analysis as in NewHope-USENIX <cit.>, and in comparison to Sharafi-Daghigh scheme <cit.>, we obtain much smaller failure probability and higher security levels at the cost of increasing the modulus q, see Table <ref>.
Note that increasing the parameter k will increase the security level but makes the scheme too inefficient. So we have determined only one set of parameters, whereas finding different encoding-decoding functions to achieve an appropriate failure rate with a smaller modulus q, might be an interesting challenge for future works.
§.§ Security Estimation
As mentioned before, the security of the proposed digital signature scheme is based on the hardness of Module-LWE and Module-SIS problems. Any MLWE_k,D instance for some distribution D can be viewed as an LWE instance of dimension nk. Indeed, the underlying MLWE_k,D problem for the proposed digital signature can be rewritten as finding vec(s), vec(e) ∈ℤ^nk from (rot(A), vec(P)) where (A,P) is the public key as in Algorithm <ref>, vec(.) maps a vector of ring elements to the vector obtained by concatenating the coefficients of its coordinates, and rot(A) ∈ℤ_q^nk × nk is obtained by replacing all entries a_ij∈ R_q of A by the n × n matrix whose z-th column is vec(x^z-1.a_ij).
Similarly, the attack against the MSIS_k,η instance can be mapped to a SIS_nk,η instance by considering the matrix rot(A) ∈ℤ^nk × nk
, see <cit.> for more details.
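The construction of rot(·) can be made explicit as follows. The sketch builds the n × n negacyclic multiplication matrix of one ring element, assembles the nk × nk block matrix for a k × k matrix over R_q, and checks it against a schoolbook reference multiplication; all parameter values are illustrative.

import numpy as np

def rot_poly(a, q):
    """n x n integer matrix of multiplication by a in Z_q[X]/(X^n + 1); its
    z-th column holds the coefficient vector of x^z * a (0-based indexing)."""
    n = len(a)
    M = np.zeros((n, n), dtype=np.int64)
    col = np.array(a, dtype=np.int64) % q
    for z in range(n):
        M[:, z] = col
        col = np.roll(col, 1)
        col[0] = (-col[0]) % q                # wrap-around picks up a sign since x^n = -1
    return M

def rot_module(A, q):
    """Expand a k x k matrix over R_q into the nk x nk integer matrix rot(A)."""
    k = len(A)
    return np.block([[rot_poly(A[i][j], q) for j in range(k)] for i in range(k)])

def polymul_ref(a, b, q):
    """Schoolbook reference product in Z_q[X]/(X^n + 1), used only for checking."""
    n, res = len(a), [0] * len(a)
    for i in range(n):
        for j in range(n):
            sign = 1 if i + j < n else -1
            res[(i + j) % n] = (res[(i + j) % n] + sign * a[i] * b[j]) % q
    return res

rng = np.random.default_rng(0)
n, q = 8, 12289
a = rng.integers(0, q, n).tolist()
s = rng.integers(0, q, n).tolist()
# Multiplication by a in R_q agrees with the integer matrix acting on the coefficient vector.
assert (rot_poly(a, q) @ np.array(s) % q).tolist() == polymul_ref(a, s, q)
A2 = [[a, s], [s, a]]                          # toy 2 x 2 matrix over R_q
assert rot_module(A2, q).shape == (2 * n, 2 * n)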
In order to do security estimation, we will not consider BKW-type attacks <cit.> and linearization attacks <cit.>, since based on our parameter set, there are only “(k+1)n” LWE samples available, see <cit.>. Hence the main challenge is security analysis against the lattice attacks. All the best-known lattice attacks are essentially finding a nonzero short vector in the Euclidean lattices, using the Block-Korkine-Zolotarev (BKZ) lattice reduction algorithm <cit.>. The algorithm BKZ proceeds by reducing a lattice basis using an SVP oracle in a smaller dimension b. The strength of BKZ increases with b, however, the cost of solving SVP is exponential in b <cit.>. It is known that the number of calls to that oracle remains polynomial. Following NewHope <cit.>, we ignore this polynomial factor, and rather evaluate only the core SVP hardness, that is the cost of one call to an SVP oracle in dimension b, which is clearly a pessimistic estimation, from the defender's point of view.
There are two well-known BKZ-based attacks, usually referred to as
primal attack and dual attack. As explained in Crystals-Dilithium <cit.>, the primal attack for the proposed scheme is to find a short non-zero vector in the lattice
Λ={x∈ℤ^d : Mx=0 mod q},
where M=(rot(A)_[1:m] | I_m | vec(P)_[1:m]) is an m × d matrix with d=nk+m+1 and m ≤ nk. By increasing the block size b in BKZ, for all possible values m, the primal attack solves the SVP problem in the lattice Λ.
The dual attack consists in finding a short non-zero vector in the lattice
Λ^'={(x,y) ∈ℤ^m ×ℤ^d : M^T x+y=0 mod q},
where M=(rot(A))_[1:m] is an m × d matrix with m ≤ d=nk. Similar to the primal attack, the dual attack solves the SVP problem in the lattice Λ^', by increasing the block size b in BKZ, for all possible values m.
According to the parameters n,k,q,η, we have used the core SVP hardness methodology and the cost estimation of NewHope-USENIX <cit.> against the primal and dual attacks, as presented in Table <ref>.
§ COMPARISON TO SOME RELATED WORKS
Two of the most celebrated lattice-based signature algorithms are Crystals-Dilithium <cit.> and FALCON <cit.>, selected for
NIST [National Institute of Standards and Technology] Post-Quantum
Cryptography Standardization <cit.>. Crystals-Dilithium, one of our main references, is based on the Fiat-Shamir with aborts approach <cit.> whose security is based on Module-LWE and Module-SIS problems. As stated in <cit.>, FALCON [Fast Fourier Lattice-based Compact Signatures over NTRU] is a lattice-based signature scheme utilizing the “hash-and-sign” paradigm. FALCON follows the GPV framework, introduced by Gentry, Peikert, and Vaikuntanathan <cit.> and builds on a sequence of works whose aim is to instantiate the GPV approach efficiently in NTRU lattices <cit.> with a particular focus on the compactness of key sizes. There is also another lattice-based digital signature, namely “qTESLA” <cit.>, among the second-round (but not finalist) candidate algorithms in the NIST post-quantum project. qTESLA is a Ring-LWE-based digital signature in which signing is done using hash functions and Fiat-Shamir with aborts technique.
In addition to the above algorithms in the NIST post-quantum project, for comparison, we have tried to consider those digital signatures, including Sharafi-Daghigh scheme <cit.>, BLISS <cit.>, and GLP <cit.>, that are most relevant to the proposed scheme. The comparison results are presented in Table <ref>.
|
http://arxiv.org/abs/2409.02078v1 | 20240903172617 | Political DEBATE: Efficient Zero-shot and Few-shot Classifiers for Political Text | [
"Michael Burnham",
"Kayla Kahn",
"Ryan Yank Wang",
"Rachel X. Peng"
] | cs.CL | [
"cs.CL"
] |
Political DEBATE: Efficient Zero-shot and Few-shot Classifiers for Political Text
Michael Burnham, Kayla Kahn, Ryan Yank Wang, Rachel X. Peng
===========================================================================================
§ ABSTRACT
Social scientists quickly adopted large language models due to their ability to annotate documents without supervised training, an ability known as zero-shot learning. However, due to their compute demands, cost, and often proprietary nature, these models are often at odds with replication and open science standards. This paper introduces the Political DEBATE (DeBERTa Algorithm for Textual Entailment) language models for zero-shot and few-shot classification of political documents. These models are not only as good as, or better than, state-of-the-art large language models at zero- and few-shot classification, but are orders of magnitude more efficient and completely open source. By training the models on a simple random sample of 10-25 documents, they can outperform supervised classifiers trained on hundreds or thousands of documents and state-of-the-art generative models with complex, engineered prompts. Additionally, we release the PolNLI dataset used to train these models – a corpus of over 200,000 political documents with highly accurate labels across over 800 classification tasks.
§ INTRODUCTION
Text classification is widely used in various applications, such as opinion mining and topic classification <cit.>. In the past, classification was a technical and labor intensive task requiring a significant amount of manual labeling and a strong understanding of machine learning methods. Recently developed large language models (LLMs), like ChatGPT, have all but eliminated this barrier to entry due to their ability to label documents without any additional training, an ability known as zero-shot classification <cit.>. Because of this, it is little wonder that LLMs have received widespread adoption within political and other social sciences.
Yet, despite their convenience, there are strong reasons why researchers should be hesitant to use LLMs for text analysis. The most widely used and performative models are proprietary, closed models. Historical versions of the models are not archived for replication purposes, and the training data is not publicly released. This makes their use at odds with standards of open science. Further, these models have large compute requirements, and charge for their use – labeling datasets of any significant size can be expensive. We echo the sentiments of <cit.>: Researchers should strive to use open sourced models and should provide compelling justification when using closed models.
We aim to narrow this gap between the advantages of closed, state-of-the-art large language models and the best practices of open science. Accordingly, we present two language models named Political DEBATE (DeBERTa Algorithm for Textual Entailment) Large and Political DEBATE Base. The models are trained specifically for zero and few-shot classification of political text. With only 86 million and 304 million parameters <cit.>, the DEBATE models are not only a fraction of the size of proprietary models with tens of billions of parameters, such as Claude 3.5 Sonnet <cit.>, but are as good or better at zero-shot classification of political documents. We further demonstrate that the DEBATE models are few-shot learners without any active learning scheme: A simple random sample of only 10–25 labeled documents is sufficient to teach the models complex labeling tasks when necessary.
We accomplish this in two ways. First, we use domain specific training with tightly controlled data quality. By focusing the model on a specific domain, the model size necessary for high performance is significantly reduced. Second, we adopt the natural language inference (NLI) classification framework. This allows us to train encoder language models (e.g. BERT <cit.>) for zero-shot and few-shot classification. These models are much smaller than the generative language models like GPT-4 <cit.>.
Additionally, we release the PolNLI dataset used to train and benchmark the models. The dataset contains over 200,000 political documents with high quality labels from a wide variety of sources across all sub-fields of political science. Finally, in the interests of open science, we commit to versioning both the models and datasets and maintaining historical versions for replication purposes. We outline the details of both the data and the NLI framework in the following sections.
§ NATURAL LANGUAGE INFERENCE: WHAT AND WHY
Natural language inference (also known as textual entailment) can be thought of as a universal classification framework. A document of interest, known as the "premise," is paired with a user-generated statement, known as the "hypothesis." The hypothesis is analogous to a very simple prompt given to a model like GPT-4 <cit.> or Llama-3 <cit.>. Given a premise and hypothesis pair, an NLI classifier is trained to determine if the hypothesis is true, given the content of the premise. For example, we might pair a tweet from Donald Trump: "It's freezing and snowing in New York – we need global warming!" with the hypothesis "Donald Trump supports global warming". The model would then give a true or false classification for the hypothesis – in this case, true. Because nearly any classification task can be broken down into this structure, a single language model trained for natural language inference can function as a universal classifier and label documents across many dimensions without additional training.
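In practice, this premise-hypothesis setup maps directly onto the zero-shot classification interface of standard NLP libraries. The sketch below uses the Hugging Face transformers pipeline with a publicly available general-purpose NLI checkpoint as a placeholder model; the checkpoint name, candidate labels, and hypothesis template are our illustrative choices, not the models or prompts used in this paper.

from transformers import pipeline

# The checkpoint below is a public NLI model used purely as a placeholder;
# the Political DEBATE checkpoints would be dropped in the same way.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
)

premise = "It's freezing and snowing in New York -- we need global warming!"
result = classifier(
    premise,
    candidate_labels=["supports global warming", "does not support global warming"],
    hypothesis_template="The author of this tweet {}.",
)
print(result["labels"][0], round(result["scores"][0], 3))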
Natural language inference has a number of advantages and disadvantages in comparison to generative LLMs. Perhaps the most significant advantage is that NLI can be done with much smaller language models. While a standard BERT model with 86 million parameters can be trained for NLI, the smallest generative language models capable of accurate zero-shot classification have 7-8 billion parameters <cit.>, and state-of-the-art LLMs have tens to hundreds of billions of parameters <cit.>. In practical terms, this is the difference between a model that can feasibly run on a modern laptop, and one that requires a cluster of high-end GPUs.
The primary tradeoff between NLI classifiers and generative LLMs is between efficiency and flexibility. While an NLI classifier can be much smaller than an LLM, they are not as flexible. LLMs like GPT-4 <cit.> and Llama <cit.> can accept long prompts that detail multiple conditions to be met for a positive classification. In contrast, the hypotheses accepted by an NLI classifier should be short and reduce the task to a relatively simple binary. Many classification tasks are not easily reduced to simple hypothesis statements.
This capability stems from the wide knowledge base about the world that LLMs hold within their weights. Because they are trained on such a massive amount of data, their training distributions contain a wider variety of tasks (e.g. classification, summarizing, programming) and domains (e.g. politics, medicine, history, pop-culture). Such a vast knowledge base requires a much larger model with higher compute demands. Often, much of the knowledge contained in these weights is superfluous to the classification task a researcher may be using them for. Thus, while LLMs have shown impressive capabilities in zero-shot settings <cit.>, they are inherently very inefficient tools for any particular classification task.
While we acknowledge that generative LLMs can play a valuable role in political research, their necessarily large size and usually proprietary nature also pose a challenge for open science standards. Their compute demands can be expensive, proprietary models are not archived for scientific replication purposes, and the lack of transparency regarding model architectures and training datasets complicates efforts to replicate or improve these models <cit.>. As a result, despite their impressive capabilities and ease of use, the use of proprietary LLMs as a classification tool at least merits explicit justification in a scientific setting <cit.>.
Here, we demonstrate that much smaller models can often offer the convenience and performance of generative LLMs by adopting the NLI classification framework and narrowing its domain of expertise from the entire world to the political world. The advantages of our models presented here over LLMs are, first, that they are smaller and thus can be more easily trained or deployed on local or free hardware. Second, they perform comparably to state-of-the-art LLMs on tasks within their domain. Third, they can be easily versioned and archived for reproducibility. And finally, they are truly open source in that the model architecture and all of its training data is publicly available for scrutiny or future development.
§ THE POLNLI DATASET
To train our models, we compiled the PolNLI dataset – a corpus of 201,691 documents and 852 unique entailment hypotheses. We group these hypotheses into four tasks: stance detection (or opinion classification), topic classification, hate-speech and toxicity detection, and event extraction. Table <ref> presents the number of datasets, unique hypotheses, and documents that were collected for each task. PolNLI draws on a wide variety of sources including social media, news articles, congressional newsletters, legislation, crowd-sourced responses, and more. We also adapted several widely used academic datasets such as the Supreme Court Database <cit.> by attaching case summaries to the dataset's topic labels. The vast majority of text included in PolNLI is human generated — only a single dataset containing 1,363 documents is generated by an LLM.
In constructing the PolNLI dataset, we prioritized both the quality of the labels and the diversity of the data sources. We used a five step process to accomplish this:
* Collecting and vetting datasets.
* Cleaning and preparing data.
* Validating labels.
* Hypothesis augmentation.
* Splitting the data.
§.§ Collecting and Vetting Datasets
We identified a total of 48 potential datasets from replication archives, the HuggingFace hub, academic projects, and government documents. A complete list of datasets we used is located in Appendix <ref>. Several of the collected datasets had been compiled by their authors for other classification tasks while others — like the Global Terrorism Database <cit.> and the Supreme Court Database <cit.> — were adapted from general purpose public datasets. We also compiled several new datasets specifically for this project in order to address gaps in the training data. For each dataset, we reviewed the scope of the data, the collection and labeling process, and made a qualitative assessment of the data quality. Datasets for which we determined the quality of the data to be too low or redundant with sources already collected were omitted.
§.§ Cleaning and Preparing Data
To clean the data, we took care to remove any superfluous information from documents that the models might learn to associate with a particular label. This includes aspects like news outlet identifiers in the headings of articles or event records that start each entry with a date. No edits were made to document formatting, capitalization, or punctuation in order to maintain variety in the training data.
For each unique label in the data, we manually created a hypothesis that correlated with that label. For example, documents that were labeled for topic or event were paired with the hypothesis “This text is about (topic/event type)” and documents labeled for stance were paired with the hypothesis “The author of this document supports (stance).” Most hypotheses are framed as descriptive statements about the document, as in the two previous examples.
Finally, each document-hypothesis pair was assigned an entail/not entail label based on the label from the original dataset. For example, a document labeled as an expression of concern over global warming would be paired with the hypothesis “The author of this text believes climate change is a serious concern” and be assigned the “entail” label.[While several other NLI datasets, such as SNLI, have adopted an entail, neutral, contradict labeling scheme, we opted for the simpler entail/not entail because it was a common scheme that all of the collected datasets could be adapted to. Accordingly, neutral and contradiction labels were combined into the “not entail” label.]
One challenge with this approach is that topic and event data only contained positive entailment labels. That is, if an event summary was about a terrorist attack, it was paired with the hypothesis “This document is about a terrorist attack” and the entailment labels for these were initially always true. However, we wanted to train the model to not only recognize what is a terrorist attack, but also what is not a terrorist attack. To accomplish this for datasets and documents that needed negative cases, we duplicated the documents, randomly assigned one of the other topic or event hypotheses, and then assigned a “not entail” label. One concern is that documents can contain multiple topics, and might be assigned a topic they are related to by chance. This concern is addressed through the validation process outlined in the next section.
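A minimal sketch of this duplication step is shown below; the record structure and field names are illustrative rather than the exact PolNLI schema.

import random

def add_negative_cases(records, all_hypotheses, seed=0):
    # `records` holds dicts with 'text' and 'hypothesis' keys, all of which are implicitly entailed.
    rng = random.Random(seed)
    out = []
    for rec in records:
        out.append({**rec, "label": "entail"})
        # Duplicate the document and pair it with a randomly drawn *different* hypothesis.
        other = rng.choice([h for h in all_hypotheses if h != rec["hypothesis"]])
        out.append({"text": rec["text"], "hypothesis": other, "label": "not entail"})
    return out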
§.§ Validating Labels
The original curators of the collected datasets used many approaches to labeling their data with varying levels of rigor. The accuracy of labels is critically important to training and validating models, and thus we wanted to ensure that only high quality labels were retained in our data. To meet this objective, we leveraged the much larger language models, GPT-4 and GPT-4o. Recent research has shown that LLMs are as good, or better, than human coders for similar classification tasks <cit.>. We thus used these proprietary LLMs to reclassify each collected document with a prompt containing an explanation of the task and the entailment hypotheses we generated. A template for the prompt is contained in Appendix <ref>. We then removed documents where the human labelers and the LLM disagreed. To ensure that the LLMs were generating high quality labels, we took a random sample of 400 documents labeled by GPT-4o and manually reviewed the labels again. We agreed with the GPT-4o labels 92.5% of the time, with a Cohen's κ of 0.85. Of the 30 documents where there was disagreement, 16 were judged to be reasonable disagreements where the document could be interpreted either way. The remaining 14, or 3.5% of all documents, were labeled incorrectly by the LLM.
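The agreement check itself amounts to a few lines; the toy call below illustrates the calculation of raw agreement and Cohen's kappa, and is not the exact validation script.

from sklearn.metrics import cohen_kappa_score

def agreement_report(human_labels, llm_labels):
    # Raw percent agreement and Cohen's kappa between the manual review and the LLM labels.
    agree = sum(h == m for h, m in zip(human_labels, llm_labels)) / len(human_labels)
    return agree, cohen_kappa_score(human_labels, llm_labels)

# Toy data only; the actual check used a 400-document random sample of GPT-4o labels.
print(agreement_report(["entail", "not entail", "entail"], ["entail", "not entail", "not entail"]))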
§.§ Hypothesis Augmentation
An ideal NLI classifier will produce identical labels if a document is paired with different, but synonymous, hypotheses (e.g. the hypotheses “This document is about Trump” and “This text discusses Trump” should yield similar classifications). To make our model more robust to the various phrasings researchers might use for hypotheses, we presented each hypothesis to GPT-4o and then asked it to write three synonymous sentences. We then manually reviewed the LLM generated hypotheses and removed any that we felt were not sufficiently similar in meaning. Each document was then randomly assigned an “augmented hypothesis” from a set containing the original hypothesis and the generated alternatives. Finally, we manually varied hypotheses by randomly substituting a few very common words with synonymous words (e.g. text/document, supports/endorses). In total, this increased the number of unique entailment phrases to 2,834.
§.§ Splitting the Data
To split the data into training, validation, and test sets we proportionally sampled from each of the four tasks to construct testing and validation sets of roughly 15,000 documents each. The rest of the data were allocated to the training set. Because we wanted to evaluate model performance in a zero-shot context, a simple random sampling approach to splitting the data would not work. Instead, we randomly sampled from the set of unique hypotheses and allocated all documents with those hypotheses to the test set. This ensures that models did not see any of the test set hypotheses, or their synonymous AI generated variants, during training. The validation set consists of roughly 10,000 documents with hypotheses that are not in the training set, and 5,000 documents with hypotheses that are in the training set. This allows us to both estimate the model's zero-shot performance during testing, as well as look for evidence of over-fitting if performance diverges between the hypotheses seen and not seen during training.
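A rough sketch of this grouped split is given below. It assumes the corpus is held in a pandas DataFrame with a hypothesis_group column tying each original hypothesis to its AI-generated variants; the column name, test fraction, and sampling details are our own illustrative choices.

import numpy as np

def split_by_hypothesis(df, test_frac=0.075, seed=0):
    # Allocate every document whose hypothesis group is sampled to the test set,
    # so that no test hypothesis (or synonymous variant) is seen during training.
    rng = np.random.default_rng(seed)
    groups = df["hypothesis_group"].unique()
    rng.shuffle(groups)
    target, chosen, n = test_frac * len(df), [], 0
    for g in groups:
        if n >= target:
            break
        chosen.append(g)
        n += int((df["hypothesis_group"] == g).sum())
    mask = df["hypothesis_group"].isin(chosen)
    return df[~mask], df[mask]  # train, test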
§ TRAINING
The foundation models we used for training were a pair of DeBERTa V3 base and large models fine-tuned for general purpose NLI classification by <cit.>. We use these models for a number of reasons: First, the DeBERTa V3 architecture is the most performant on NLI tasks among transformer language models of this size <cit.>. Second, using models already trained for general purpose NLI classification allows us to more efficiently leverage transfer learning. Before we used these models for our application, they were trained on five large datasets for NLI, and 28 smaller text classification datasets. This means that we begin training with a model that already understands the NLI framework and general classification tasks, allowing it to more quickly adapt to the specific task of classifying political texts <cit.>.
We used the Transformers library <cit.> to train the model and monitored training progress with the Weights and Biases library <cit.>. After each training epoch (an entire pass through the training data), model performance was evaluated on the validation set and a checkpoint of the model was saved. We selected the best model from these checkpoints using both quantitative and qualitative approaches. The model's training loss, validation loss, Matthews Correlation Coefficient (MCC), F1, and accuracy were reported for each checkpoint. We then tested the best performing models according to these metrics by examining performance on the validation set for each of the four classification tasks, and across each of the datasets. This helped us to identify models with consistent performance across task and document type.
Finally, we qualitatively assessed the models by examining their behavior on individual documents. This included introducing minor edits or re-phrasings of the documents or hypotheses so that we could identify models with stable performance that were less sensitive to arbitrary changes to features like punctuation, capitalization, or synonymous word choice. Hyperparameters used to train the models are in Appendix <ref>.
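For readers who want a starting point, the sketch below shows the general shape of the fine-tuning loop with the Transformers Trainer. The checkpoint ID, toy training example, and hyperparameter values are placeholders for illustration; they are not the settings used to train the released models (those are listed in Appendix <ref>).

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/deberta-v3-base"  # placeholder; the released models start from NLI-tuned DeBERTa V3 checkpoints
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Premise and hypothesis are encoded together as a single sentence pair.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

toy_train = Dataset.from_dict({
    "premise": ["We must expand background checks for gun sales."],
    "hypothesis": ["The author of this text supports gun control."],
    "label": [0],  # 0 = entail, 1 = not entail (label convention assumed for this sketch)
}).map(tokenize, batched=True)

args = TrainingArguments(output_dir="debate_checkpoints", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5,
                         save_strategy="epoch", report_to="none")
Trainer(model=model, args=args, train_dataset=toy_train, tokenizer=tokenizer).train()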
§ ZERO-SHOT LEARNING PERFORMANCE
We benchmark our models on the PolNLI test set against four other models that represent a range of options for zero-shot classification. The first two models are the DeBERTa base and DeBERTa large general purpose NLI classifiers trained by <cit.>. These are currently the best NLI classifiers that are publicly available <cit.>. We also test the performance of Llama 3.1 8B, an open source generative LLM released by Meta <cit.>. This model is the smallest version of Llama 3.1 released and represents a generative LLM that can feasibly be run on a desktop computer with a high-end GPU, or a CPU with an integrated GPU like the Apple M series chips in modern MacBooks. Finally, we benchmark Claude 3.5 Sonnet <cit.>. This model is a state-of-the-art proprietary LLM. At the time of writing, it is widely considered to be among the best models available <cit.>. Notably, we do not include GPT-4o in our benchmark because it was used in the validation process from which the final labels were derived. We discourage benchmarking OpenAI models on the PolNLI dataset for this reason.
We use MCC as our primary performance metric due to its robustness relative to other metrics like F1 and accuracy on binary classification tasks <cit.>. MCC is a special case of the Pearson correlation coefficient and can be interpreted similarly. It ranges from -1 to 1 with higher values indicating greater performance.
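For concreteness, MCC and a bootstrapped standard error of the kind plotted in the figures can be computed as in the following sketch (an illustration of the calculation, not the exact evaluation script):

import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_with_bootstrap_se(y_true, y_pred, n_boot=1000, seed=0):
    # Point estimate of MCC plus a bootstrap standard error over resampled documents.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(y_true), size=(n_boot, len(y_true)))
    draws = [matthews_corrcoef(y_true[i], y_pred[i]) for i in idx]
    return matthews_corrcoef(y_true, y_pred), float(np.std(draws, ddof=1))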
§.§ PolNLI Test Set
Figure <ref> plots performance with bootstrapped standard errors across all four tasks for each model. We observe that the DEBATE models outperform the alternatives when all tasks and datasets are combined.
In figure <ref> we break out performance across our four tasks: Topic classification, stance detection, event extraction, and hate-speech identification. While all models perform well on topic classification, significant gaps emerge on the other tasks. The DEBATE models and Claude 3.5 Sonnet perform significantly better than the other models on stance detection. On event extraction tasks we see comparable performance between the DEBATE models and the two generative LLMs, Claude 3.5 and Llama 3.1. Perhaps the most notable gap in performance is on the hate-speech detection task – the DEBATE models perform significantly better than the other models. We think that this is likely because hate-speech is a highly subjective concept and our models are better tuned to the particular definitions used in the datasets we collected.
Finally, in figure <ref>, we plot the distribution of performance across all datasets in the test set. We again observe that the DEBATE models perform more consistently than the alternatives. For most models, the Polistance Quote Tweets dataset was the most challenging dataset, with the DeBERTa Large model having a negative correlation with the correct classification. This dataset measures stance detection and is particularly challenging for language models to parse because quote tweets often contain two opinions from two different people. The model has to parse both of these opinions and correctly attribute stances to the right authors. Even the state-of-the-art Claude 3.5 had an MCC of only 0.29 on the task. However, because the Political DEBATE models were explicitly trained to parse such documents, the base and large models were able to achieve MCCs of 0.62 and 0.88 respectively.
§ FEW-SHOT LEARNING PERFORMANCE
One advantage of the NLI classification framework is that models trained for NLI can more quickly adapt to other classification tasks <cit.>. Few-shot learning refers to the ability to learn a new classification task with only a few examples. Whereas a conventional supervised classifier usually requires hundreds or even thousands of labeled documents to train, models like GPT-4 and Claude 3.5 have demonstrated the ability to improve classification with only a handful of examples provided in the prompt.
Here, we demonstrate that domain-adapted NLI classifiers are efficient few-shot learners. With a random sample of 10–25 documents and no active learning scheme, these models can learn new classification tasks at levels comparable to, or better than, supervised classifiers and generative language models. We use two examples from other research projects to illustrate this capability. The first comes from the Mood of the Nation poll, a regular poll issued by the McCourtney Institute for Democracy which recently began using Llama 3.1 as part of its annotation process for open-text survey questions <cit.>. The second is from <cit.> who trained a transformer model on roughly 2,000 tweets to identify posts that minimize the threat of COVID-19.
For our testing procedure we first use both DEBATE models and a simple hypothesis for zero-shot classification on each document. We then take four simple random samples of 10, 25, 50, and 100 documents, train each of the two DEBATE models on these random samples, and then estimate performance of both models for the respective sample size on the rest of the documents. We repeat this 10 times for each training sample size and calculate a 95% confidence interval. Importantly, we did not search for the best performing hypothesis statements or model hyper-parameters. We simply used the default learning rate and then trained the model for 5 epochs. We felt this was important because a few-shot application assumes researchers do not have a large sample of labeled data to search for the best performing parameters. Rather, few-shot learning should work out-of-the-box to be useful. We also note that while training these language models on large data sets like PolNLI can take hours or days with a high-end GPU, training time in a few-shot context is reduced to seconds or minutes and can be done without high-end computing hardware.
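The protocol can be summarized in a short routine like the one below, where train_fn and predict_fn are user-supplied wrappers around the fine-tuning and inference steps (they are stand-ins for illustration, not functions from any particular library).

import numpy as np
from sklearn.metrics import matthews_corrcoef

def few_shot_curve(texts, labels, train_fn, predict_fn,
                   sizes=(10, 25, 50, 100), repeats=10, seed=0):
    # Mean MCC and a 95% confidence interval on the held-out documents for each sample size.
    rng, labels, out = np.random.default_rng(seed), np.asarray(labels), {}
    for n in sizes:
        scores = []
        for _ in range(repeats):
            idx = rng.choice(len(texts), size=n, replace=False)
            rest = np.setdiff1d(np.arange(len(texts)), idx)
            model = train_fn([texts[i] for i in idx], labels[idx])
            preds = predict_fn(model, [texts[i] for i in rest])
            scores.append(matthews_corrcoef(labels[rest], preds))
        out[n] = (float(np.mean(scores)),
                  1.96 * float(np.std(scores, ddof=1)) / np.sqrt(repeats))
    return out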
§.§ Mood of the Nation: Liberty and Rights
One of the questions on the Mood of the Nation poll is an open text form asking, “What does democracy mean to you?” The administrators of this poll had a team of research assistants manually label answers to this question that were related to “liberty and rights.” This category was broadly defined as responses that discuss freedoms and rights generally, or specific rights such as speech, religion, or the contents of the bill of rights. However, if the document was exclusively about voting rights, it was assigned to another category. If it mentioned voting rights in addition to other rights, it was still classified as “liberty and rights.” This classification task is somewhat difficult to reduce down to a simple hypothesis statement, and is thus a good candidate for few-shot training.
<cit.> wanted to automate the labeling of short answer responses and wanted to use open source models to do so. This was motivated both by open science standards, and privacy concerns over uploading responses to a proprietary API like GPT-4o. Llama 3.1 worked well, and classified documents with an MCC of 0.74 and accuracy of 88%. Discrepancies between the LLM and human coders were judged to be primarily reasonable disagreements.
In a zero-shot context, Llama 3.1 comfortably outperforms our models due to its ability to accept prompts with more detailed instructions. However, after only 10 training samples we see a large jump in performance with both the large and base DEBATE models, with Llama not significantly different from either. At 25 documents the large model significantly outperforms Llama 3.1, and at 50 documents the base model does as well.
§.§ COVID-19 Threat Minimization
<cit.> classified Twitter posts about COVID-19 based on whether or not they minimized the threat of COVID-19. Threat minimization was defined as anti-vaccination or anti-masking rhetoric, comparisons to the flu, statements against stay-at-home orders, claims that COVID-19 death counts were faked, or general rhetoric that the disease did not pose a significant health threat. This presents a particularly difficult classification challenge because threat minimization of COVID-19 is a somewhat abstract concept and can be expressed in many different ways across disparate topics. To address this, <cit.> trained an Electra transformer on 2,000 tweets with a Bayesian sweep of the hyper-parameter space. This process involved training 30 iterations of the model to find the best performing hyper-parameters. The final model achieved an MCC of 0.66.
For an NLI classifier, the above classification criteria are too numerous to elegantly fit into a single entailment hypothesis. While <cit.> demonstrated that such tasks can be done zero-shot by dividing it into smaller tasks (e.g. classify the documents once for anti-vaccination rhetoric, another time flu comparisons, and so forth), few-shot learning provides a more elegant solution. To test the models, we use the basic hypothesis “The author of this tweet does not believe COVID is dangerous.” Here, we observe that the base model largely fails at the task in a zero-shot context, and fails to match the accuracy of the supervised classifier with 100 training samples. The large model proves more capable of learning the task, matching the supervised classifier at 10 training samples (MCC = 0.67, accuracy = 87%), and exceeding it at only 25 training samples (MCC = 0.69, accuracy = 88%).
§ TIMING BENCHMARKS
To assess cost-effectiveness, we ran our two DEBATE models and Llama 3.1 across a diverse range of hardware.[We exclude the DeBERTa models used in the performance benchmarking above because they have the same architecture as the DEBATE models, and thus label documents at the same speed. We also do not time proprietary LLMs because their speed is highly determined by server traffic and we cannot test them in a controlled setting on common hardware. Classification speed is also of more concern when using local hardware because it occupies computing resources that might be needed at run time.] We did so with a random sample of 5,000 documents from the PolNLI test set and the simple hypothesis “This text is about politics.” We selected four different types of hardware. First, the NVIDIA GeForce RTX 3090 GPU provides high-performance, consumer-grade machine learning capabilities, making it a suitable choice for intensive computational tasks. Second, the NVIDIA Tesla T4 is a free GPU available through Google Colab. In contrast to the RTX 3090, the T4 is easy for researchers to access free of charge. Third, we used a MacBook Pro with the M3 Max chip. This is a common laptop with a built-in GPU that is integrated in the system-on-chip, as opposed to the RTX 3090 and Tesla T4 which are discrete GPUs. Finally, the AMD Ryzen 9 5900x CPU was utilized to evaluate performance on a general purpose CPU.[We do not test Llama 3.1 on the Tesla T4 GPU or the Ryzen 5900x CPU. The model is too large to run on the Tesla T4, and slow enough on a CPU that it's not recommended to do so in any context.]
We observe that the DEBATE models offer massive speed advantages over even small generative LLMs like Llama 3.1 8B. While discrete GPUs like the RTX 3090 do offer a large performance advantage, documents can still be classified at a relatively brisk pace with a laptop GPU like the M3, or a free cloud GPU like the T4.
§ LIMITATIONS AND MODEL USE
The model and dataset can be downloaded for free from the HuggingFace hub at https://huggingface.co./mlburnham or by searching for Pol_NLI. We recommend using Python's Transformers <cit.> and Datasets <cit.> libraries to use the models and data. In most cases, models and data can be deployed with only a few lines of code. We include boilerplate code for both zero-shot and few-shot applications on the github repository for this paper. While we offer brief advice on application here, for a more thorough exploration of best practices when using NLI classifiers we defer to <cit.>.
§.§ Which Model Should I Use?
We offer the following guidelines for selecting a model:
* Use the large model for zero-shot classification.
* Use the large model for most few-shot applications.
* Use the base model for simple few-shot tasks or supervised classification.
Both our extensive use of NLI models and previous research <cit.> indicate that larger models are much better at generalizing to unseen tasks. However, for tasks that are more explicitly within the training distribution such as hate-speech detection or approval of politicians, we expect comparable performance between the large and base models, with the base model offering a significant advantage in efficiency. In the few-shot context we expect similar performance between the large and base model given the results above. However, we also observed that the large model learns tasks more quickly. There is also no clear measurement of a task's simplicity, only qualitative judgements. Thus, we recommend using the large model whenever feasible.
§.§ When Should I Consider Few-shot or Supervised Training?
An NLI classifier will perform best in a zero-shot context under the following conditions:
* Labels can reasonably be derived from only the text of a document and do not require meta-knowledge about the document such as who wrote it, when, and under what circumstances.
* Labels are for concepts that are commonly understood (e.g. support/opposition to a person or policy) rather than bespoke concepts for a particular research project, or require specialized domain knowledge (e.g. documents about political rights except the right to vote, “threat minimization” of COVID-19).
* Documents are short, generally a sentence or paragraph, or can be segmented into short documents.
If any of these conditions are not met, you should consider few-shot or supervised training. Whether or not these conditions are met is a qualitative judgment that should be made based on familiarity with the data and task. As with any classification task, you should always validate your results with some manually labeled data.
§.§ How Should I Construct Hypotheses?
We recommend using short, simple hypotheses similar to the templates used in the training data. For example:
* “This text is about (topic or event)”
* “The author of this text supports (politician or policy position)”
* “This text is attacking (person or group)”
* “This document is hate-speech”
While researchers can certainly deviate from these templates, few-shot training may be appropriate for tasks that require long hypotheses with multiple conditions.
§.§ Other Limitations
Despite the impressive results demonstrated here, we want to emphasize that researchers should not expect the DEBATE models to outperform proprietary LLMs on all classification tasks. The massive size and training sets of proprietary models inevitably mean a larger variety of tasks are in their training distribution. Accordingly, we expect that LLMs will generalize more robustly in the zero-shot context for tasks that are less proximate to what is contained in the PolNLI data set. We recommend few-shot training for such tasks.
We also note that these models are trained exclusively for English documents, and it is unknown how the models would perform if re-trained for non-English documents.
§ CONCLUSION AND FUTURE WORK
The presented zero and few-shot entailment models, currently effective in stance, topic, hate-speech, and event classification, show immense potential for open, accessible, and reproducible text analysis in political science. Future research should explore expanding the capabilities of these models to new tasks (such as identifying entities and relationships) and new document sources. While we think that these models can be immensely valuable to researchers now, we hope that this is only the first step in developing efficient, open source models tailored for specific domains. We think that there is significant room to further expand the PolNLI data set and, as a result, train better models that more widely generalize across political communication. We believe that domain adapted language models can be a public good for the research community and hope that researchers studying politics will collaborate to share data and expand the training corpus for these models.
Further, we also believe that open source LLM-based chat bots could benefit greatly from our approach of domain adaptation and entailment classification. Thus, in future work we hope to adapt and expand the PolNLI data set to make it suitable for generative language models. By doing so, it is plausible that not only could generative models smaller than Llama 8B achieve state-of-the-art classification performance, but researchers would be able to use these models for tasks encoder models are not capable of, such as synthetic data generation or summarization.
|
http://arxiv.org/abs/2409.02263v1 | 20240903194755 | Axion-Photon Conversion Signals from Neutron Stars with Spacetime Curvature Accounted for in the Magnetosphere Model | ["Jesse Satherley", "Chris Gordon", "Chris Stevens"] | astro-ph.HE | ["astro-ph.HE", "astro-ph.CO", "hep-ph"] |
School of Physical and Chemical Sciences, University of Canterbury, Christchurch, New Zealand
School of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
§ ABSTRACT
Axions are a well-motivated dark matter candidate. They may be detectable from radio line emission from their resonant conversion in neutron star magnetospheres. While radio data collection for this signal has begun, further efforts are required to solidify the theoretical predictions for the resulting radio lines. Usually, the flat spacetime Goldreich-Julian model of the neutron star magnetosphere is used, while a Schwarzschild geometry is assumed for the ray tracing.
We assess the impact of incorporating the spacetime curvature into the magnetosphere model.
We examine a range of neutron star and axion masses and
find an average difference of 26% in radiated power compared to the standard Goldreich-Julian magnetosphere model for a 10 μeV axion mass and a 2.2 M_⊙ neutron star.
A much smaller difference is found for lower-mass neutron stars, as in that case axion-photon conversion occurs further from the Schwarzschild radius.
Axion-Photon Conversion Signals from Neutron Stars with Spacetime Curvature Accounted for in the Magnetosphere Model
J. Satherley, C. Gordon, C. Stevens
September 9, 2024
====================================================================================================================
§ INTRODUCTION
The axion was introduced
to solve the strong CP problem in quantum chromodynamics
<cit.>.
It was subsequently realized that it could account for a fraction, or all, of the
dark matter in the Universe <cit.>.
A recent and compelling proposal for indirectly searching for axions in astrophysical environments involves detecting radio photons produced by axion-photon mixing in neutron star magnetospheres. The strong magnetic fields and ambient plasma in these regions can resonantly amplify the mixing process, potentially revealing axions through distinctive radio signatures
<cit.>.
Two main schemes exist to determine the signal received from the axion-photon conversions: (1) the emitter-to-observer scheme samples the conversion surface
and propagates photons forward through the plasma <cit.>; and
(2) the observer-to-emitter scheme sources photons at a distant detector and propagates them backwards onto the conversion surface <cit.>.
Both methods have their own benefits and drawbacks.
The early simulations of the axion-photon conversion process assumed a flat spacetime, did not consider the refractive effects of the plasma, and employed a simple flat-spacetime Goldreich-Julian (GJ) model <cit.> for the NS magnetosphere, e.g., <cit.>.
More recently, work was done to improve this by including the dispersive effects of an isotropic (unmagnetized) plasma in Schwarzschild spacetime <cit.>. However, this still included the GJ model.
Other studies attempt to account for the random infall of axions onto the NS via a Monte Carlo style simulation <cit.>.
Although they include a magnetized plasma dispersion relationship for the photons, they only consider flat spacetime.
Most recently, work was done to include a magnetized plasma dispersion relationship for the photons in Schwarzschild spacetime <cit.>.
This article also had the inclusion of using a recently derived axion-photon conversion probability incorporating 3D effects <cit.>.
However, all these models rely on the inclined GJ model of NS magnetospheres.
The GJ magnetosphere model assumes a dipole magnetic field
in a flat spacetime.
In the force-free NS model case, GJ shows that an NS must have a dense magnetosphere containing charged particles.
They derive an analytical form for the charge density and magnetic field vector.
This proves convenient for studying phenomena around NSs due to the simplicity of the model, which allows quick and approximate results to be shown.
More recently, strides have been made to create increasingly accurate models for NS magnetospheres. Numerical simulations can account for the complete Maxwell equations in the 3+1 formalism of a stationary background metric <cit.>.
Extensions have been done to include more multipolar components of the magnetic fields in a vacuum with analytical equations <cit.>.
A series of papers by Gralla et al. provide an analytical solution in the near field of an inclined dipole magnetic field in curved spacetime <cit.>.
Their model[Hereafter referred to as the Gralla, Lupsasca, and Philippov (GLP) model.] provides corrections to the dipole magnetic field in GR, which alters the magnetic field strength and shape.
Their article also includes a charge distribution constituting a plasma in the magnetosphere that depends on GR effects.
While our principal methods follow the work carried out in Refs. <cit.>, we wish to extend their results by using the GLP model and seeing what effect that has on the predicted power received from axion-photon conversions near an NS.
We will do this by producing estimated signals in observer-to-emitter scheme simulations.
In Sec. <ref>, we provide a review of the GJ model, the GLP model, and a brief aside on plasma.
In Sec. <ref>, we discuss our implementation of the conversion of axions to photons in our simulations.
In Sec. <ref>, the dispersion relationships are provided, which are then used in the ray tracing equations reviewed in Sec. <ref>.
In Sec. <ref>, we explain our implementation of axion-to-photon conversion simulations around neutron stars using the observer-to-emitter scheme.
Lastly, in Sec. <ref>, we present the results of our simulations and the conclusions that can be made. The article's main text considers the case where the magnetic field and rotation axes are aligned. We consider the misaligned case in the Appendices.
Throughout, we denote 4-dimensional abstract (where no particular coordinate system is specified) and coordinate tensor indices with Latin letters starting from a,b,c,… and i,j,k,… respectively, both in the range 0–3 and 3-dimensional coordinate indices with Greek letters μ,ν,… in the range 1–3. The 3-vectors with index range 1,2,3 will also be denoted by a boldface typeset where appropriate. We use the metric signature (-,+,+,+), along with the choice of natural units c=ħ=ϵ_0=1.
§ NEUTRON STAR MODELS
As already highlighted in the introduction, the leading NS magnetosphere model that is used in axion-photon ray tracing simulations is the GJ model, which assumes an inclined dipole magnetic field in a flat spacetime.
However, we wish to explore the effect on these simulations of including an NS model derived in a curved spacetime.
This section will review and discuss the two models, focusing on their implementation into the simulations.
§.§ Goldreich-Julian Neutron Star
The Goldreich-Julian (GJ) model <cit.>
assumes that the NS is surrounded by a dense plasma of charged particles in the star's magnetosphere.
GJ derives forms for the magnetic and electric fields for an aligned rotator. The GJ model has since been extended to account for the misalignment angle <cit.>.
The number density of electrons and positrons in the magnetosphere of the GJ model is given by,
n_GJ(r) = (2Ω·B/|q_e|) / (1-Ω^2r^2sin^2θ),
where Ω=(2π/P_ns)z is the NSs angular velocity vector with P_ns being the period of rotation, z the unit vector in line with the rotation axis of the star, q_e the charge of an electron, r the distance from the center of the NS, and θ the polar angle from the rotation axis.
The magnetic field is B, which will be defined shortly for the GJ magnetic field (<ref>).
The regions that return a positive number density are dominated by positrons, whereas regions with a negative number density are dominated by electrons.
The GJ model described above assumes that the star's interior magnetic field takes the form of a dipole. From this assumption, expressions for the fields can be derived.
Since we wish to work with the near-zone fields, both of the electromagnetic fields simplify.
This is done by taking the leading order terms as r→0 as long as r>R_NS so that the fields are external to the star, where R_NS is the radius of the NS's surface.
In the near-zone, the magnetic fields to a good approximation are given by that of an idealized inclined rotator (e.g. <cit.>),
B_r = 2μ/r^3(cosχcosθ + sinχsinθcosλ),
B_θ = μ/r^3(cosχsinθ - sinχcosθcosλ),
B_ϕ = μ/r^3sinχsinλ,
with μ being the magnetic dipole moment of the star, χ the misalignment angle between the rotation axis and the magnetic field axis, and λ=ϕ-Ω t, with ϕ the azimuthal angle around the NS, where ϕ=0 is aligned with the x-axis and increases in the anticlockwise direction.
These coordinates are shown in Fig. <ref>.
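For later reference, the GJ field and number density above translate directly into code. The sketch below uses natural units and our own variable names (with lam = ϕ - Ω t and θ measured from the rotation axis); it is a minimal implementation of the expressions above rather than an excerpt from our simulation code.

import numpy as np

def gj_dipole(r, theta, lam, mu, chi):
    # Near-zone inclined dipole components (B_r, B_theta, B_phi).
    B_r = 2 * mu / r**3 * (np.cos(chi) * np.cos(theta) + np.sin(chi) * np.sin(theta) * np.cos(lam))
    B_t = mu / r**3 * (np.cos(chi) * np.sin(theta) - np.sin(chi) * np.cos(theta) * np.cos(lam))
    B_p = mu / r**3 * np.sin(chi) * np.sin(lam)
    return B_r, B_t, B_p

def n_gj(r, theta, lam, mu, chi, Omega, q_e=1.0):
    # GJ number density; Omega.B is evaluated with Omega along the rotation (z) axis,
    # i.e. Omega (B_r cos(theta) - B_theta sin(theta)).
    B_r, B_t, _ = gj_dipole(r, theta, lam, mu, chi)
    omega_dot_B = Omega * (B_r * np.cos(theta) - B_t * np.sin(theta))
    return 2.0 * omega_dot_B / abs(q_e) / (1.0 - Omega**2 * r**2 * np.sin(theta)**2)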
Because the GJ model is the simplest NS model which includes a plasma, with a well-defined charge density, it is the usual choice for axion-photon conversion papers, for example <cit.>.
However, we wish to study the effect of including a dipole magnetic field derived in curved spacetime as compared to the GJ model.
In the following section we detail necessary equations from the GLP model.
§.§ Gralla, Lupsasca and Philippov Magnetosphere
Recently, derivations have been published that extend the dipole magnetic field around an NS to a curved spacetime.
The GR pulsar model we explore in this work is taken from a series of papers by Gralla et al. <cit.>.
In their work, they use the Hartle-Thorne metric (<ref>) to describe the electromagnetic fields around a pulsar.
The Hartle-Thorne metric is used to describe a slowly rotating star in general relativity.
It is given as (e.g., <cit.>)
g_ij =
[ -f^2+Ω^2r^2sin^2θ        0         0     -Ω r^2sin^2θ ]
[          0             1/f^2       0           0       ]
[          0               0        r^2          0       ]
[   -Ω r^2sin^2θ           0         0      r^2sin^2θ    ] ,
where f = 1 - r_s/r is the Schwarzschild function
and we are using Schwarzschild coordinates x^i = {t,r,θ,ϕ}.
The rotation is contained with the terms including Ω, which is the angular velocity of the star.
When the rotation is slow enough, frame-dragging terms that contain Ω become insignificant.
The Hartle-Thorne metric then simplifies to the Schwarzschild metric.
In our work, we continue using the Schwarzschild metric for both the magnetic field and the ray tracing[We checked that the Hartle-Thorne metric does not make a significant difference to our results. However, it may matter for faster rotating NSs.].
Gralla et al. provide solutions for the near fields in the situation of a force-free axisymmetric field, after which they extend to an inclined pulsar where the misalignment angle is non-zero.
In the following, we detail their work and provide insight into how to apply their results to the problem of axion-photon conversion near NSs.
We begin by describing a set of electromagnetic relationships.
We can define the magnetic field for an arbitrary observer and metric by using (e.g. <cit.>),
B^d = 1/2ϵ^abcdF_abU_c
where F_ab is the electromagnetic tensor, U_c is the 4-velocity of an observer and ϵ^abcd the covariant Levi-Civita tensor to account for the metric, which is related to the Levi-Civita symbol multiplied by the determinant of the metric tensor such that (e.g. <cit.>),
ϵ_abcd = (√(|g|))ε_abcd,
ϵ^abcd = sign(g)/(√(|g|))ε^abcd,
where ε_abcd is the fourth-rank Levi-Civita symbol.
The last relationship we require is the current density 4-vector, as it contains the charge density. It is related to the 3-space quantities by the relationship,
J^a=(ρ, J),
where ρ is the charge density and J=ρu is the current density with velocity u^μ=dx^μ/dτ, where τ is the proper time. The current density 4-vector is related to the electromagnetic tensor by,
J^a = ∇_b F^ab,
where ∇_b is the covariant derivative.
§.§.§ Aligned Rotator
The first paper in the series by Gralla et al. <cit.> begins by deriving an analytical method for studying the force-free magnetosphere of a slowly rotating aligned rotator (χ = 0), including the effects of GR.
They find that a ∼60% correction to the dipole component of the surface magnetic field is introduced by accounting for GR.
In deriving the equations for the magnetic field, they assume that the electromagnetic field is force-free (F_abJ^b=0).
This is the same assumption as the GJ model.
The electromagnetic tensor can then be given by potentials ψ_i=ψ_i(r, θ, ϕ - Ω t), such that,
F_ij = ∂_iψ_1∂_jψ_2 - ∂_jψ_1∂_iψ_2.
As the field is axisymmetric, the time dependence does not alter the field configuration.
However, we include it for completeness as it is required when considering the inclined magnetic field case.
In the case of an aligned rotator, the magnetic flux function for a dipole in the near-field region is given as,
ψ_1, near(r, θ) = μ R^>_1(r)sin^2θ,
ψ_2(ϕ-Ω t) = ϕ-Ω t,
where μ=B_nsR_ ns^3/(2Δ_1) is the dipole moment with
B_ns being the surface magnetic field strength at the pole,[This is related to the magnetic moment of (<ref>) via B_ns=μ.] R^>_1(r) is the first radial harmonic, and Δ_ℓ=R_ ns^ℓ R^>_ℓ(R_ ns), all with R_ ns the
radius of the star.
The function Δ_ℓ is dimensionless and depends only on the compactness of the star.
It provides the GR correction to the dipole moment at the surface.
The first radial harmonic is (e.g. <cit.>),
R^>_1(r) = -(3/2) r [(3 - 4f + f^2 + 4log f)/(1-f)^3],
recalling that
f=1-r_s/r is the Schwarzschild function.
For this model to be beneficial, it must have a magnetosphere containing charged particles.
The charge density around the star can be described by investigating the charge-current 4-vector.
Upon applying the covariant derivative to the electromagnetic tensor with the Hartle Thorne metric (<ref>), to leading order derivative terms, the time component of J^a is[We have added back in factors of G compared to GLP's derivation due to our choice of units.],
J^t = [2(Ω - Ω_Z)/(r(r - 2 G M))][(r - 3 G M)∂_rψ_1 + cotθ ∂_θψ_1] (here Ω_Z denotes the local frame-dragging angular velocity),
which with,
ρ_e=U_aJ^a=J^t√(1-r_s/r)=J^t̂,
gives the charge density around the NS for the aligned rotator case.
This relationship is used to find the charge density for the plasma frequency, which will be discussed shortly.
We can then compare the results of the fields and charge density described here to the GJ model (<ref>) and (<ref>) to see the effect these corrections have on the axion-photon conversion signal.
In the simulations presented in this paper, we take the magnetic field and rotation axis to be aligned so that we may compare with Ref. <cit.> (henceforth referred to as MW23).
So, for the results presented in this paper, it is only necessary to understand the GLP-aligned rotator reviewed in the main body.
However, our code implements the inclined rotator case of the GLP model.
In Appendix <ref>, we detail the equations and relationships necessary to include the inclined GLP model.
§.§ Plasma
A fundamental parameter that characterizes a plasma is its plasma frequency. In the absence of a magnetic field, the plasma frequency is the oscillation frequency for the charge distribution about its equilibrium and is given as (e.g. <cit.>)
ω_p=√(∑_i4πα n_i/m_i),
where m_i and n_i is the mass and number density of species i, and α is the fine-structure constant.
It is assumed that the plasma consists only of electrons and positrons.[If ions were considered for the positively charged regions instead, the plasma frequency would decrease due to the ion's greater mass.]
This type of plasma is used, for example, by Refs. <cit.>.
The number density of the charged particles and the charge density are just related via,
n_e = |ρ_e|/q_e,
where q_e is the charge of an electron.
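In code, the electron-positron plasma frequency is then a one-line function; the sketch below works in natural units (number densities in eV^3, masses in eV), and keeping the normalization consistent with the charge-density model is left to the caller.

import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant
M_E = 5.11e5           # electron mass in eV

def plasma_frequency(n_e, m_e=M_E):
    # omega_p = sqrt(4 pi alpha n_e / m_e) for a plasma of electrons and positrons.
    return np.sqrt(4.0 * np.pi * ALPHA * n_e / m_e)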
In previous studies three cases are considered for the plasma:
* The plasma consists of no charged particles in the magnetosphere, considering only a vacuum around the star.
* An isotropic plasma, where charged particles are present, but the magnetic field of the NS has no effect on the medium. In this case, the plasma frequency only depends on the distance from the star.
* An anisotropic plasma, where the magnetic field induces new effects on the medium.
The choice of plasma is important for the ray tracing of photons as it alters their dispersion.
§ AXION-PHOTON CONVERSION
As previously indicated, axions undergo resonant conversion to photons in the presence of plasma (e.g. <cit.>).
The benefit of this is an enhancement to the photon signal from axion-photon conversions, potentially leading to detectable signals from Earth.
The resonance occurs due to plasma effects generating an effective mass for the photon, allowing the axion and photon dispersion relations to intersect.
This resonant condition is maximized when the plasma frequency is close to or equal to the mass of the axion[In SI units, this condition is ħω_p≈ m_a c^2.]
ω_p ≈ m_a .
MW23 use this condition for their isotropic plasma cases.
For an anisotropic plasma, they advocate using full kinematic matching of axion to photon conversion, which is given by k_a = k_γ.
§.§ Conversion Surface
Using the resonance condition in (<ref>),
we can define a surface in three-dimensional space surrounding a magnetic field source with a plasma.
This surface is referred to as the conversion surface.
The conversion surface is the region that will have the most predominant axion-photon conversion flux.
When considering the GJ dipole magnetic field, where B∝1/r^3, the charge number density (<ref>) can have its r dependency explicitly shown as,
n(r, θ, ϕ, t) ≈1/r^3n(r=1 eV^-1, θ, ϕ, t),
where we have ignored the rightmost fraction in (<ref>), as it is near unity for small radii.
The expression above can then be combined with (<ref>) and (<ref>)
to give the radius of the conversion surface for a given θ, ϕ and t:
r = [ω_p^2(r=1 eV^-1, θ, ϕ, t)/m_a^2]^(1/3).
For the GLP model, where the dependency on r is more complicated, root-solving methods are required to find the conversion surface.
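As an illustration of this root solve, the sketch below finds the conversion radius along a fixed (θ, ϕ) direction for any user-supplied plasma-frequency model; the power-law profile in the example call is arbitrary and only demonstrates the interface.

import numpy as np
from scipy.optimize import brentq

def conversion_radius(omega_p, theta, phi, m_a, r_ns, r_max):
    # Radius where omega_p = m_a along this direction, or None if there is no crossing.
    f = lambda r: omega_p(r, theta, phi) - m_a
    if f(r_ns) * f(r_max) > 0:   # no sign change -> no resonant crossing on [r_ns, r_max]
        return None
    return brentq(f, r_ns, r_max)

# Toy profile omega_p ∝ r^(-3/2), arbitrary units.
wp = lambda r, th, ph: 1.0e-5 * r**-1.5
print(conversion_radius(wp, 1.2, 0.0, m_a=1.0e-6, r_ns=1.0, r_max=1.0e3))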
§.§ Probability of Conversion
As the process of an axion converting to a photon is based on classical field theory and is mediated by the interaction term in the axion's Lagrangian,
an associated probability of conversion can be found (e.g. <cit.>).
This probability will affect the total radiated power predicted in simulations.
Hence, the choice of conversion probability method that is used has a significant impact on the results.
For this reason, we chose to use the conversion probability for an isotropic plasma that was given in Eq. (69) of MW23, so that we may compare results; it also conveniently incorporates a curved spacetime already.
This conversion probability is expressed as,
P_aγγ = π g^2_aγγ|B|^2sin^2θ̃E_γ/|k^i∂_i(ω_p^2)|,
where g_aγγ is the axion-photon coupling constant, |B|^2=B_μ B^μ, θ̃ is the angle between the axion's momentum and the magnetic field, and E_γ is the energy of the photon at the point of conversion.
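A direct transcription of this expression is sketched below, with the gradient of ω_p^2 obtained by automatic differentiation (the same approach later used for the plasma gradients in our numerical implementation); the contraction with k^i is written as a plain component dot product for simplicity, and every number in the example call is purely illustrative.

import autograd.numpy as anp
from autograd import grad

def conversion_probability(g_agg, B_norm, sin_theta, E_gamma, k_vec, omega_p_sq, x):
    # P = pi g^2 |B|^2 sin^2(theta) E_gamma / |k^i d_i(omega_p^2)| at the conversion point x.
    grad_wp2 = grad(omega_p_sq)(x)
    denom = anp.abs(anp.dot(k_vec, grad_wp2))
    return anp.pi * g_agg**2 * B_norm**2 * sin_theta**2 * E_gamma / denom

# Toy power-law profile for omega_p^2.
wp2 = lambda x: 1.0e-12 / anp.linalg.norm(x)**3
print(conversion_probability(1.0e-12, 1.0e2, 1.0, 1.0e-5,
                             anp.array([0.0, -1.0e-5, 0.0]), wp2, anp.array([100.0, 0.0, 0.0])))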
§ CURVED SPACETIME DISPERSION RELATIONS
The warping of spacetime from the mass of an NS can be extreme near the star, significantly affecting the path that particles and light would follow compared to a flat spacetime.
By accounting for
these
influences on geodesics around the star, the results of axion-photon conversion simulations will be changed (see, for example, Refs. <cit.>).
For the curvature of spacetime to be accounted for in the dispersion relationships above, the metric must be included in some form.
A simple method of converting the flat spacetime relationships to a curved spacetime is by simply taking the 3+1 approach where squared parameters can be converted to Einstein sums, as done in Ref. <cit.>.
We can, however, extend this further by introducing covariant relationships for the photon 4-vector components.
In the following, we discuss how gravitational effects are accounted for in the photon dispersion relation inside different types of plasma.
Firstly, the refractive index of the medium takes on a covariant form and becomes <cit.>,
n^2= 1 + k_ak^a/(k_bU^b)^2,
where k^a is the photons 4-momentum[Remember that in natural units, the 4-momentum and 4-wavevector are equivalent. In SI, p^a=ħ k^a.] and U^a is a global unit timelike vector such that U^aU_a=-1, recalling that U^a is the 4-velocity of an observer.
When the Minkowski metric is used, (<ref>) reduces to the flat-space case of n=ω/k where ω is the photon frequency and k is the photon wavenumber.
§.§ Vacuum
For the simple case when no plasma is present, only gravity will affect the path the photon travels.
The dispersion relationship in a vacuum is just,
D(k)=k_ak^a=0,
where the sum over the indices contains the metric, which will account for the curvature of space.
With the Minkowski metric, this simply becomes k^2-ω^2.
In this case, photons travel along geodesics around the star.
§.§ Isotropic Plasma
With the inclusion of plasma, there is now a scalar function ω_p(x^a) present.
It can essentially be considered a forcing term in the dispersion relationship, which alters the trajectory of the photons from the vacuum case.
The function ω_p is a scalar independent of the metric.
Hence, it remains the same as the flat-space case, as it is unaffected by the inclusion of GR.
In an isotropic plasma, the dispersion relationship is,
D(k)=k_ak^a+ω_p^2=0.
For the derivation of this relationship see Eqn. (10) in Ref. <cit.>, which also matches with Eqn. (17) of Ref. <cit.>.
§.§ Anisotropic Plasma
The covariant form of the anisotropic dispersion relationship requires some covariant plasma expressions presented in Ref. <cit.>[They use a metric with the opposite signature (+, -, -, -). We modify their relationships to be compatible with our choice of metric signature, which is (-, +, +, +). This leads to a difference in signs on some terms.].
As in previous sections, let U^a be a global unit timelike vector such that U^aU_a=-1.
Then, the photon's effective energy measured by an observer with 4-velocity U^a is W=-k_aU^a.
Also, define the unit vector in the direction of the magnetic field b^a = B^a/√(B^cB_c) with b^ab_a=1 (where in Schwarzschild coordinates B^0=B^t=0).
Then, the wave vector component parallel to the magnetic field can be represented by the contraction K_∥=k_ab^a.
In the non-relativistic plasma limit of Eqn. (12) of Ref. <cit.>, we have that the GR anisotropic dispersion relation is,
D(k)=k_ak^a+ω_p^2(1 - K_∥^2/W^2)=0.
When the appropriate Minkowski limit is taken, where K_∥=kcosθ̃ and W=ω, the above equation simplifies to the flat-space case.
§ RAY TRACING
When a photon propagates through a plasma, it may undergo refraction and reflection due to the plasma's varying refractive index. To trace the path of the photons through the plasma, ray tracing is used.
At a simple level, this involves a system of coupled ordinary differential equations (ODEs), which are constructed using one of the plasma dispersion relationships in Sec. <ref>.
The ODEs `tell' the photon which direction to travel, how its momentum should change direction, and how its energy should evolve.
The ODEs can then be solved analytically or integrated using a numerical solver, depending on the complexity of the dispersion relation that forms the ODEs.
The solutions allow the photon to be followed through the magnetosphere of the NS and can be used to reproduce the expected photon signal from axion-photon conversions.
When the dispersion relation contains a plasma frequency term, the ray path is most affected when the photon's frequency is close to the plasma frequency.
§.§ Ray Tracing in General Relativity
Paths of geodesics in flat space-time are simply straight lines. In a curved spacetime additional terms involving the Christoffel symbols appear in the geodesic equation, altering the paths <cit.>. In a curved spacetime with the presence of a plasma, the paths will change further <cit.>. Below we give the equations governing null geodesics in a curved spacetime with plasma.
§.§.§ Geometric Optics
The geometric optics limit is formed by taking the WKB approximation with the eikonal form <cit.>. The general relativistic equations for ray propagation can then be found by first representing a wave packet in the form,
A_b=∫A_b(k)e^ik_ax^a√(|g|)d^4k
having used the eikonal form with a Fourier transform. In the exponential, k_a should satisfy D(k,x)=0, and in particular, there should be a sharp maximum when k=k_0. So, we can make the substitution k=q+k_0. Then, along the ray x^a=x^a(λ), where λ is an affine parameter along the light path, the phase q_ax^a should be stationary. So, we will have q_a(dx^a/dλ)=0. One can then expand the dispersion relation about k_0 to obtain q_a(∂ D/∂ k_a)=0. This yields,
dx^a/dλ = ∂ D/∂ k_a.
Then using dD/dλ=0,
∂ D/∂ k_a dk_a/dλ + ∂ D/∂ x^a dx^a/dλ = 0.
Hence, we will have a system of ODEs that describes the ray path. They are expressed as,
dx^a/dλ = ∂ D/∂ k_a,
dk_a/dλ = -∂ D/∂ x^a.
The solution to the first equation (<ref>) prescribes the spacetime position, whilst the second equation (<ref>) controls the energy and direction of propagation.
The convenience of this form is that once we have defined the dispersion relation in covariant form, we may then directly use it to find the ray paths. The only caveat is that this method requires the dispersion relation to have the 4-wavevector expressed as a covariant vector, due to the derivative present in (<ref>), while the position 4-vector remains in contravariant form.
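To make the procedure concrete, the following sketch integrates these ray equations for the isotropic dispersion relation (<ref>) in Schwarzschild spacetime, with the derivatives supplied by automatic differentiation and the integration performed by solve_ivp, mirroring the tooling described in the next section. The Schwarzschild radius, plasma profile, photon energy, and initial data are illustrative placeholders, not values from our production runs.

import autograd.numpy as anp
from autograd import grad
from scipy.integrate import solve_ivp

R_S = 4.0e3  # Schwarzschild radius in the chosen length unit (placeholder value)

def omega_p(x):
    # Toy isotropic plasma-frequency profile ~ r^(-3/2); replace with the GJ or GLP model.
    return 1.0e-6 * (x[1] / R_S) ** -1.5

def dispersion(x, k):
    # D = g^{ab} k_a k_b + omega_p^2 with the (diagonal) inverse Schwarzschild metric,
    # x = (t, r, theta, phi) and k the covariant wave-vector components.
    r, th = x[1], x[2]
    f = 1.0 - R_S / r
    gkk = (-k[0]**2 / f + f * k[1]**2 + k[2]**2 / r**2
           + k[3]**2 / (r**2 * anp.sin(th)**2))
    return gkk + omega_p(x)**2

dD_dx, dD_dk = grad(dispersion, 0), grad(dispersion, 1)

def hamilton(lam, y):
    x, k = y[:4], y[4:]
    return anp.concatenate([dD_dk(x, k), -dD_dx(x, k)])  # dx/dlam = dD/dk, dk/dlam = -dD/dx

omega = 1.0e-5                                           # photon energy (illustrative)
x0 = anp.array([0.0, 50.0 * R_S, anp.pi / 3, 0.0])
f0 = 1.0 - R_S / x0[1]
k_r0 = -anp.sqrt((omega**2 / f0 - omega_p(x0)**2) / f0)  # fix k_r so that D(x0, k0) = 0
k0 = anp.array([-omega, k_r0, 0.0, 0.0])
sol = solve_ivp(hamilton, (0.0, 1.0e7), anp.concatenate([x0, k0]), rtol=1e-8, atol=1e-12)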
§ METHODS
We now describe our procedure to produce the expected axion-photon conversion signals around an NS.
We begin by describing the required numerical methods, allowing us to compare the effects of changing the metric, dispersion relation, magnetic field, and conversion probability on the signal received from axion to photon conversion around NSs.
§.§ Numerical Methods
Numerical methods are used to run simulations of the photons propagating through the plasma surrounding the star, source the axion-photon conversion points, and produce the estimated signal/flux from these phenomena.
Our choice of programming language is Python[<https://www.python.org/>] due to its relative simplicity and an exhaustive library of modules capable of running the simulations carried out in this work.
Due to the complexity of the coupled ODE systems present from the dispersion relations and the ray tracing equations, we use numerical solvers to compute the paths of the photons through the plasma.
We employed the Scipy library's solve_ivp function to numerically trace the photons' position and momentum through the dispersive plasma.
At each step, the magnetic field, plasma frequency, and angle between the magnetic field and photon momenta are computed.
To compute the derivatives of functions, the Python library Autograd[<https://github.com/HIPS/autograd>] was employed. Autograd is capable of automatically differentiating native Python and Numpy code.
It works by using Automatic Differentiation to compute an approximation of the derivative of a function, with machine precision accuracy, without computing a symbolic expression of the derivative.
Hence, only the function must be known, but its related derivatives are unnecessary.
If the plasma function is to be modified, rather than finding its potentially difficult derivative, only the function itself needs to be known for Autograd to return the derivative/gradient of the plasma function in each coordinate direction.
We implement the module Multiprocessing[<https://docs.python.org/3/library/multiprocessing.html>] to leverage the multiple threads available to us during a simulation by propagating multiple photons through the plasma simultaneously.
This allowed us to divide the time for a simulation to run by approximately the number of available processing threads.
The code was developed so that changes to the dispersion relation, magnetic field, or probability calculations can be easily made by modifying the defining Python functions.
This also allows for adding more complex magnetic fields, dispersion relations, or future probability calculations.
§.§ Observer to Emitter Scheme
In this scheme, photons are sourced from an image plane and back-propagated to the axion-photon conversion surface <cit.>.
This concept is based on the physical process of photons propagating from the conversion surface and falling incident on a detector found on Earth.
This relies on the fact that, asymptotically far from the star, once the photons are no longer refracted by the plasma, their trajectories are perfectly straight and will be orthogonal to the detector on incidence and parallel to neighboring ray paths.
Hence, the propagation of photons can be reversed, beginning orthogonal to and at the detector, and then integrating backwards in time to find the location of the photons from the source.
This means that only photons converging on the detector plane are considered in this simulation scheme.
Because of this, the observer-to-emitter scheme provides a less computationally heavy workload, as the number of photons traced is significantly smaller than that of the emitter-to-observer scheme.
Another benefit of this method is that it produces the image a detector would `see' from the source.
However, the viewing angle from the NS needs to be known as it can greatly affect the results.
The following method is adapted from Refs. <cit.>.
Due to the large separation between a detector and the NS, the rays are assumed to be parallel to each other and perpendicular to the detector, and they are therefore sourced in this way at the detector. The validity of this assumption relies on the rays ceasing to refract due to the plasma once ω≫ω_p, which occurs within the light cylinder of the NS.
Once refraction stops, the rays take on the free-space dispersion relation (<ref>), the solution of which is a ray traveling along geodesics.
Hence, for a ray to reach the detector at some asymptotic distance, it must travel straight at the detector once the plasma's influence stops. Thus, the ray will be perpendicular to the detector plane and parallel to all other rays reaching the detector.
To run the Observer to Emitter Scheme, photons must first be sourced at an initial position.
This is done by placing a detector at some substantial separation from the source.
Photons are then integrated backward in time (back-propagated) from the detector until the photons reach the source of axion-photon conversion.
More precisely, the photons are given initial positions along a detector plane (or image plane), which is divided into pixels of side length Δ b.
A single photon is sourced at the center of each pixel before being propagated through the plasma.
They are initiated to be perpendicular to the detector plane and parallel to each other.
The propagation is described using one of the ray tracing methods with one of the dispersion relations.
The ray path is then solved by integrating the ray tracing equations over time, or their affine parameter, to track the refraction of the rays through the plasma. Integration is finished when either: (1) the photon intercepts the axion-photon conversion surface; or (2) the photon misses the surface and reaches the end of the integration interval.
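Continuing the flat-space toy sketch introduced above (reusing rhs, q0 and omega_p2), crossings of the resonance surface can be located with a solve_ivp event function; the threshold m_a2 below is an arbitrary placeholder, not a physical axion mass used in our results.

```python
def resonance(_, q, m_a2=0.5):
    # changes sign when the ray crosses the toy surface omega_p^2 = m_a^2
    return omega_p2(q[1], q[2], q[3]) - m_a2

# record every crossing; set resonance.terminal = True to stop at the first one
sol = solve_ivp(rhs, (0.0, 300.0), q0, events=resonance, rtol=1e-8, atol=1e-10)
crossings = sol.y_events[0]    # photon phase-space values at each interception
```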
From here, the value of the radiated power received by each pixel can be calculated with the photon values at the conversion surface. The total differential power (dP/dΩ dω) received by the detector is given by Eqn. (11) of Ref. <cit.>,
dP/dΩ dω = ∑_i,jΔ b^2 ω^3 f^i,j_γ,
where the sum runs over the indices i, j of the pixels on the detector plane, Δ b is the pixel side length (so that Δ b^2 is the pixel's area), ω is the frequency/energy of the photon, and f^i,j_γ is the phase space factor of the photons.
The photon phase space factor can be related to the axion phase space factor via the axion to photon conversion probability, such that f_γ = P_aγγf_a.
We wish to evaluate the quantity (<ref>) at the conversion surface for each photon that is traced to it. To do this we need the energy of the photon, the conversion probability and the axion phase space, all at the point of axion to photon conversion. The energy is found via ray tracing, the conversion probability is found using (<ref>), and the phase space factor of the axions is given by the expression (e.g. see Eqns. (58) and (59) of MW23 <cit.>),
f_a(x, k) = v_a n_a,∞ [k_c(|x|)/k_0] δ(ω-ω_c)/(4π |k|^2),
where v_a is the velocity of the axions at that point, n_a,∞=ρ_a,∞/m_a is the asymptotic number density of axions, k_c(|x|)^2 = k_0^2 + 2GM_nsm_a^2/|x| the square of the in-fall momentum of the axions, k_0 = m_a v_0 is the momentum dispersion, and ω_c is the energy of the photon at the conversion surface.
In natural units, |k|^2=k_μ k^μ is the three-momentum magnitude of the axion/photon and is found via ray tracing.
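As a minimal illustration of how these pieces combine, the helper functions below evaluate the in-fall momentum k_c and the per-pixel summand of (<ref>); the names infall_momentum, axion_phase_space_prefactor and pixel_power are hypothetical, the delta function in energy (which fixes ω = ω_c) is left outside this sketch, and all numerical inputs are placeholders.

```python
import numpy as np

def infall_momentum(r, m_a, v0, GM_ns):
    # k_c(|x|)^2 = k_0^2 + 2 G M_ns m_a^2 / |x|, with k_0 = m_a v_0
    k0 = m_a * v0
    return np.sqrt(k0**2 + 2.0 * GM_ns * m_a**2 / r)

def axion_phase_space_prefactor(r, k_mag, m_a, v0, rho_inf, GM_ns):
    # v_a * n_{a,inf} * (k_c/k_0) / (4 pi |k|^2); the delta(omega - omega_c)
    # factor is assumed to be handled analytically outside this sketch
    n_inf = rho_inf / m_a
    k0 = m_a * v0
    kc = infall_momentum(r, m_a, v0, GM_ns)
    v_a = kc / np.sqrt(m_a**2 + kc**2)     # v = k/E for the in-falling axion
    return v_a * n_inf * (kc / k0) / (4.0 * np.pi * k_mag**2)

def pixel_power(db, omega, prob_conv, f_a):
    # one term of dP/(dOmega domega) = sum_ij db^2 omega^3 f_gamma,
    # with f_gamma = P_agammagamma * f_a
    return db**2 * omega**3 * prob_conv * f_a
```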
§.§.§ Implementation
In the following, we describe our implementation of the observer-to-emitter method in GR using the Schwarzschild metric.
We will use the Schwarzschild coordinate system where x^i=(t, r, θ, ϕ).
Hence, the 4-momentum will take on the form k^i=(ω, k^r, k^θ, k^ϕ).
Throughout this description, we will also refer to the relevant code variables in our algorithm by name.
We do this to clarify what each variable controls and what values have been chosen in this paper.
The detector's observing angle (θ, ϕ) must be chosen for each simulation.
This gives the center line of the detector relative to z of the NS (recall that z is in line with the rotation axis of the star).
These angles are related to the code variables Obs_theta and Obs_phi.
The detector plane distance from the NS also needs to be assigned to Obs_r0. A balance must be struck with this initial distance.
It should be far enough away to approximate what an extremely distant observer would see but not so far that computation time is significantly increased.
The dispersion relation must also be picked via dispersion_relation, where the method is chosen with the strings 'vacuum', 'isotropic', or 'anisotropic'.
Lastly, a metric must be chosen using metric_choice with the options 'flat', 'schwarzschild', or 'hartle_thorne'.
Below, we outline the procedure used to produce an axion-photon simulation.
We used the GR dispersion relations in Sec. <ref> and geometric optics in Subsec. <ref>.
A detailed explanation of the observer-to-emitter algorithm used in this article is as follows:
* The center of a detector plane is initialized by setting a distance Obs_r0 from the NS, and viewing angles Obs_theta and Obs_phi.
Pixels are then evenly spaced out on a rectangular detector plane, up to a maximum size of max_x and max_y, with the number of pixels along each row and column given by total_resolution.
The center of the detector is aligned with the center of the NS.
The distance between adjacent pixels is given by Δ b = max_x/total_resolution.
* A single photon is then sourced at the center of each pixel and given initial data:
* The initial position of each photon is set as the center of each pixel and given a time t_0, the starting time for the simulation (typically t_0=0).
Hence, each photon has an initial 4-position given by,
x^i=(t_0, r_pixel, θ_pixel, ϕ_pixel),
where pixel refers to the pixel at which the photon is sourced.
* The initial 4-momentum of the photon is found using the photon energy and the refractive index.
Energy conservation with the axion gives the energy of each photon during the conversion.
All the axion's energy is converted to the photon's energy.
One may then use E^2=m^2+p^2,
ω=√(m_a^2[1+v_a^2(r_pixel)]).
where ω is the photon's energy, v_a^2=v^2_min+v_∞^2 with v^2_min=r_s/r and v_∞ the dark matter velocity at asymptotic infinity.
The equation (<ref>) must be translated via the effective energy equation ω=-k_iU^i to give the covector form k_i.
In the case that the plasma is static, such that U^t=√(-g^tt) and U^μ=0, we have that
k_t=-ω/√(-g^tt),
where ω is given by (<ref>).
* For the spatial components k_μ, we take the refractive index of the medium at distance Obs_r0 to be n≃1.
Hence, we will have the covariant relationship from the refractive index (<ref>) as -k_tk^t=k_μ k^μ.
The centermost photon of the detector is taken to travel perfectly radially from/towards the star at the detector, such that k^α=(k^r, 0, 0).
So we will have that,
k^r = √(-g_ttk^tk^t/g_rr),
for this photon.
Every other photon is assigned the same value to its spatial components but rotated such that the momentum is parallel to the centermost photon's momentum[If this rotation is not done, and instead the momentum of each photon is set identical to (<ref>), the photons will all have velocities towards the center of the star rather than generating rays that are initially parallel.].
* Once the photons' initial conditions are defined, they can then be back-propagated through the medium using the ray tracing equations (<ref>) and (<ref>) along with one of the dispersion relations (<ref>), (<ref>) or (<ref>).
This is done by using a numerical ODE solver for a finite time.
An example of the paths rays will travel along in this during this algorithm is shown in Fig. <ref>.
The figure also shows the origin from which the photons are sourced on the detector plane with red dots.
* At the beginning of the simulation, a coarse pixel search over the entire detector plane is done.
This search uses the detector resolution[Resolution here means the number of pixels along the side length of the detector.] defined by coarse_resolution.
The intention of this is to find the regions of the image plane that likely receive photons from the conversion surface.
This decreases the total number of photons being back-traced in the next step by removing detector regions that will not have photons intercepting the conversion surface.
Hence, this procedure decreases the run-time and increases the efficiency of the simulation by removing unnecessary ray tracing of photons.
* After the coarse search has identified fine pixels that may have photons that will back-trace onto the conversion surface, a fine search is started over the smaller pixels.
This is done by selecting each coarse pixel identified before and completing a higher-resolution search of that coarse pixel.
The resolution is defined by fine_resolution.
Each back-traced photon from a fine pixel that intercepts the conversion surface is recorded, while all photons that never reach the conversion surface are ignored.
* Numerical integration of a photon's path ceases when either the photon intercepts the NS surface or the end of the integration interval is reached.
The photon continues after intercepting the conversion surface, and the solver records the photon position and momentum 4-vectors to find the power from each intersection of the photon with the conversion surface.
* Each photon from the fine search that intercepts the conversion surface can then have its probability of conversion calculated using the intercepts obtained from the ODE solver.
Then the photon values and the conversion probability can be used to find the radiated power received by each fine pixel using the terms inside the summation of (<ref>).
Once the algorithm is completed, the results can be plotted to form the image on the detector.
Each radiated power found in the last step is assigned to the index of the fine pixel from which the photon originated.
These indices can be used to reproduce the image on the detector plane using appropriate plotting software.
Otherwise, the powers for a particular viewing angle can be summed together using (<ref>) to find the total estimated power received by a distant observer.
To improve the effectiveness of the coarse search and avoid missing regions that may receive photons, a relative tolerance coarse_search_rel_tol is defined in the search algorithm.
This alters the search so that coarse pixel photons that approach near the conversion surface, but do not intercept it, are also included.
This is to avoid the case that fine pixels may be missed if a coarse pixel is excluded by only considering coarse search intercepts.
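A stripped-down skeleton of this two-stage search is sketched below; hits_conversion_surface is a toy straight-line stand-in for the full solve_ivp back-propagation, observer_to_emitter is a hypothetical driver, and the value stored per pixel is a placeholder for the summand of (<ref>).

```python
import numpy as np

def hits_conversion_surface(bx, by, r_surf=30.0):
    # Toy stand-in for the ray tracer: a straight ray fired along -x from a
    # distant plane intercepts a sphere of radius r_surf iff its impact
    # parameter is small enough.
    return np.hypot(bx, by) < r_surf

def observer_to_emitter(max_xy=120.0, coarse_resolution=10, fine_resolution=25):
    powers = {}
    centres = lambda n, half: (np.arange(n) + 0.5) * (2 * half / n) - half
    db_coarse = 2 * max_xy / coarse_resolution
    for cx in centres(coarse_resolution, max_xy):
        for cy in centres(coarse_resolution, max_xy):
            # coarse search: skip coarse pixels whose test ray misses the surface
            if not hits_conversion_surface(cx, cy):
                continue
            # fine search inside each accepted coarse pixel
            for fx in centres(fine_resolution, db_coarse / 2):
                for fy in centres(fine_resolution, db_coarse / 2):
                    bx, by = cx + fx, cy + fy
                    if hits_conversion_surface(bx, by):
                        powers[(bx, by)] = 1.0   # placeholder for the Eq. summand
    return powers

image = observer_to_emitter()
```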
§ RESULTS AND DISCUSSION
We carry out the observer-to-emitter scheme as detailed above using the parameters in Table <ref>.
Because the effects of GR become greater closer to the Schwarzschild radius, we also include a higher mass NS of M_ns=2.2M_⊙ (compared to the typical M_ns=M_⊙).
We specify the resolution of the detector plane in Table <ref>.
To ensure that the total radiated power had converged, we ran simulations of increasing detector resolution at specified observing polar angles.
Hence, our choice to use a total resolution of 250 × 250 pixels provided a good balance of accuracy and simulation runtime, whilst having converged to a consistent radiated power.
We found that using fewer pixels than our choice of 250 did not significantly alter the radiated power received in the simulation until the total resolution dropped to 150.
This total resolution of the detector in each dimension equals the number of coarse resolution pixels times the number of fine resolution pixels.
For the differing axion masses, the conversion surface changes size (e.g. see Fig. <ref>). To maintain a resolution of a constant number of pixels, we therefore have to use different pixel sizes for different axion masses.
In the case of m_a=10, the pixel size is Δ b=120; for simulations using m_a=1, the pixel size is Δ b=480.
§.§ GJ vs GLP Conversion Surface
Firstly, we explore the effect on the conversion surface by changing the magnetic field model.
Each magnetic field choice alters the charge density, hence the plasma frequency and ultimately the point at which the resonance condition ω_p=m_a is satisfied.
A 2D cross-section of various conversion surfaces using the GJ and GLP models is shown in Fig. <ref>.
This plot clearly shows that the conversion surface moves closer to the surface of the star when considering the GLP model.
However, the general shape of the conversion surface remains the same, with a bulb at either pole and a central torus around the equator.
Both of these regions of the conversion surface are separated by throats that extend below the NS's surface.
The throats occur due to a change in sign of the charge density, meaning the plasma frequency tends to zero in the region by the throats.
Hence, from (<ref>), the conversion surface radius also tends to zero.
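As a simple numerical illustration of how such cross-sections can be generated, the sketch below solves ω_p(r,θ) = m_a for r at fixed θ with a bracketing root finder; the profile omega_p, the helper conversion_radius and all numbers are toy assumptions rather than the GJ/GLP expressions.

```python
import numpy as np
from scipy.optimize import brentq

def omega_p(r, theta, wp_surf=1.0e3, r_ns=10.0):
    # toy axisymmetric plasma-frequency profile (illustrative only)
    return wp_surf * np.sqrt(np.abs(3.0 * np.cos(theta)**2 - 1.0)) * (r_ns / r)**1.5

def conversion_radius(theta, m_a, r_ns=10.0, r_max=1.0e5):
    # solve omega_p(r, theta) = m_a; return None where no resonance exists
    # outside the star (e.g. near the throats, where the charge density and
    # hence the plasma frequency tend to zero)
    f = lambda r: omega_p(r, theta) - m_a
    if f(r_ns) <= 0.0:
        return None
    return brentq(f, r_ns, r_max)

radii = [conversion_radius(t, m_a=1.0) for t in np.linspace(0.0, np.pi, 91)]
```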
We also display the effect of altering the NS's mass in <ref>.
The effects of spacetime curvature are enhanced when the mass of the NS is increased, and hence, there is a greater difference between the two magnetosphere models.
This is because the GLP model depends on the mass of the NS[The GJ model has no dependence on the NS's mass as it is derived in flat space. However, even in that case, the NS mass will still affect the ray tracing by affecting the spacetime geometry.], ultimately affecting the charge density and, hence, the plasma frequency that gives the conversion surface.
The effect of increasing the NS mass is even clearer for the higher axion mass as the conversion surface is brought closer to the NS when choosing a heavier axion mass.
This is shown for m_a=10 in <ref>(b) as opposed to the lighter mass axion used for (a).
Most notably, the throats' shape, position, and size are altered for the higher mass NS and m_a=10.
§.§ Isotropic Plasma
By using the simpler isotropic plasma case (<ref>), we can glimpse any initial difference in the total power a distant observer receives.
The main effect of changing the magnetosphere model will be on the charge density, and hence, through (<ref>), the plasma frequency around the NS.
Ultimately, this will cause the back-traced photons to have altered trajectories through the plasma.
This will also result in a change to the values returned by (<ref>), (<ref>) and (<ref>) at the point of conversion for a photon.
The result of switching the magnetosphere model on the period averaged radiated power across different observing angles is shown in Figs. <ref> and <ref>.
The latter figure uses a higher mass NS in the simulations.
For the results presented in Fig. <ref>, we see only minor differences between the GJ and GLP models.
However, by increasing the mass of the NS, the changes introduced by spacetime curvature will become more important.
Most importantly, increasing the mass significantly alters the GLP magnetosphere model while leaving the GJ magnetosphere model unchanged.
Changing the mass of the NS also has consequences on the ray tracing due to the changes in the metric.
Also, the axions will have a greater momentum through the plasma as seen in (<ref>).
The result of all of this can be seen in Fig. <ref>: by increasing the mass of the NS, the total radiated power is increased.
It also increases the difference between the two magnetosphere models, especially in the m_a=10 case.
This suggests that a GR magnetosphere may be important to consider in the results of searches around higher-mass NSs.
As a test to ensure that we have implemented the physical processes correctly in our numerical simulations, we compare the results we obtain with the GJ magnetosphere to those of the previously published work of MW23 <cit.>.
We see in Fig. <ref> that our results reproduce theirs reasonably well.
The slight deviation is likely due to differences in numerical solver tolerances and the resolution of the detector plane.
We also only need to evaluate the polar angles from 0 to π/2 radians due to the symmetry of the magnetic field.
To study whether there is a reasonable difference between the total radiated power of the GJ and GLP models, we present the absolute percentage difference between our results using the GJ and GLP models in Figs. <ref> and <ref>.
To judge whether the difference between the two models is significant, we use the difference between our simulations using the GJ magnetosphere and the results from MW23.
This provides an estimate of `uncertainty' in implementing the two models.
In both Figs. <ref> and <ref>, the simulations using m_a=1 have large differences present at θ_ obs∼ 1 radian which coincides with the substantial flux received from the throat of the conversion surface.
When considering the conversion surfaces in Fig. <ref>, the throats are not as deep for the GLP case.
Hence, back-traced photons will not `bounce' off the conversion surface as much down its throat, yielding less radiated power.
In the m_a=10 simulations, this issue is not prevalent due to the throat not being as deep and intercepting the NS surface much closer to the opening.
Interestingly, the interception of the NS surface leads to a slight dip in radiated power near θ_obs∼ 1 rad in the bottom panel Fig. <ref>.
In Fig. <ref>, for the m_a=10 case, a reasonable difference is present between the GJ and GLP models across most viewing angles.
This difference is also greater than the difference between GJ and the MW23 data.
For the GLP model, from Table <ref>, there is an average absolute difference of 26% over all the viewing angles, which, when compared to the difference with MW23 (an average of 9.8%), appears to be a significant change in power.
Looking closer at Fig. <ref>, we see that some viewing angles have larger changes in power, while a few have a minimal difference.
In Fig. <ref> for both axion masses, the difference between the GJ and GLP models is similar to the difference between GJ and MW23.
Hence, the GLP model does not introduce a significant difference in the lower-mass NS case.
For a higher-mass NS, however, with a conversion surface close to the Schwarzschild radius, a GR magnetosphere induces an appreciable difference in power.
§.§ Anisotropic Plasma
Earlier in this article, we discussed the anisotropic plasma relationships (<ref>).
However, our implementation of this more complex plasma, discussed for example in Ref. <cit.>, did not produce results that reliably matched the results presented by MW23 for the GJ model.
Hence, we have excluded any of our results using the dispersion relation (<ref>).
In future work, we will try and determine the reason for the difference.
§ CONCLUSION
In the work carried out here, we discussed the implementation of the GLP magnetosphere model that is derived in a curved spacetime using the Schwarzschild metric.
This GR model was then employed in numerical simulations to study the effect on axion-photon conversion signals from NS.
This is in comparison to the recent numerical simulations of MW23, which use the flat-space-derived GJ model.
Using an isotropic dispersion relation, we compared the difference in total radiated power across the observing angles of the neutron star between the two models that were simulated for this article and the simulation data supplied by MW23.
For the M_ ns=2.2M_⊙ case, we found the average absolute percentage difference between the GLP and GJ models was 26% for m_a=10μeV. This was about 2.7 times greater than the average difference between our GJ model and that of MW23. The m_a=1μeV case had an even larger 32% difference between the GLP and GJ models, but this was only about 2.1 times greater than the average difference between our GJ model and that of MW23 for this case.
In the cases of M_ ns=1.0M_⊙, implementing the GLP model appears to have little effect on the radiated power for an isotropic plasma.
The conclusion that the higher mass case has the greatest difference intuitively makes sense.
This is due to the stronger dependence on spacetime curvature effects when the conversion surface is close to the Schwarzschild radius of the NS.
We thank Jamie McDonald, Harrison Ploeg, and Sam Witte for helpful comments, and Sam Witte additionally for making available some of the simulation data from MW23 to include in our Figs. <ref>, <ref>, and <ref>.
We also thank Alexandru Lupsasca for helpful comments on the difference between the GLP model's choice of coordinates and our definition of the same coordinates.
§ GLP - INCLINED ROTATOR
The third paper in the series by Gralla et al. <cit.> extends their work by including a misalignment between the rotation and magnetic field axes.
They suggest that the results from their first paper can be modified by a spatial coordinate change, using a set of spatial coordinates about the rotation axis and another set about the magnetic field axis.
The rotation axis Ω is chosen to be in line with the Cartesian axis z, after which the typical spherical polar coordinates (r, θ, ϕ) are defined about this axis.
This is the coordinate system a stationary observer will be using.
The magnetic field symmetry axis e is inclined by a polar angle χ to the rotation axis and will have azimuthal angle Ω t due to the star's rotation.
The polar coordinates (r, ϑ, φ) are defined around the axis e such that ϑ is the polar angle measured away from the axis and φ is the azimuthal angle around the axis measured from a line pointing at the rotation axis Ω, such that in this coordinate system the magnetic field is independent of time.
We can also introduce an azimuthal angle, which measures the angle from e to ϕ in the (r,θ,ϕ) frame. Explicitly, this can be expressed as λ=ϕ-Ω t.
Given axisymmetric field functions defined about the magnetic field symmetry axis, the coordinates (r, ϑ, φ) can be used to maintain the field functions' symmetry.
These coordinates are shown in Fig. <ref>.
Hence, we may take the two functions from Gralla et al. <cit.> that describe an aligned dipole, (<ref>) and (<ref>), and perform the coordinate change θ→ϑ and ϕ→φ. This results in,
ψ_1^near(r, ϑ) = μ R^>_1(r)sin^2ϑ,
ψ_2(φ) = φ,
where the time dependence no longer exists in the rotated frame due to the axisymmetric field functions.
The coordinate angles ϑ and φ need to be related to the angles θ and ϕ so that we may compute the fields and charge density in an observer's frame. A method to derive these relations is using spherical triangles, and in particular, the relationships are,
cosϑ = cosθcosχ + sinθcosλsinχ,
tanφ = sinθsinλ/(cosθsinχ + sinθcosλcosχ),
which are found using the equations and figures in Appendix <ref>.[These relationships differ from Ref. <cit.>.
This is due to different coordinate definitions.
However, the results remain unaffected.]
The relationships (<ref>), (<ref>), (<ref>) and (<ref>) can all be combined with (<ref>) to give the electromagnetic tensor around an inclined star with GR corrections due to the curved spacetime.
We can then use this electromagnetic tensor with (<ref>) to find the magnetic field strength throughout the star's magnetosphere.
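For reference, a direct transcription of the two angle relations above into code might look as follows; the function name rotated_angles is hypothetical, and arctan2 is used so that φ lands in the correct quadrant, with the sign conventions taken from the displayed expressions.

```python
import numpy as np

def rotated_angles(theta, phi, chi, omega_t):
    # angles (vartheta, varphi) about the magnetic axis, from the angles
    # (theta, phi) about the rotation axis; lambda = phi - Omega t
    lam = phi - omega_t
    cos_vt = np.cos(theta) * np.cos(chi) + np.sin(theta) * np.cos(lam) * np.sin(chi)
    varphi = np.arctan2(np.sin(theta) * np.sin(lam),
                        np.cos(theta) * np.sin(chi) + np.sin(theta) * np.cos(lam) * np.cos(chi))
    return np.arccos(np.clip(cos_vt, -1.0, 1.0)), varphi
```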
Lastly, we need the charge density for this NS model to provide the plasma. In the (r, θ, ϕ) frame, Ref. <cit.> gives the charge density as,
J^t̂=Ω-Ω_z/r(r-2 G M){ ∂_θψ_1 ∂_θ∂_ϕψ_2 - ∂_θψ_2 ∂_θ∂_ϕψ_1 + r(r-2 G M)(∂_r ψ_1 ∂_r ∂_ϕψ_2-∂_r ψ_2 ∂_r ∂_ϕψ_1)
-∂_ϕψ_1[(1-2 G M/r) ∂_r(r^2 ∂_r ψ_2)+∂_θ(sinθ∂_θψ_2)/sinθ]
+ ∂_ϕψ_2[(1-2 G M/r) ∂_r(r^2 ∂_r ψ_1)+∂_θ(sinθ∂_θψ_1)/sinθ]}.
Here, (<ref>) was found via (<ref>) using (<ref>) with the coordinate dependencies induced by (<ref>) and (<ref>). For a stationary Schwarzschild observer,[Who will have a 4-velocity U^a=(√(-g^tt), 0, 0, 0).] the charge density ρ_e is given by,
ρ_e=U_aJ^a=J^t√(1-r_s/r)=J^t̂.
This gives us a well-defined near-field approximation of a misaligned dipole magnetic field in covariant form which accounts for the curvature of spacetime.
§ SOLUTIONS OF SPHERICAL TRIANGLES
The spherical Law of Cosines is given as (e.g. <cit.>),
cos c = cos a cos b + sin a sin b cos C,
and the spherical Law of Sines is given as,
sin a/sin A = sin b/sin B,
where the lower case represents the side length and the upper case represents the corresponding angle (see Fig. <ref>).
Upon rearranging the spherical Law of Cosines for cos C, we get,
cos C = (cos c - cos a cos b)/(sin a sin b).
We can replace a, b, and c with the coordinate angles around the rotation axis (θ, ϕ) and around the magnetic field axis (ϑ, φ) (see Fig. <ref>),
cosϑ = cosχcosθ + sinχsinθcosλ,
sinϑ/sinλ = sinθ/sin(-φ),
cos(-φ) = (cosθ - cosχcosϑ)/(sinχsinϑ).
All three equations can be combined to remove the dependency on ϑ from (<ref>). By replacing sinϑ using (<ref>) and cosϑ using (<ref>)
cosφ = -sinφ[cosθ - cosχ(cosχcosθ + sinχsinθcosλ)]/(sinχsinλsinθ),
where we have also simplified the negative arguments. This then becomes,
cosφ/sinφ = (-cosθ + cos^2χcosθ - cosχsinχsinθcosλ)/(sinχsinλsinθ),
tanφ = sinχsinλsinθ/(sin^2χcosθ - cosχsinχsinθcosλ),
tanφ = sinλsinθ/(sinχcosθ - cosχsinθcosλ).
http://arxiv.org/abs/2409.03527v1 | 20240905133659 | The Kaufmann--Clote question on end extensions of models of arithmetic and the weak regularity principle | ["Mengzhou Sun"] | math.LO | ["math.LO", "03C62, 03F30, 03H15 (Primary) 03F35 (Secondary)"] |
§ ABSTRACT
We investigate the end extendibility of models of arithmetic with restricted elementarity. By utilizing the restricted ultrapower construction in the second-order context, for each n∈ and any countable model of Σ_n+2, we construct a proper Σ_n+2-elementary end extension satisfying Σ_n+1, which answers a question by Clote positively.
We also give a characterization of countable models of Σ_n+2 in terms of their end extendibility similar to the case of Σ_n+2.
Along the proof, we will introduce a new type of regularity principles in arithmetic called the weak regularity principle, which serves as a bridge between the model's end extendibility and the amount of induction or collection it satisfies.
[
[
=====
§ INTRODUCTION
End extensions are of significant importance and have been studied intensively in the model theory of arithmetic: The classical MacDowell–Specker theorem <cit.> showed that every model of PA admits a proper elementary end extension. Around two decades later, Paris and Kirby <cit.> studied the hierarchical version of the MacDowell–Specker theorem for fragments of PA.
Let M be a countable model of Δ_0. For each n∈, M satisfies Σ_n+2 if and only if M has a proper Σ_n+2-elementary end extension K.
For the left-to-right direction, the theorem above does not explicitly specify what theory the end extension K can satisfy.
This amount of elementarity stated in the theorem already implies KΣ_n, and Paris–Kirby's proof actually indicates that K cannot always satisfy Σ_n+1, since this would imply MΣ_n+3. Moreover, for each n∈, Cornaros and Dimitracopoulos <cit.> constructed a countable model of Σ_n+2 which does not Σ_n+1-elementarily end extend to any model of Σ_n+1.
So with Σ_n+2-elementarity, the theory that the end extension K may satisfy lies between Σ_n and Σ_n+1, and the following question arises naturally:
[Kaufmann–Clote]
For n∈, does every countable model MΣ_n+2 have a proper Σ_n+2-elementary end extension KΣ_n+1?
The question was included in the list of open problems in <cit.> edited by Clote and Krajíček.
It was first raised by Clote in <cit.>, where he noted that the same question in the context of models of set theory had been previously posed by Kaufmann in <cit.>.
In the same paper, Clote <cit.> showed that every countable model MΣ_n+2 admits a Σ_n+2-elementary proper end extension to some K M-Σ_n+1, which means all the instances of Σ_n+1:
x a yϕ(x,y)→ b x a y bϕ(x,y)
where a∈ M while parameters in K are allowed in ϕ(x,y)∈Σ_n+1(K).
Cornaros and Dimitracopoulos <cit.> showed that every countable model MΣ_n+2 has a Σ_n+2-elementary proper end extension KΣ_n+1^-(the parameter-free Σ_n+1-collection).
In this paper, we give an affirmative answer to Question <ref>.
The original proof of Paris–Kirby's theorem is based on a first-order restricted ultrapower construction. The ultrapower is generated by a single element when viewed from the ground model. One can show that, by relativizing the proof of pointwise Σ_n+1-definable models do not satisfy Σ_n+1 (e.g., <cit.>, such ultrapowers always fail to satisfy Σ_n+1 in the question.
To tackle this issue, we expand our model into a second-order structure satisfying _0 and the end extension K will be a second-order restricted ultrapower with respect to that structure.
For us, one of the motivations of studying this question is to find a model-theoretic characterization of (countable) models of Σ_n+2 analogous to Theorem <ref>.
Despite the fact that the extension in Question <ref> is insufficient for characterization, a slight generalization of it will suffice.
We will show that for any countable model MΔ_0+exp,
MΣ_n+2 if and only if M admits a proper Σ_n+2-elementary end extension K MΣ_n+1, whose definition is similar to MΣ_n+1.
The regularity principle is the key to connecting end extensions with the arithmetic theories that the models satisfy.
Through the end extension, we can employ a `nonstandard analysis' style argument to prove certain types of regularity principle in the ground model.
Here we provide an example of such an argument. Notice that KΣ_n+1 plays a crucial role in the proof.
For each n∈, let MΔ_0. If M admits a proper Σ_n+2-elementary end extension KΣ_n+1, then M satisfies the following principle:
x y aϕ(x,y)→ y a xϕ(x,y).
where a∈ M, ϕ(x,y)∈Π_n+1(M) and ∃^ x abbreviates b∃ x>b.
Suppose M x y aϕ(x,y), then it is equivalent to a Π_n+1-formula over Σ_n+1. Since both M and K are models of Σ_n+1,
K satisfies the same formula by elementarity.
Pick some arbitrary d>M in K and let c<a such that Kϕ(d,c). Now for each b∈ M, K x bϕ(x,c) and it is witnessed by d. Transferring each of these formulas back to M by elementarity, we have M x bϕ(x,c) for any b∈ M, which means M xϕ(x,c).
We call the principle above weak regularity principle, and denote it by ϕ.
The proposition above, together with the affirmative answer to Question <ref>, implies that Σ_n+2⊢Π_n+1 for each n∈.
Similar to the argument above, we will show that if the extension K MΣ_n+1, then the ground model M will satisfy some other form of weak regularity principle that implies Σ_n+2.
The paper is organized as follows:
In Section 2, we present the necessary notations and fundamental facts regarding models of arithmetic.
In Section 3 we review the definition of the second-order restricted ultrapower, and state some basic properties of it.
In Section 4, we provide an affirmative answer to Question <ref> and present the construction of an end extension that characterizes countable models of Σ_n+2 as mentioned above.
In Section 5, we formally introduce the weak regularity principle ϕ, and calibrate its strength within the I-B hierarchy.
Finally, putting the results in Section 4 and Section 5 together, we establish a model-theoretic characterization of countable models of Σ_n+2 analogous to Theorem <ref>.
§ PRELIMINARIES
We assume the reader is familiar with some basic concepts and facts in model theory of first- and second-order arithmetic <cit.>. We reserve the symbol for the set of standard natural numbers.
For each n∈, let Σ_n and Π_n be the usual classes of formulas in the arithmetic hierarchy of first-order arithmetic.
Given a model of first-order arithmetic M, a formula is Δ_n over M if it is equivalent to both a Σ_n and a Π_n formula in M, or simply Δ_n if the model involved is clear from the context.
Σ_n∧Π_n is the class of formulas which is the conjunction of a Σ_n- and a Π_n-formula, and Σ_n∨Π_n is defined similarly.
Σ_0(Σ_n) is the closure of Σ_n formulas under Boolean operations and bounded quantification.
Σ_n^0, Π_n^0, Δ_n^0 and Σ_0(Σ_n^0) are their second-order variants respectively.
Given a model of first-order arithmetic M, Δ_n(M) is the class of Δ_n-formulas over M, potentially including parameters from
M that are not explicitly shown.
Σ_n(M), Π_n(M) are defined similarly, as well as their second-order variants Δ_n^0(M,𝒳), Σ_n^0(M,𝒳) and Π_n^0(M,𝒳) for some model of second-order arithmetic (M,𝒳).
Finally, x… is the abbreviation of bxb…. For each n∈, let BΣ_n and IΣ_n be the collection scheme and the induction scheme for Σ_n formulas, respectively. We assume that all the BΣ_n include IΔ_0, and all the theories considered include PA^-, which is the theory of non-negative parts of discretely ordered rings. BΣ_n^0 and IΣ_n^0 are their second-order counterparts, respectively.
For any element c in some model MΔ_0+exp, we identify c with a subset of M by defining x∈ c to mean the x-th digit in the binary expansion of c is 1.
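For instance, under this identification the element c = 13 = 1101_2 codes the set {0, 2, 3}, since exactly the 0th, 2nd and 3rd binary digits of 13 equal 1.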
_0 is the subsystem of second-order arithmetic consisting of Robinson arithmetic, Σ_1^0, Δ_1^0-comprehension, and _0 consists of _0 and a statement asserting that every infinite binary tree has an infinite path.
A tree T in the second-order universe is called bounded if there is a total function f in the second-order universe such that σ(x)<f(x) for any σ∈ T and x<σ.
It is provable in _0 that every infinite bounded tree has an infinite path <cit.>. For each n∈, every countable model of Σ_n+2^0 admits a countable ω-extension (i.e., an extension only adding second-order objects) to some model satisfying _0+Σ_n+2^0 <cit.>.
Considering extensions of models of arithmetic, we say an extension M⊆ K of models of first-order arithmetic is Σ_n-elementary, if all the Σ_n(M)-formulas are absolute between M and K, and we write M_Σ_n K.
We say an extension M⊆ K is an end extension, if every element of K∖ M is greater than any element of the ground model M.
This is denoted by M⊆_K, or M_,Σ_nK if we also have M_Σ_nK.
Strictly speaking, we view second-order structures as two-sorted first-order structures, so by extensions of second-order structures, we mean extensions of the corresponding two-sorted first-order structures.
We write (M,𝒳)_Σ_n^0(K,𝒴) if all the Σ_n^0(M,𝒳)-formulas are absolute between the two structures. We say an extension of second-order structures is an end extension if its first-order part is an end extension, and we denote this by (M,𝒳)⊆_(K,𝒴), or (M,𝒳)_,Σ_n^0(K,𝒴) if we also have (M,𝒳)_Σ_n^0(K,𝒴).
§ SECOND-ORDER RESTRICTED ULTRAPOWERS
The second-order restricted ultrapower resembles the usual ultrapower construction in model theory, but instead of working on the class of all subsets of the model and all functions from the model to itself, we only consider the sets and functions in the second-order universe. For completeness, we review the definition and some basic facts about it.
All the results appear in <cit.> except Lemma <ref> and Corollary <ref>. Throughout this section we fix some arbitrary second-order structure (M,𝒳)_0.
The second-order part of (M,𝒳) forms a Boolean algebra under inclusion and Boolean operations. Let $̆ be an ultrafilter on𝒳such that all the elements of$̆ are cofinal in M, and be the class of all the total functions from M to M in 𝒳. Define an equivalence relation ∼ on by
f∼ g{i∈ M| f(i)=g(i)}∈,̆
where f,g∈.
The second-order restricted ultrapower of M, denoted by , is the set of equivalence classes [f] for f∈ modulo ∼.
The interpretations of symbols in the language of first-order arithmetic are defined similarly:
[f]+[g]=[f+g],
[f]×[g]=[f× g],
[f]<[g]{i∈ M| f(i)<g(i)}∈.̆
Here f+g and f× g are the pointwise sum and product of f and g as functions.
M naturally embeds into by identifying elements of M with constant functions.
Moreover, is a proper extension of M since the equivalence class of identity function of M is greater than any equivalence class of constant function.
admits a natural second-order expansion inherited from 𝒳, namely for A∈𝒳 and [f]∈, we define
[f]∈ A{i∈ M| f(i)∈ A}∈.̆
We denote the expanded structure of the ultrapower by (,𝒳).
It is easy to show that for i∈ M and A∈𝒳, i∈ A holds in (M,𝒳) if and only if it holds in (,𝒳), so we may view (,𝒳) as an extension of (M,𝒳), where the second-order part is an injection.
The first-order restricted ultrapower is defined similarly, but with and $̆ replaced by the corresponding first-order definable classes.
For example, theΔ_1ultrapower uses the class ofΔ_1-definable subsets andΔ_1-definable total functions.
From now on we also fix an ultrafilter$̆ on 𝒳 such that all the elements of $̆ are cofinal inM.
Generally, Ł oś's theorem does not hold for restricted ultrapowers, but a restricted version of it does hold:
Let (,𝒳) be the second-order restricted ultrapower. Then the following holds:
* If ϕ( x) is a Σ_1^0(M,𝒳)-formula, then
(,𝒳)ϕ([f])∃A∈, A⊆{i∈ M| (M,𝒳)ϕ(f(i))}.
* If ϕ( x) is a Δ_1^0(M,𝒳) formula over (M,𝒳), then
(,𝒳)ϕ([f]){i∈ M| (M,𝒳)ϕ(f(i))}∈.̆
Here the right-hand side makes sense in view of Δ_1^0-comprehension of (M,𝒳). (M,𝒳)_Σ_2^0(,𝒳).
Let (M,𝒳) x yψ(x,y) for some ψ(x,y)∈Δ_0^0(M,𝒳). By picking the least witness y of ψ(x,y), we may assume ψ(x,y) defines a total function f∈, so (M,𝒳)ψ(x,f(x)) for all x∈ M. In particular, for each g∈,
(M,𝒳)ψ(g(x),f∘ g(x)).
Here f∘ g is the composition of f and g. By restricted Ł oś's theorem, (,𝒳)ψ([g],[f∘ g]) for each [g]∈, so that (,𝒳) x yψ(x,y).
We say $̆ is additive if wheneverf∈is bounded, then there is ac∈ Msuch that{i∈ M| f(i)=c}∈$̆.
If $̆ is additive, thenis an end extension ofM.
If [f]<b for some b∈ M, then we may define
g(i)=
0, if f(i)≥ b.
f(i), if f(i)<b.
[g]=[f] and g is a total bounded function in 𝒳. The additiveness of $̆ implies that there isc∈ Msuch that{i∈ M| g(i)=c}∈$̆, that is, [g]=[f]=c.
The following lemma and corollary enable us to transfer the comprehension in(M,𝒳)into the ultrapower viaΣ_2^0-elementarity, and reduce the case ofΣ_n+2toΣ_2^0uniformly in the construction of our main result.
For each n≥ 1, if (M,𝒳) satisfies Σ_n-comprehension, then each instance of Σ_n- and Π_n-comprehension in (M,𝒳) transfers to (,𝒳). Formally speaking, for any first-order formula ϕ(x) in Σ_n(M) or Π_n(M), if there is some A∈𝒳 such that
(M,𝒳) x(x∈ A↔ϕ(x)),
then (,𝒳) x(x∈ A↔ϕ(x)) as well.
We prove the statement for all the ϕ(x) in Σ_k(M) and Π_k(M) simultaneously by induction on k≤ n.
For k=1, let ϕ(x) be any formula in Σ_1(M) or Π_1(M), then x(x∈ A↔ϕ(x)) is a Π_2^0(M,𝒳)-formula, so the same holds in (,𝒳) by Corollary <ref>. For the induction step, suppose the statement holds for all the formulas in Σ_k(M) and Π_k(M), and take any ϕ(x):= yψ(x,y)∈Σ_k+1(M) where ψ(x,y)∈Π_k(M). By Σ_n-comprehension in (M,𝒳), there exist A,B∈𝒳 such that
(M,𝒳) x(x∈ A↔ yψ(x,y)),
(M,𝒳)⟨ x,y⟩(⟨ x,y⟩∈ B↔ψ(x,y)).
By induction hypothesis, the second clause transfers to (,𝒳). We also have the following relation of A and B from the two statements above:
(M,𝒳) x(x∈ A↔ y⟨ x,y⟩∈ B).
This fact also transfers to (,𝒳) by Corollary <ref>.
Putting the two statements in (,𝒳) together, we have
(,𝒳) x(x∈ A↔ yψ(x,y)).
The case for ϕ(x)∈Π_k+1(M) is exactly the same. This completes the induction.
If (M,𝒳) satisfies Σ_n-comprehension for some n∈, then M_Σ_n+2 as an extension between models of first-order arithmetic.
Let M x yψ(x,y) for some ψ(x,y)∈Π_n(M). Take A∈𝒳 such that
(M,𝒳)⟨ x,y⟩(⟨ x,y⟩∈ A↔ψ(x,y)),
(M,𝒳) x y⟨ x,y⟩∈ A.
By Lemma <ref> and Corollary <ref>, both formulas hold in (,𝒳), which implies (,𝒳) x yψ(x,y).
§ CONSTRUCTIONS OF END EXTENSIONS
In this section, we present the constructions of end extensions by the second-order ultrapower.
We first answer Question <ref> affirmatively. In view of Corollary <ref>, we only need to deal with the case for the base level in the second-order context.
For any countable model (M,𝒳)Σ_2^0+_0, there is a proper end extension (M,𝒳)_,Σ_2^0(K,𝒳)Σ_1^0.
We will give two proofs: The first proof is suggested by Tin Lok Wong. We ensure that the second-order ultrapower satisfiesΣ_1^0by properly embedding it into a coded ultrapower as an initial segment;
the second proof guaranteesΣ_1^0directly in the construction of the ultrafilter._0plays a central role in both constructions.
Following the construction in <cit.>, by an iterated arithmetic completeness theorem within (M,𝒳), there is an end extension M⊆_LΔ_0 such that 𝒳=_M(L).
We build a coded ultrapower with respect to both M and L. Let
={g∈ L| g codes a total function from M to L},
={f∈ L| f codes a total function from M to M},
and $̆ be an ultrafilter on𝒳. Following the construction by Paris and Kirby <cit.>, let/$̆ be the coded ultrapower with respect to and $̆.
Similar to the second-order ultrapower, coded ultrapowers also satisfy restricted Łoś's theorem for Δ_0-formulas and thus L_Δ_0/$̆.
Moreover, /$̆ is a cofinal extension ofL, as each element[g]∈/$̆ is bounded by its code g∈ L.
So L_Σ_1/$̆ and in particular,/Δ_0.
Since⊆,is a substructure of/$̆.
On the other hand, since 𝒳=_M(L), is isomorphic to a second-order ultrapower with respect to 𝒳 and $̆, so(M,𝒳)_,Σ_2^0(,𝒳).
We want to pick a sufficiently generic ultrafilter$̆, such that both M and are proper initial segments of 𝒢/$̆.
We construct$̆ in countably many stages. For each k∈ we construct some A_k∈𝒳 that is cofinal in M, and A_k⊇ A_k+1.
We enumerate all the pairs ⟨ f,g⟩ such that f∈ and g∈𝒢 as {⟨ f_k,g_k⟩}_k∈, and all the functions in bounded by some b∈ M as {h_k}_k∈.Stage 0: Let A_0=M.Stage 2k+1(⊆_/$̆): Consider
A=A_2k∩{x∈ M| L g_k(x)<f_k(x)}.
SinceLΔ_0,A∈_M(L)=𝒳.
IfAis cofinal inM, then letA_2k+1=A. Otherwise letA_2k+1=A_2kand move on to the next stage.Stage 2k+2(M⊆_): Assumeh_kis bounded byb∈ M.
Then
(M,𝒳) x y b(x∈ A_2k+1∧ h_k(x)=y).
Since(M,𝒳)Σ_2^0, there is somec<bsuch that
{x∈ M| (M,𝒳) x∈ A_2k+1∧ h_k(x)=c}
is cofinal inM. Let this set beA_2k+2and move on to the next stage.
Finally, let=̆{A∈𝒳| k A_k⊆ A}. It is not hard to see that$̆ is an ultrafilter and each element of $̆ is cofinal inM. This completes the construction of$̆.Verification: We verify that M⊆_⊆_/$̆. For⊆_/$̆, fix any g∈.
For each f∈, take k∈ such that ⟨ f_k,g_k⟩=⟨ f,g⟩.
At stage 2k, if
A=A_2k∩{x∈ M| g(x)<f(x)}
is cofinal in M, then {x∈ M| g(x)<f(x)}∈$̆.
Letĝ∈𝒢be defined by
ĝ(x)=
g(x), if g(x)< f(x).
0, if g(x)≥ f(x).
Then [ĝ]∈ and [g]=[ĝ] in /$̆. Otherwise, if A is bounded in M, then we are forced to have {x∈ M| g(x)≥ f_k(x)}∈$̆. By the restricted Łoś's theorem in /$̆, /[g]>[f_k]. So is a proper initial segment of /$̆, which implies (,_(/)̆)Σ_1^0.
ForM⊆_, supposeh∈is bounded. Then, takek∈such thath=h_k, and at stage2k+2the choice ofA_2k+2forces{x∈ M| (M,𝒳) h(x)=c}∈$̆ for some c∈ M. So $̆ is additive with respect to, andM⊆_by Lemma <ref>.
For eachA∈𝒳, leta∈ Lbe the element that codesA⊆ M. By the restricted Ł oś's theorem forΔ_0-formulas in bothand/$̆, it is not hard to prove that for each f∈,
(,𝒳) [f]∈ A(/,̆𝒳) [f]∈ a.
So we may embed the second-order part 𝒳 of (,𝒳) into _(/)̆.
Since (,_(/)̆)Σ_1^0, we have (,𝒳)Σ_1^0.
For each n∈ and any countable model MΣ_n+2, there is a Σ_n+2-elementary proper end extension M_,Σ_n+2KΣ_n+1.
We first expand M to a second-order structure satisfying Σ_2^0 by adding all the Σ_n-definable sets into the second-order universe, then we further ω-extend it to some countable (M,𝒳)Σ_2^0+_0. By Theorem <ref>, there is an ultrapower (M,𝒳)_,Σ_2^0(,𝒳) which satisfies Σ_1^0. Since all the Σ_n-definable sets of M are in 𝒳, (M,𝒳) satisfies Σ_n-comprehension and M_,Σ_n+2 by Corollary <ref>.
For Σ_n+1, suppose x [g] yθ(x,y,[f]) for some [g]∈ and θ∈Π_n where [f]∈ is the only parameter in θ.
By Π_n-comprehension in M, let A∈𝒳 such that
(M,𝒳)⟨ x,y,z⟩ (⟨ x,y,z⟩∈ A↔θ(x,y,z)).
(,𝒳) satisfies the same formula by Lemma <ref>, and thus
(,𝒳) x [g] y⟨ x,y,[f]⟩∈ A.
By Σ_1^0 in (,𝒳),
(,𝒳) b x [g] y b⟨ x,y,[f]⟩∈ A,
which means b x [g] y bθ(x,y), so Σ_n+1.
Yet simple, this construction does not reveal a syntactical proof ofΣ_n+2⊢Π_n+1. We make(,𝒳)Σ_1^0by embedding it into a larger ultrapower/$̆ as an initial segment, and the core argument is wrapped inside the construction of /$̆.
Here we present another construction that directly guarantees each instance ofΣ_1^0in the ultrapower, and hence provide more insights. The construction relies on a simple yet powerful lemma resulting from_0, which states that if aΠ_1^0-definable bounded multi-valued function is total, then we may select a single-valued choice function of it within the second-order universe.
The lemma also leads to a syntactical proof ofΣ_n+2⊢Π_n+1.
Fix a model (M,𝒳)_0. Let θ(x,y,z)∈Δ_0^0(M,𝒳). If (M,𝒳) x yf(x) zθ(x,y,z) for some total function f∈𝒳, then there is a total function P∈𝒳 such that:
(M,𝒳) x(P(x)<f(x)∧ zθ(x,P(x),z)).
Consider the following tree T which is Δ_1^0-definable in (M,𝒳):
σ∈ Tx,zσ(σ(x)<f(x)∧θ(x,σ(x).z))
Obviously T is bounded by f, so we only need to show that T is infinite.
Let F(x)=max_x'< xf(x').
For any x∈ M, by Σ_1^0, let σ_x∈ M be a coded sequence of length x such that for any x'<x and y'<F(x),
σ_x(x')=y' (M,𝒳) zθ(x',y',z)∧ w y' zθ(x',w,z).
Then we have
(M,𝒳)x'x z(σ_x(x')<f(x)∧θ(x',σ_x(x'),z)).
This means σ_x is an element of T of length x, so T is infinite.
By _0, take an infinite path P of T. Clearly P satisfies the two requirements above.
We construct an ultrafilter $̆ on𝒳inωmany stages.
Along the construction, we gradually guarantee that the ultrapower(,𝒳)Σ_1^0and$̆ is additive.
Enumerate all the triples {⟨ z θ_k(x,y,z),f_k, g_k⟩}_k∈, where θ_k(x,y,z)∈Δ_0^0(M,𝒳) and f_k,g_k are total functions in 𝒳, and enumerate all the bounded total functions in 𝒳 as {h_k}_k∈. For each k∈, at stage k we construct a cofinal set A_k∈𝒳 such that A_k⊇ A_k+1 for all k∈, and the resulting ultrafilter =̆{A∈𝒳|k A⊇ A_k}.Stage 0: Set A_0=M∈𝒳.Stage 2k+1((,𝒳)Σ_1^0):
At these stages we want to guarantee the following instances of Σ_1^0:
y[g_k] zθ_k([f_k],y,z)→ by[g_k]zbθ_k([f_k],y,z).
The general idea is that we first try to `force' the consequent of above implication to be true in (,𝒳). If we succeed, then the entire statement is true. Otherwise, we apply Lemma <ref> to argue that the antecedent is already guaranteed to be false in the ultrapower.
Consider the Σ_1^0-definable set
A=A_2k∩{x∈ M| byg_k(x)zbθ_k(f_k(x),y,z)}.
If it is cofinal in M, then there is a cofinal subset A^*∈𝒳 of A by Hájek–Pudlák <cit.>. Let A_2k+1=A^* and proceed to stage 2k+2. If A is not cofinal in M, we let A_2k+1=A_2k and proceed directly to stage 2k+2.Stage 2k+2(Additiveness of $̆): This part is exactly the same as the construction of stage2k+2in the first proof of Theorem <ref>.
Finally, let=̆{A∈𝒳| k A⊇ A_k}. It is not hard to see that$̆ is an ultrafilter, and each element of $̆ is cofinal inM. This completes the whole construction. Verification: Let(,𝒳)be the corresponding second-order restricted ultrapower. We show that(M,𝒳)_,Σ_2^0(,𝒳)and(,𝒳)Σ_1^0. The elementarity is given by Corollary <ref>.(M,𝒳)⊆_(,𝒳)follows from the exact same reasoning as in the first proof of Theorem <ref>. For(,𝒳)Σ_1^0, consider an arbitrary instance ofΣ_1^0in(,𝒳):
y[g] zθ([f],y,z)→ by[g]zbθ([f],y,z),
whereθ∈Δ_0^0(M,𝒳). Here without loss of generality, we assume there is only one first-order parameter[f]∈inθ.
Assume at stage2k+1, we enumerate⟨ z θ_k(x,y,z),f_k, g_k⟩=⟨ zθ(x,y,z),f,g⟩, andA_2k∈𝒳is the cofinal subset ofMwe obtained from the previous stage.
Suppose we are in the first case of the construction in this stage, i.e.,
A=A_2k∩{x∈ M| byg(x)zbθ(f(x),y,z)}
is cofinal inM, then by the construction there exists someA^*⊆ Ain$̆. By restricted Ł oś's theorem, (,𝒳) by[g]zbθ([f],y,z), so the instance of Σ_1^0 is true. If we are in the second case, assuming A is bounded by d∈ M, then
(M,𝒳) x d(x∈ A_2k→ b y g(x) z bθ(f(x),y,z)).
By Σ_1^0 in M, this fact is equivalent to:
(M,𝒳)xdyg(x)(x∈ A_2k→ zθ(f(x),y,z)).
By Lemma <ref>, there is a total function P∈𝒳 such that
* (M,𝒳)xd P(x)<g(x).
* (M,𝒳)xd (x∈ A_2k→ zθ(f(x),P(x),z) ).
The first clause implies (,𝒳) [P]<[g] by restricted Ł oś's theorem. Suppose (,𝒳) zθ([f],[P],z), then by restricted Ł oś's theorem for Σ_1^0 formulas, there is some A'∈$̆ such thatA'⊆{x∈ M| zθ(f(x),P(x),z)}, but by the second clause,A'∩ A_2kis bounded byd, which contradictsA'∩ A_2k∈$̆. So (,𝒳) zθ([f],[P],z), and the instance of Σ_1^0 considered is vacuously true.
We now proceed to the construction of end extensions for characterizing countable models ofΣ_n+2. We first defineK MΣ_n+1for an end extensionM_ K, and introduce some equivalent definitions of it.
For each n∈, let M,K be models of Δ_0+exp and M⊆_ K.
We say K MΣ_n+1 if for any ϕ(x)∈Σ_n+1(K) and a∈ M,
Kϕ(0)∧ x a(ϕ(x)→ϕ(x+1))→ x aϕ(x).
Notice that we allow parameters in K in ϕ while the bound a must be in M.
For each n∈, let M be a model of Δ_0+exp, KΣ_n and M⊆_ K, then the following are equivalent:
* K MΣ_n+1.
*
For any ϕ(x)∈Σ_n+1(K) and a∈ M,
K cx a(ϕ(x)↔ x∈ c).
*
For any θ(x,y)∈Π_n(K) and a∈ M,
K b x a( yϕ(x,y)↔ y bθ(x,y)).
We show (1)⇔(2) and (2)⇔(3).
For (1)⇒(2), first by modifying a standard argument, one can show that (1) implies the least number principle for Π_n+1-formulas that are satisfied by some element of M.
Then we pick the least c<2^a∈ M such that
K x a(ϕ(x)→ x∈ c).
Such c will code ϕ(x) for x<a by the minimality of c.
For (2)⇒(1), take the code c of ϕ(x) below a∈ M. Then, one can prove the instance of MΣ_n+1 for ϕ(x) by replacing ϕ(x) with x∈ c and applying KΔ_0.
For (2)⇒(3), take the code c of {x<a| K yθ(x,y)} by (2). Consider the Σ_n+1-formula
Φ(v):= b x v(x∈ c↔ y bθ(x,y)).
It is not hard to show that KΦ(0)∧ v(Φ(v)→Φ(v+1)).
By MΣ_n+1 (from (2)⇒(1)) we have KΦ(a), which implies (3). For (3)⇒(2), let ϕ(x):= yθ(x,y) for some θ∈Π_n(K).
By lem:equivMind:3, there is some b∈ K such that
K x a( yθ(x,y)↔ y bθ(x,y)).
By Σ_n in K, there is some c∈ K that codes {x<a| K y bθ(x,y)}. Such c will serve as the witness for (2).
For any countable model (M,𝒳)Σ_2^0, there is a proper end extension (M,𝒳)_,Σ_2^0(K,𝒳) such that for any Σ_1^0(K,𝒳)-formula ϕ(z) and a∈ M, the set {z<a| (K,𝒳)ϕ(z)} is coded in K.
The construction is a mild generalization of the construction in <cit.>.
We make the ultrapower (,𝒳) satisfy the coding requirement by maximizing each Σ_1^0-definable subset of that is bounded by some element of M.
Enumerate all the pairs {⟨ yθ_k(x,y,z),a_k⟩}_k∈ where θ_k(x,y,z)∈Δ_0^0(M,𝒳) and a_k∈ M, and enumerate all the bounded total functions in 𝒳 as {h_k}_k∈.
At each stage k we construct a cofinal set A_k∈𝒳 such that A_k⊇ A_k+1 for all k∈, and the resulting ultrafilter =̆{A∈𝒳|k A⊇ A_k}.
Stage 0: Let A_0=M.Stage 2k+1(coding Σ_1^0 sets): Consider the following Π_2^0-formula where c<2^a_k:
x(x∈ A_2k∧ z c yθ_k(x,y,z)).
By (M,𝒳)Σ_2^0, there exists a maximal c_0<2^a_k satisfying the formula above.
Similar to the alternative proof of Theorem <ref>, let A_2k+1∈𝒳 be a cofinal subset of the following Σ_1^0-definable subset of M:
{x∈ M| (M,𝒳) x∈ A_2k∧ z c_0 yθ_k(x,y,z)}.
Stage 2k+2 (Additiveness of $̆): This part is exactly the same as stage2k+2in the proof of Theorem <ref>.
Finally, let=̆{A∈𝒳| k A⊇ A_k}. It is not hard to see that$̆ is an ultrafilter, and each element of $̆ is cofinal inM. This completes the whole construction.
Verification: Let(,𝒳)be the corresponding second-order restricted ultrapower.(M,𝒳)_,Σ_2^0(,𝒳)follows in exactly the same way as in Theorem <ref>.
For the coding requirement of(,𝒳), consider anyΣ_1^0-formula yθ([f],y,z)whereθ∈Δ_0^0(,𝒳)and[f]∈is the only first-order parameter ofθ.
For anya∈ M, assume for somek∈, at stage2k+1we enumerate the pair⟨ yθ(f(x),y,z),a⟩. We show that the maximalc_0∈ Mwe obtained in the construction codes{z<a| (,𝒳) yθ([f],y,z)}.
For eachz<asuch thatz∈ c_0, sinceA_2k+1is a subset of{x∈ M| (M,𝒳) yθ(f(x),y,z)},(,𝒳) yθ([f],y,z)by restricted Ł oś's theorem forΣ_1^0-formulas.
On the other hand, for eachz'<asuch thatz'∉ c_0, if(,𝒳) yθ([f],y,z'), then by restricted Ł oś's theorem forΣ_1^0-formula again there is someB∈$̆ such that
B⊆{x∈ M| (M,𝒳) yθ(f(x),y,z')}.
Since B∩ A_2k+1∈$̆ is still cofinal inM, we have
x(x∈ A_2k∧ z c_0∪{z'} yθ(f(x),y,z)),
which contradicts the maximality ofc_0in the construction.
So(,𝒳) yθ([f],y,z').
For each n∈ and any countable model MΣ_n+2, there is an Σ_n+2-elementary proper end extension M_,Σ_n+2K MΣ_n+1.
The proof is mostly the same as Theorem <ref>.
We expand M to a second-order structure (M,𝒳) satisfying Σ_2^0+_0 by adding all the Δ_n+1-definable subsets of M into the second-order universe.
Such (M,𝒳) satisfies Σ_n-comprehension.
By Theorem <ref>, there exists a second-order ultrapower extension (M,𝒳)_,Σ_2^0(,𝒳) that codes all the Σ_1^0-definable subset bounded by some element of M. Since all the Σ_n-definable sets of M are in 𝒳, (M,𝒳) satisfies Σ_n-comprehension and M_,Σ_n+2 by Corollary <ref>.
For M Σ_n+1, we only need to show that satisfies condition (3) in Lemma <ref>.
Let ϕ(x):= yθ(x,y,[f]) be any Σ_n+1-formula, where θ∈Π_n and [f]∈ is the only parameter in θ, and a be some element of M. Since (M,𝒳) satisfies Σ_n-comprehension, there is some A∈𝒳 such that
(M,𝒳)⟨ x,y,z⟩ (⟨ x,y,z⟩∈ A↔θ(x,y,z)).
The same formula holds in (,𝒳) by Lemma <ref>, and thus
(,𝒳) x( y⟨ x,y,[f]⟩∈ A↔ϕ(x)).
There is some c∈ K that codes {x<a| (,𝒳) y⟨ x,y,[f]⟩∈ A}.
Such c will also code {x<a|ϕ(x)}.
§ THE WEAK REGULARITY PRINCIPLE
In the final section we provide an application of Lemma <ref>: We introduce the weak regularity principleϕ, a variant of the regularity principle, and determine its strength within the I-B hierarchy. One of the main application is to prove the converse of Theorem <ref>, and give a model-theoretic characterization of countable models ofΣ_n+2analogous to Theorem <ref>.
Mills and Paris <cit.> introduced the regularity principleϕto be the universal closure of the following formula:
x yaϕ(x,y)→ y a xϕ(x,y).
For any formula classΓ, let
Γ=Δ_0∪{ϕ|ϕ∈Γ}.
It is also shown in <cit.> thatΠ_n⇔Σ_n+1⇔Σ_n+2for eachn∈.
The weak regularity principle is defined by replacing the∃^ xby∀ xin the antecedent of implication inϕ.
Let ϕ(x,y) be a formula in first-order arithmetic with possibly hidden variables. The weak regularity principleϕ denotes the universal closure of the following formula:
x yaϕ(x,y)→ya xϕ(x,y).
For any formula class Γ, define
Γ=Δ_0∪{ϕ|ϕ∈Γ}.
The strength of the weak regularity principle behaves in a more complicated way within the arithmetic hierarchy than that of the regularity principle.
The principle for most of the natural class of formulas correspond to collection schemes, whereas induction schemes are only equivalent to the principle for a highly restricted subclass ofΣ_0(Σ_n+1)-formulas.
We will show that for eachn∈,Σ_0(Σ_n),Σ_n+1,Π_n+1and(Σ_n+1∨Π_n+1)are all equivalent toΣ_n+2,
and(Σ_n+1∧Π_n+1)is equivalent toΣ_n+2.
The weak regularity principle may also be viewed as an infinitary version of the pigeonhole principle, and a similar phenomenon arises with the strength of the pigeonhole principle in the I-B hierarchy.
Dimitracopoulos and Paris <cit.> proved that Σ_n+1 and Π_n+1 are equivalent to Σ_n+1. (Σ_n+1∨Π_n+1) and Σ_0(Σ_n+1) are equivalent to Σ_n+1.
For each n∈, Σ_n+2⊢Π_n+1.
We only show the case of n=0; the rest can be done by relativizing to the Σ_n universal set. Let MΣ_2. We first expand M to a second-order structure (M,𝒳)Σ_2^0+_0.
Let ϕ(x,y):= z θ(x,y,z) for some θ∈Δ_0.
Suppose M x yaϕ(x,y). By applying Lemma <ref> with θ and f(x) ≡ a as a constant function, we obtain some total function P∈𝒳 such that (M,𝒳) x(P(x)<a∧ zθ(x,P(x),z)). By Σ_2^0, there is some y_0<a such that there are infinitely many x satisfying P(x)=y_0, which implies M y a x ϕ(x,y). So MΠ_1.
For each n∈, Σ_n+2⊢(Σ_n+1∨Π_n+1).
Fix n∈ℕ and let M⊨Σ_n+2, ϕ(x,y)∈Σ_n+1(M) and ψ(x,y)∈Π_n+1(M). Suppose M ⊨ ∀ x ∃ y<a(ϕ(x,y)∨ψ(x,y)) for some a∈ M.
If M x b y aψ(x,y) for some b∈ M, then by Lemma <ref>, M x y aψ(x,y) and the conclusion holds.
Otherwise M x y aϕ(x,y), then by Σ_n+1, M y a xϕ(x,y) and the conclusion holds again.
There is also a direct model-theoretic proof similar to Proposition <ref>. One only need to notice that over Σ_n+1, x y a(ϕ(x,y)∨ψ(x,y)) is equivalent to a Π_n+2-formula.
For each n∈ℕ, (Σ_n∧Π_n)⊢Σ_n+1.
We prove for each k≤ n, Σ_k+(Σ_n∧Π_n)⊢Σ_k+1, and the lemma follows by induction.
Let MΣ_k+(Σ_n∧Π_n), suppose MΣ_k+1, then there is a proper cut I⊆ M defined by ϕ(y):= xθ(x,y), where θ(x,y)∈Π_k(M). Let μ(x,y):=y' yx' xθ(x',y'). Then μ(x,y)∈Π_k(M) over Σ_k and ϕ^*(y)= xμ(x,y) also defines a cut J⊆ I.
J is closed under successor by its definition.
For any x∈ M, if Mμ(x,y) for all y∈ J, then y∈ J is defined by μ(x,y) in M, which contradicts MΣ_k. So for each x∈ M, we may take the largest y∈ J satisfying μ(x,y) by Σ_k. Fixing some arbitrary a>J, we have
M⊨∀ x ∃ y<a(μ(x,y)∧¬μ(x,y+1)).
Applying (Σ_n∧Π_n), there is some y_0<a such that
M⊨∃^∞ x(μ(x,y_0)∧¬μ(x,y_0+1)).
By the definition of μ(x,y), this implies y_0∈ J and y_0+1∉ J, which contradicts the fact that J is a proper cut of M closed under successor.
So MΣ_k+1.
For each n∈ℕ, Σ_0(Σ_n)⊢Σ_n+2.
Let MΣ_0(Σ_n). We show MΠ_n, which is equivalent to Σ_n+2.
By Lemma <ref>, MΣ_n+1.
Suppose M x y a ϕ(x,y) for some ϕ∈Π_n(M), and without loss of generality, we assume M y aϕ(0,y).
For each z∈ M, we find the largest x<z such that M y aϕ(x,y), and `color' z by the witness y. Formally,
M z y a x z(ϕ(x,y)∧x'(x,z) y aϕ(x',y)),
where (x,z) refers to the open interval between x and z.
By Σ_0(Σ_n),
M y a z x z(ϕ(x,y)∧x'(x,z) y aϕ(x',y)).
which implies M y a xϕ(x,y).
For each n∈ℕ, Σ_0(Σ_n)⇔(Σ_n+1∨Π_n+1)⇔Σ_n+2.
Σ_0(Σ_n)⊢Σ_n+2 follows from Lemma <ref>. Σ_n+2⊢(Σ_n+1∨Π_n+1) follows from Corollary <ref>. (Σ_n+1∨Π_n+1)⊢Σ_0(Σ_n) is trivial.
There is an analog of Proposition <ref> for M_,Σ_n+2K MΣ_n+1 and (Σ_n+1∧Π_n+1). It also leads to a model-theoretic proof of Σ_n+2⊢(Σ_n+1∧Π_n+1).
Let M⊨Δ_0+exp. For each n∈ℕ, if there is a proper Σ_n+2-elementary end extension M_,Σ_n+2K MΣ_n+1, then M⊨(Σ_n+1∧Π_n+1).
Let θ(x,y,z)∈Σ_n(M), σ(x,y,w)∈Π_n(M) and ϕ(x,y):= ∃ zθ(x,y,z)∧∀ wσ(x,y,w)∈Σ_n+1∧Π_n+1. Suppose M ⊨ ∀ x ∃ y<a ϕ(x,y) for some a∈ M.
Over Σ_n+1, it is equivalent to the following Π_n+2 formula
x b y a( z bθ(x,y,z)∧ wσ(x,y,w)).
So both M and K satisfy the formula above by elementarity. Pick some arbitrary d∈ K∖ M, then
K b y a( z bθ(d,y,z)∧ wσ(d,y,w)).
By Lemma <ref>lem:equivMind:2, there is some b∈ K such that
K ⊨ ∀ y<a(∃ zθ(d,y,z)↔∃ z<bθ(d,y,z)),
which implies
K ⊨ ∃ y<a(∃ zθ(d,y,z)∧∀ wσ(d,y,w)).
Pick a witness c<a in M such that K ⊨ ∃ zθ(d,c,z)∧∀ wσ(d,c,w), i.e., K⊨ϕ(d,c). Now for each b∈ M, K ⊨ ∃ x>b ϕ(x,c), where this is witnessed by d. Transferring each of these formulas to M, we have M ⊨ ∃ x>b ϕ(x,c) for any b∈ M, i.e., M ⊨ ∃^∞ x ϕ(x,c).
For each n∈ℕ, (Σ_n+1∧Π_n+1)⇔Σ_n+2.
(Σ_n+1∧Π_n+1)⊢Σ_n+2 follows from Lemma <ref>. For the other direction, given any countable model MΣ_n+2, there is a proper end extension M_,Σ_n+2K MΣ_n+1 by Theorem <ref>, and then M(Σ_n+1∧Π_n+1) by Proposition <ref>.
In Hájek-Pudlák <cit.> it was shown that every Σ_0(Σ_n+1)-formula has the following normal form:
Q_1 u_1<v_1 … Q_k u_k<v_k Ψ(u_1… u_k,v_1… v_k,w_1… w_l),
where k,l∈ℕ, each Q_i is either ∀ or ∃, Ψ is a Boolean combination of Σ_n+1-formulas and the variable sets {u_i}_i≤ k, {v_i}_i≤ k and {w_i}_i≤ l are pairwise disjoint.
Our proof of Proposition <ref> can be refined to show that Σ_n+2⊢ϕ, where ϕ(x,y)∈Σ_0(Σ_n+1), and if written in the normal form above, x does not appear in {v_i}_i≤ k, i.e., x is not permitted to appear in the bound of any bounded quantifiers.
In contrast, the instance ϕ(z,y) of Σ_0(Σ_n) we used to prove Σ_n+2 in Lemma <ref> starts with ∃ x<z explicitly.
Finally, we establish the characterization of countable models of Σ_n+2 as promised.
Let M be a countable model of Δ_0. For each n∈ℕ, M satisfies Σ_n+2 if and only if M admits a proper Σ_n+2-elementary end extension K MΣ_n+1.
The direction from left to right follows by Theorem <ref>. For the other direction, if M_,Σ_n+2K MΣ_n+1, then M(Σ_n+1∧Π_n+1) by Proposition <ref>, and thus MΣ_n+2 by Theorem <ref>.
The main remaining problem now is to find a purely syntactic proof of Theorem <ref>. We conjecture that a more refined tree construction similar to the approach in Lemma <ref> would solve the problem.
Give a direct proof of Σ_n+2⊢(Σ_n+1∧Π_n+1) (and also Σ_n+2⊢ϕ where ϕ(x,y)∈Σ_0(Σ_n+1) as described in the remark above) without using end extensions.
§ ACKNOWLEDGEMENT
The author's research was partially supported by the Singapore Ministry of Education Tier 2 grant AcRF MOE-000538-00 as well as by the NUS Tier 1 grants AcRF R146-000-337-114 and R252-000-C17-114. This work is contained in the author's Ph.D. thesis. I would like to thank my two supervisors Tin Lok Wong and Yue Yang for their guidance, encouragement and helpful discussions.
I am also indebted to Leszek A. Koł odziejczyk for carefully reading drafts of the paper and providing many helpful suggestions.
|
http://arxiv.org/abs/2409.03458v1 | 20240905121433 | Non-Uniform Illumination Attack for Fooling Convolutional Neural Networks | [
"Akshay Jain",
"Shiv Ram Dubey",
"Satish Kumar Singh",
"KC Santosh",
"Bidyut Baran Chaudhuri"
] | cs.CV | [
"cs.CV"
] |
Non-Uniform Illumination Attack for Fooling Convolutional Neural Networks
Akshay Jain, Shiv Ram Dubey, Senior Member, IEEE, Satish Kumar Singh, Senior Member, IEEE,
KC Santosh, Senior Member, IEEE,
Bidyut Baran Chaudhuri, Life Fellow, IEEE
A. Jain, S.R. Dubey and S.K. Singh are with the Computer Vision and Biometrics Lab, Department of Information Technology, Indian Institute of Information Technology Allahabad, Prayagraj, Uttar Pradesh-211015, India (e-mail: [email protected], [email protected], [email protected]).
KC Santosh is with the AI Research Lab, Department of Computer Science, University of South Dakota, Vermillion, SD 57069 USA (e-mail:
[email protected]).
B.B. Chaudhuri was with the Computer Vision and Pattern Recognition Unit at Indian Statistical Institute, Kolkata-700108, India (e-mail: [email protected]).
September 9, 2024
§ ABSTRACT
Convolutional Neural Networks (CNNs) have made remarkable strides; however, they remain susceptible to vulnerabilities, particularly in the face of minor image perturbations that humans can easily recognize. This weakness, often termed as `attacks,' underscores the limited robustness of CNNs and the need for research into fortifying their resistance against such manipulations. This study introduces a novel Non-Uniform Illumination (NUI) attack technique, where images are subtly altered using varying NUI masks. Extensive experiments are conducted on widely-accepted datasets including CIFAR10, TinyImageNet, and CalTech256, focusing on image classification with 12 different NUI attack models. The resilience of VGG, ResNet, MobilenetV3-small and InceptionV3 models against NUI attacks are evaluated. Our results show a substantial decline in the CNN models' classification accuracy when subjected to NUI attacks, indicating their vulnerability under non-uniform illumination. To mitigate this, a defense strategy is proposed, including NUI-attacked images, generated through the new NUI transformation, into the training set. The results demonstrate a significant enhancement in CNN model performance when confronted with perturbed images affected by NUI attacks. This strategy seeks to bolster CNN models' resilience against NUI attacks.
[The code is available at <https://github.com/Akshayjain97/Non-Uniform_Illumination>]
While CNN models demonstrate strong performance on controlled data, their susceptibility to manipulation raises significant concerns about their robustness and suitability for real-world applications, as they can potentially fooled by data perturbation. In this context, we explore non-uniform illumination (NUI) masks that manipulate images to deceive CNN models while preserving their semantic content. Additionally, we introduce a key defense strategy involving NUI augmentation during training to enhance CNN model robustness. Given the prevalence of illumination variations in practical computer vision applications, our NUI masks offer a crucial means of bolstering model resilience.
Convolutional Neural Network; Robustness; Non-Uniform Illumination; Deep Learning; Image Categorization; Fooling Deep Models.
§ INTRODUCTION
Deep learning, a subfield of artificial intelligence, known for neural networks with multiple interconnected layers, enables the automated extraction of progressively abstract features from input data <cit.>.
Its resurgence in the 2010s was catalyzed by ample data availability, enhanced computational resources, and novel architectures such as convolutional and recurrent networks. Ongoing research in optimization, interpretability, and robustness continues to refine deep learning's efficacy and broaden its applicability across intricate real-world problem domains. The convolutional and recurrent networks made significant advancements in diverse domains including computer vision <cit.>, natural language processing <cit.>, health informatics <cit.>, and sentiment analysis <cit.>
The Convolutional Neural Networks (CNNs) are utilized for computer vision applications <cit.>, such as image recognition <cit.>, COVID-19 grading <cit.>, image quality assessment <cit.>, image super-resolution <cit.> and human action recognition <cit.>. CNN models employ backpropagation to learn the weights <cit.>. However, if a CNN model is more complex than the dataset and appropriate regularization techniques are not utilized, they are susceptible to overfitting the training data. Common regularization approaches include Dropout <cit.>, Batch Normalization <cit.>, and Data Augmentation <cit.>.
Recent studies uncovered that the CNN models can be deceived via data perturbation in multiple different ways <cit.>. To address this issue, many defense methods and network robustness aspects were studied <cit.>. However, none of them studied the robustness of CNN models against non-uniform illumination. In this paper, we propose mask-based non-uniform illumination (NUI) variations as depicted in <ref> to fool the CNN models. Existing methods for adversarial attacks and defense techniques depend on data and the model's gradient. The proposed NUI attack is data-independent and utilizes varying weights of brightness and darkness.
The majority of the techniques used to perturb test images have a few drawbacks: the need for prior knowledge of the model and dataset limits their application in unfamiliar scenarios, and they are unable to add non-uniform illumination variations to the brightness of the images, whereas the NUI attack technique adds non-uniform brightness to the image while keeping the semantic meaning intact.
The following are the contributions of this paper:
* The proposed NUI attack produces the attacked images by combining the input image with a NUI mask. Specifically, 12 NUI attack masks are presented.
* The NUI attack mask is created using several non-linear transformations generating non-uniform variations of brightness and darkness exploiting the spatial structure of the image.
* We analyze the robustness of the CNN models including VGG, ResNet, MobilenetV3 and InceptionV3 over the proposed NUI attack on various benchmark datasets, including CIFAR10, CalTech256, and TinyImageNet.
* We also train the CNN models on the NUI-attacked images to evaluate the robustness of the models when the NUI attack is used as a data augmentation technique.
The remaining paper is structured as follows: <ref> describes the related work; <ref> describes the proposed NUI attack; <ref> describes the experimental settings, datasets, and training settings used; <ref> illustrates the experimental results with observations;
and <ref> concludes the paper.
§ RELATED WORK
This section briefs about the adversarial attacks using brightness and defense mechanisms to such attacks.
§.§ Adversarial Attacks Using Brightness
Several works have focused on attacking the neural network models by perturbing the intensity values of the image pixels.
Nguyen et al. <cit.> have explored the possibility and practicality of performing real-time physical attacks on face recognition systems using adversarial light projections.
Singh et al. <cit.> have generated adversarial examples using Curriculum Learning.
The natural adversarial lighting conditions are generated by utilizing a physical lighting model proposed by Zhang et al. <cit.> for conducting an adversarial relighting attack.
Given an image, Yang et al. <cit.> have generated the adversarial examples by applying a brightness transformation to an image and feeding it into a CNN.
Hsiung et al. <cit.> have utilized the component-wise projected gradient descent and automatic attack-order scheduling to find the optimal attack composition for creating the composite adversarial examples.
Most existing methods require a neural network to generate adversarial examples. The colour channel perturbation (CCP) attack, perturbs the channels of images to generate the mixed colour channels randomly <cit.>. The impact of colour is also studied in <cit.> on the robustness of deep learning models. The paper aims to judge the robustness of CNN models against various non-uniform illumination variations generated through different masks. The proposed method is data-independent, does not require any neural network and gives a high attack success rate.
§.§ Defense Against Brightness Attacks
The primary defense mechanism employed by most methods includes the attacked samples in the training set through data augmentation and retrains the model. A survey of defense strategies is presented in <cit.>.
Agarwal et al. <cit.> have exploited the image transformations, including Discrete Wavelet Transform and Discrete Sine Transform, against adversarial perturbation using deep models.
The performance of CNN models on CCP-attacked images greatly improved when the models were trained on the training set containing the CCP-attacked samples <cit.>.
The adversarial examples generated in <cit.> are designed to be resilient against variations in real-world brightness conditions. Agarwal et al. <cit.> have developed an adversarial perturbation detector agnostic to databases, attacks, and models. Adversarial visual reconstruction is used against DeepFakes in <cit.>.
Hsiung et al. <cit.> have performed the generalized adversarial training (GAT) to enhance the robustness of the model against composite semantic perturbations, including combinations of Hue, Saturation, Brightness, Contrast, and Rotation. Recently, a self-supervised defense mechanism has been utilized in <cit.> against adversarial face images.
Premakumara et al. <cit.> have systematically investigated the amount of artificial perturbation needed to enhance the models' generalization by augmenting the data for object detection using neural networks.
We propose a primary defense mechanism against the NUI attack by employing data augmentation through NUI attack in the training set and retraining the CNN models for the image classification task. The proposed defense technique can be useful in common use cases where the input image gets distorted due to exposure to sunlight or part of the image becomes relatively darker because of reflection.
§ PROPOSED NON-UNIFORM ILLUMINATION ATTACK
In recent years, various attack methods have been investigated to judge the robustness of CNN models. However, the conventional attack methods do not take advantage of creating non-uniform illumination variations with different brightness and darkness levels.
§.§ Proposed NUI Attacks
We propose a simple yet effective non-uniform illumination (NUI) attack on test image data. The rationale behind developing this attack technique stemmed from a desire to investigate perturbation methods applicable to convolutional neural network (CNN) models which can give a high attack success rate and do not require any Neural Network to model such attack. Specifically, the aim is to explore how illumination variations could be utilized to attack these models. In the earlier stages of the experiments we considered only Mask 1 to Mask 4, but later to experiment with the region of attack, we added Mask 5 to Mask 12 given in <ref>. The proposed NUI attack brightens or darkens the image pixels non-uniformly to generate the synthesized test images to fool the CNN models. The core of the proposed attack is the weight of image brightness and darkness. The weight (k) value controls the brightness or darkness added to the test image based on certain patterns.
The proposed attack technique uses several masking strategies to generate different masks (a) for the images of size h× w, where h and w are image height and width, respectively. The created masks are applied to the test images to generate the synthesized test images to fool the CNN models.
In this paper, we experiment using 12 different masks. We analyzed the robustness of CNN models on the Attacks caused by different NUI masks. The formulas utilized to create these masks (a) are given in <ref> with its region of perturbation in the image.
There are a total of 23 different weight values k used in this paper, ranging from -2.2 to +2.2 with a gap of 0.2. It leads to 23 × 12 = 276 experiments for a given model on any dataset.
The masking function, Mask 1, is considered from <cit.>. Mask 2, 3, and 4 are the variations of Mask 1 and are formulated by considering the exploitation of spatial locality. Mask 5 perturbs the image centre up to the centres of each side in the shape of a curved diamond. The effect of Mask 6, 7 and 8 is similar, but with different severity. These masks create a circular perturbation effect in the images. The amount of perturbation is highest for Mask 6 and lowest for Mask 8. Mask 9 and 10 use Mask 1 and negative of the Mask 2 in specific conditions leading to perturbation of the pattern of vertical and horizontal lines, respectively. Mask 11 adds perturbations of Mask 1 2, 3, and 4 in different quadrants. The effect of the Mask 12 is similar to Mask 11, except for the right part of the image which becomes darker instead of brighter.
The algorithm for the proposed NUI attack is illustrated in Algorithm <ref>. The input image (I) is transformed into the perturbed image (I_M_i,k) using the i^th Mask and weight value k. As shown in Table <ref>, 12 NUI Masks are used in this paper. Based on the chosen Mask and weight, the final Mask is computed and added to the input image to generate the attacked image.
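As an illustration of this procedure, the short sketch below applies a single weighted NUI mask to an image. The gradient-shaped mask is only a hypothetical stand-in for Mask 1 (the actual Mask 1 to Mask 12 formulas are those listed in the mask table above), and the [0, 1] image range with a final clipping step is likewise an assumption made for the example.

import numpy as np

def example_gradient_mask(h, w):
    # Hypothetical stand-in for Mask 1: strongest at the left edge and
    # decaying linearly towards the right (the real formulas are in the mask table).
    return np.tile(np.linspace(1.0, 0.0, w), (h, 1))

def nui_attack(image, mask, k):
    # Add the weighted NUI mask to every colour channel of `image`.
    # image: float array (h, w, 3), assumed to lie in [0, 1]; mask: float array (h, w);
    # k: NUI weight, k > 0 brightens, k < 0 darkens, k = 0 leaves the image unchanged.
    return np.clip(image + k * mask[..., None], 0.0, 1.0)

# Sweep the 23 weights used in the experiments (-2.2 to +2.2 in steps of 0.2).
img = np.random.default_rng(0).random((32, 32, 3))
attacked = [nui_attack(img, example_gradient_mask(32, 32), k)
            for k in np.round(np.arange(-2.2, 2.3, 0.2), 1)]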
§.§ Effect of NUI Attacks
The effect of different NUI attacks is illustrated in <ref> using the sample images as to how the brightness, colour, details, appearance, etc. change after applying different NUI masks. Here, the perturbation weight (k) value is different for all columns and is positive, because of which all the images look brighter than their original form. The 1^st column contains the original sample images. The 2^nd to 13^th columns correspond to the images generated using Mask 1 to 12, respectively. As mentioned, the perturbed image is brighter on the left side and the perturbation drops when it goes to the right for Mask 1. The image is bright in general for Mask 2. The images appear bright in the top right corner for Mask 3. The perturbations are focused more in the bottom right corner for the masking function 4. These masking functions are simple and do not change the underlying semantic meaning of the input image, but can provide a good attack success rate.
The effect of a curved diamond can be observed for the Mask 5. The perturbations for Mask function 6, 7, and 8, respectively, produce samples like the reverse of the Mask 5. The images produced using Mask 6 are perturbed with higher intensity values. However, the amount of perturbation is reduced for Mask 7 which is further reduced for Mask 8. Moreover, the attack success increases for Mask 8 without losing the visual perceptibility of the image.
The perturbations caused by Mask 9 and 10 respectively have vertical and horizontal patterns of alternate brightness and darkness. Masks 11 and 12 perturb the images using different masks in different quadrants. Mask 11 adds mask value in each quadrant, while mask 12 adds mask value in the left side quadrants and subtracts in the right side quadrants. We also show the effect on the histogram in Supplementary.
§.§ Proposed Workflow using NUI Attacks
The workflow of the proposed method is illustrated in <ref>. To analyse the robustness of the CNN models against NUI attacks, we trained models on the original datasets and tested them for all NUI masks for all values of (k). Further to analyse the defense capability, the CNN models are trained on the NUI-attacked datasets and again tested.
For training models on perturbed datasets, the NUI perturbation is added to 80% of the training set. We limit the weight factor (k) in the training part to 12 different settings to avoid high bias in the training set towards severe perturbation, i.e., from -1.2 to +1.2 with a gap of 0.2 excluding 0.0 as it is already included in the 20% part of the training set. The number of masks for perturbation during training is reduced to 10 only, excluding Mask 6 and Mask 7 as these are similar to Mask 8. Mask 12 is replaced with the following mask for training:
if(x≤16 and y≤16):a = +Mask 1
if(x≤16 and y>16):a = -Mask 2
if(x>16 and y≤16):a = +Mask 3
if(x>16 and y>16):a = -Mask 4
which subtracts Mask 2 and Mask 4 in the leading diagonal quadrants, respectively, and adds Mask 1 and Mask 3 in the other two quadrants, respectively. This represents the general case for quadrant perturbation.
After being trained on perturbed images, the CNN models not only preserved the original accuracy on unperturbed data but also became robust to NUI attacks.
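A possible realisation of this augmentation step is sketched below: it perturbs a random 80% of the training images with a randomly chosen training mask and a weight drawn from the twelve values in [-1.2, +1.2] (excluding 0.0). The mask arrays are assumed to be precomputed following the mask table, and the [0, 1] image range is again only an assumption of the example.

import numpy as np

TRAIN_WEIGHTS = [round(0.2 * i, 1) for i in range(-6, 7) if i != 0]  # -1.2 ... +1.2, without 0.0

def augment_with_nui(images, masks, attacked_fraction=0.8, seed=0):
    # Return a copy of `images` in which ~80% are NUI-perturbed.
    # images: float array (n, h, w, 3) in [0, 1]; masks: list of (h, w) arrays
    # implementing the ten training masks.
    rng = np.random.default_rng(seed)
    out = images.copy()
    idx = rng.choice(len(images), size=int(attacked_fraction * len(images)), replace=False)
    for i in idx:
        mask = masks[rng.integers(len(masks))]
        k = rng.choice(TRAIN_WEIGHTS)
        out[i] = np.clip(out[i] + k * mask[..., None], 0.0, 1.0)
    return out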
§ EXPERIMENTAL SETTINGS
§.§ Datasets
To examine the impact of the proposed NUI attacks, we conduct the image classification experiments on three benchmark datasets, including CIFAR10 <cit.>, CalTech256 <cit.>, and TinyImageNet <cit.>.
The 60,000 images in the CIFAR10 dataset are equally divided into 10 different categories. Out of 60,000 images 10,000 images are marked as the test set and the rest as the training set.
The 30,607 images in the CalTech256 dataset represent 257 different object categories. 20% of the CalTech256 dataset is utilized for testing, while the rest is used for training. The CalTech256 dataset exhibits a high level of complexity due to its large number of categories and the larger number of instances within each category; it also exhibits high inter-class similarity.
The training set of the TinyImageNet dataset contains 100,000 images and the validation set consists of 10,000 images. The dataset comprises 200 categories which have 500 training images and 50 validation images for each category. It consists of a subset of images from ImageNet, specifically curated for small-scale experiments.
§.§ CNN Architectures Used
We used VGG <cit.>, ResNet <cit.>, MobilenetV3 <cit.> and InceptionV3 <cit.> to demonstrate the effects of the proposed non-uniform illumination attack.
The VGG network is a deep CNN model containing 16 or 19 trainable layers. The principal idea behind the VGG network is to utilize a series of convolutional layers with small filter sizes (3×3) and stack them together to create a deeper network.
For experiments on the CIFAR10 and TinyImageNet datasets, VGG16 is used and for experiments on the CalTech256 dataset, VGG19 is used.
The ResNet model includes the residual connections that allow the flow of gradients during backpropagation effectively.
Deep CNNs utilizing the residual model demonstrate improved convergence, leading to enhanced performance. The ResNet18 model is used with all the datasets for experiments.
MobileNetV3 is a convolutional neural network specifically optimized for mobile phone CPUs through a combination of hardware-aware network architecture search (NAS).
This network has been further refined through several innovative architectural improvements, including integrating complementary search methodologies, developing new efficient nonlinearities suitable for mobile environments and creating efficient network design tailored for mobile applications.
Inception-v3 represents an advanced convolutional neural network architecture within the Inception series, incorporating several enhancements. These include Label Smoothing, factorized 7×7 convolutions, and the integration of an auxiliary classifier to propagate label information to earlier network layers with the implementation of batch normalization within the auxiliary head layers.
The CIFAR10 dataset has been used for the experiments with MobileNetV3-small and InceptionV3.
§.§ Training Settings
All the experiments are performed using the PyTorch framework <cit.>. A batch size of 64 is used for the VGG and ResNet models, 256 for the MobileNet model and 128 for the Inception model. Using the Adam optimizer, the models are trained for 100 epochs. For the first 80 epochs, the learning rate is set at 10^-3 for CIFAR10 and TinyImageNet and at 10^-4 for the CalTech256 dataset, and for the final 20 epochs it is reduced by a factor of 10. The categorical cross-entropy loss function is used as the objective function to measure the dissimilarity between predicted and actual class labels. Batch normalization is used for regularization.
The following data augmentation is used during training: random cropping of size 32, random horizontal flipping, and normalization to zero mean and unit standard deviation. The images are also resized to 32 × 32 resolution for VGG and ResNet models, whereas the MobileNet and Inception models accept images of size 224×224 and 299×299, respectively.
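For reference, the optimisation schedule described above maps onto a few lines of PyTorch. The backbone constructor and the single dummy batch below are placeholders; only the optimizer, the learning rates and the drop by a factor of 10 after epoch 80 are taken from the text.

import torch
from torchvision import models

model = models.resnet18(num_classes=10)                      # placeholder backbone (CIFAR10)
criterion = torch.nn.CrossEntropyLoss()                      # categorical cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # 1e-4 for CalTech256
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80], gamma=0.1)
train_loader = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))]  # dummy stand-in loader

for epoch in range(100):
    for x, y in train_loader:        # in practice, the (possibly NUI-augmented) training loader
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()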
§ EXPERIMENTAL RESULTS AND ANALYSIS
In this section, the qualitative and quantitative results are presented for image classification using VGG and ResNet models on CIFAR10, TinyImageNet and CalTech256 datasets as well as MobileNet and InceptionV3 models on CIFAR10 dataset.
§.§ Qualitative Results
The visual results for a sample image from the CIFAR10 dataset under different NUI attacks are shown using the ResNet18 model in <ref> following the predicted category with the probability of classification. The 1^st image in the 1^st row is an original dog image taken from the CIFAR10 dataset, and the model predicts it as a dog with very high probability. The 2^nd to 7^th images in the 1^st row and the 1^st to 6^th images in the 2^nd row represent samples generated using the 1^st to 12^th mask in the same order along with its predicted category with probability. Different values of NUI weight (k) are used with different masks. Note that when for negative k, the resultant image becomes darker and vice-versa. The images are misclassified with high probability under NUI attacks with 2^nd, 3^rd, 7^th, 8^th, 10^th and 12^th masks. Whereas, the probability of classification to correct class is decreased under other NUI attacks. It is evident from these results that almost all the images are visually perceptible to the original image with some amount of brightness or darkness, however, these images are either misclassified by a trained CNN model or confidence of classification decreases. We refer to the Supplementary materials to observe the impact of the NUI attack on image pixel value distributions.
§.§ Quantitative Results
The goal of this study was to evaluate the robustness of CNN models under NUI attacks on different datasets. After conducting several experiments, we recorded a substantial drop in the accuracy of CNNs on all datasets. <ref> shows the performance of VGG16 over CIFAR10 under different NUI attacks on the test set; similarly, the later figures up to <ref> show the performance curves for different CNNs over different datasets. The plots are reported in blue colour when the models are trained on the original training set and in orange colour on the augmented training set with NUI transformations. Each Figure contains 12 Sub-Figures corresponding to NUI attacks with the 1^st to 12^th masks, in the order of the 1^st row from left to right for the 1^st to 6^th masks and the 2^nd row from left to right for the 7^th to 12^th masks, respectively. The x-axis and y-axis represent different NUI weights (k) and Accuracy (%), respectively. Note that k=0 indicates no attack. From these plots, it is clear that the performance of the CNN models decreases on the NUI-attacked test sets. However, the performance is enhanced by including the NUI attack-based augmentation during training. It is also observed that the accuracy of the CNN models decreases as the weight (k) of the NUI attack moves towards extreme positive or negative values. We can observe that the curve for a mask remains similar for a particular dataset irrespective of the model used, which indicates the generalizability of the proposed NUI attacks across different CNN models. The performance of a particular mask on a dataset also depends on the number of classes: if the number of classes is smaller, the probability of correctly classifying a test image is higher than for a dataset with more classes.
The blue curves show that the CNN models are not robust against the NUI attacks as these models get fooled by the perturbed images. The 6^th, 7^th, 9^th and 10^th masks lead to a very high impact on the performance degradation of the CNN models. The poor performance of the models for Mask 6 is due to severe circular perturbation which leads to the complex generated images. Mask 7 is similar to Mask 6 but with reduced complexity. Still, the complexity of images generated by Mask 7 is very high to fool the CNN models.
Mask 9 and 10 add perturbation as a pattern in the horizontal and vertical directions, respectively. Using these NUI attacks, the images after adding the mask are still visually perceptible, however, the performance of CNN models has significantly dropped. Moreover, only a small value of k can produce a powerful NUI attack with high fooling success using Mask 9 and Mask 10.
The red curves depict that there has been considerable improvement in the performance of the CNN models after being trained on NUI-augmented training data. We exclude Mask 6 and Mask 7 in the training set, hence the improvement after NUI augmentation is low under these attacks on the test set.
<ref> summarizes the percentage reduction in the accuracy of the CNN models under different NUI attacks for k = -1.4 w.r.t. without attack. A high attack success rate is achieved using Mask 6, Mask 7, Mask 9 and Mask 10. TinyImageNet images are more prone to heavy perturbation using NUI attacks as depicted by the highest performance drop among all the datasets. The success rate of attack is higher for datasets for which the number of classes is large as the perturbation creates more confusion in class probabilities.
<ref> summarizes the percentage increase in the accuracy of the CNN models after being trained on the NUI-perturbed dataset. The percentage improvement is calculated from the model's performance under the NUI attack and its performance after being trained on the NUI-perturbed dataset. If a model on a particular dataset shows a higher percentage reduction in <ref>, then in most such cases a higher percentage increase is observed for the same dataset in <ref>. Mask 9 and Mask 10 lead to the highest increment when the models are trained on the NUI-perturbed dataset. The readings also indicate that using the NUI transformation as data augmentation is an effective technique and results in considerable performance improvements on NUI-attacked test sets.
§.§ Analysis
The NUI attack has given a high success rate for all the models (VGG, ResNet, MobileNetV3 and InceptionV3). As mentioned in <ref>, the classification accuracy of all the models decreased by at least 7%, which demonstrates the effectiveness of the attack across various architectures and dataset complexities.
The t-Distributed Stochastic Neighbor Embedding (t-SNE) plots are shown in <ref> and <ref> for the CIFAR10 test set using the InceptionV3 and MobileNetV3-small models, respectively. The t-SNE plots present the effect of the different masks on the discriminative ability of the embedding distributions of the CNN models, which leads to lower classification accuracy.
It can be noticed that the separation between the distribution of the embedding of different classes decreases after applying the NUI attacks leading to mis-classifications. The t-SNE plots of 6^th, 7^th, 9^th, 10^th and 11^th masks show heavy degradation of the separation between the distributions which leads to a huge accuracy drop. In addition to the t-SNE plot, we have provided histograms to better understand the change in data distribution in the Supplementary. The accuracy drops are managed via the proposed defense technique effectively. The defense strategy enhances models' performance on perturbed data and preserves the original accuracy.
<ref> shows at least a 4% increase in the models' accuracy after applying the defense technique. The precision, recall and F1-score metrics, given in the Supplementary, also support the above discussion.
Compared to attack approaches that require a neural network, the proposed NUI attack is swift and data-independent. The challenge with this approach is its fixed nature, which may prove ineffective in certain scenarios requiring an attack technique of a dynamic nature. Testing of such scenarios is out of the scope of this paper. We tested the proposed attack extensively through various evaluation metrics which gives a better understanding of how the attack technique works.
§ CONCLUSION
In this research, we introduce non-uniform illumination (NUI) attacks to study the robustness of the CNN models. The proposed NUI attacks can deceive the CNN models for image classification. The attack is simple and data-independent. It leverages the pixel brightness with spatial information to create the different masks that are included in the original image with a weight factor to generate the perturbed images. The images generated using NUI attacks retain their semantic significance. Through extensive experimentation using VGG and ResNet models on CIFAR10, TinyImageNet, and CalTech256 datasets as well as MobilenetV3-small and InceptionV3 models on CIFAR10 dataset, we observe a significant decline in classification performance across all the NUI-attacked test sets. Notably, several samples that were correctly classified with high confidence in the original test set, were incorrectly classified with high confidence after undergoing the NUI attack. The proposed NUI attack is also utilized as a data augmentation during training as a primary defense mechanism and to make the models resilient against such attacks. We have also observed the effects of the NUI attack on different colour channels through a brief experiment, detailed in Supplementary, which we would like to extend in future as a topic of our next research.
]Akshay Jain was born in Indore, Madhya Pradesh, India in 1997. He completed his Bachelor of Engineering from Jabalpur Engineering College, Jabalpur in Information Technology in 2020. He completed his Master of Technology from the Indian Institute of Information Technology, Allahabad (IIIT-A) in Information Technology in 2023. He worked as a teaching assistant in IIIT-A from 2021 to 2023. He is currently working as a Junior Engineer at Netweb Technologies and he is interested in the field of computer vision.
]Shiv Ram Dubey is with the Indian Institute of Information Technology (IIIT), Allahabad since July 2021, where he is currently the Assistant Professor of Information Technology. He was with IIIT Sri City as Assistant Professor from Dec 2016 to July 2021 and Research Scientist from June 2016 to Dec 2016. He received the PhD degree from IIIT Allahabad in 2016. Before that, from 2012 to 2013, he was a Project Officer at Indian Institute of Technology (IIT), Madras. He was a recipient of several awards including the Best PhD Award in PhD Symposium at IEEE-CICT2017. Dr. Dubey is serving as the Secretary of IEEE Signal Processing Society Uttar Pradesh Chapter. His research interest includes Computer Vision and Deep Learning.
]Satish Kumar Singh is serving at Indian Institute of Information Technology, Allahabad from 2013, and presently working as an Associate Professor in the Department of Information Technology. Dr. Singh is heading the Computer Vision and Biometrics Lab (CVBL) at IIIT Allahabad. His areas of interest include Image Processing, Computer Vision, Biometrics, Deep Learning, and Pattern Recognition. Dr. Singh was the Section Chair IEEE Uttar Pradesh Section (2021-2023) and a member of IEEE India Council (2021). He also served as the Vice-Chair, Operations, Outreach and Strategic Planning of IEEE India Council (2020-2024). Dr. Singh is also the technical committee affiliate of IEEE SPS IVMSP and MMSP. Currently, Dr. Singh is the Chair of IEEE Signal Processing Society Chapter of Uttar Pradesh Section and Associate Editor of IEEE Signal Processing Letters.
]KC Santosh, a highly accomplished AI expert, is the chair of the Department of Computer Science, University of South Dakota. He served the National Institutes of Health as a research fellow. Before that, he worked as a postdoctoral research scientist at the LORIA research centre, Universitè de Lorraine in direct collaboration with industrial partner, ITESOFT, France. He earned his PhD in Computer Science - Artificial Intelligence from INRIA Nancy Grand East Research Centre (France). With funding of over $1.3 million, including a $1 million grant from DEPSCOR (2023) for AI/ML capacity building at USD, he has authored 10 books and published over 240 peer-reviewed research articles. He is an associate editor of multiple prestigious journals such as IEEE Transactions on AI, Int. J of Machine Learning & Cybernetics, and Int. J of Pattern Recognition & Artificial Intelligence. To name a few, Prof. Santosh is the proud recipient of the Cutler Award for Teaching and Research Excellence (USD, 2021), the President's Research Excellence Award (USD, 2019) and the Ignite Award from the U.S. Department of Health & Human Services (HHS, 2014). As the founder of AI programs at USD, he has taken significant strides to increase enrolment in the graduate program, resulting in over 3,000% growth in just three years. His leadership has helped build multiple inter-disciplinary AI/Data Science related academic programs, including collaborations with Biology, Physics, Biomedical Engineering, Sustainability and Business Analytics departments. Prof. Santosh is highly motivated in academic leadership, and his contributions have established USD as a pioneer in AI programs within the state of SD. More info. https://kc-santosh.org/https://kc-santosh.org/.
]Bidyut Baran Chaudhuri
received the Ph.D. degree from IIT Kanpur, in 1980. He was a Leverhulme Postdoctoral Fellow with Queen’s University, U.K., from 1981 to 1982. He joined the Indian Statistical Institute, in 1978, where he worked as an INAE Distinguished Professor and a J C Bose Fellow at Computer Vision and Pattern Recognition Unit of Indian Statistical Institute. He is now affiliated to Techno India University, Kolkata as Pro-Vice Chancellor (Academic). His research interests include Pattern Recognition, Image Processing, Computer Vision, Natural Language Processing (NLP), Signal processing, Digital Document Processing, Deep learning etc. He pioneered the first workable OCR system for printed Indian scripts Bangla, Assamese and Devnagari. He also developed computerized Bharati Braille system with speech synthesizer and has done statistical analysis of Indian language. He has published about 425 research papers in international journals and conference proceedings. Also, he has authored/edited seven books in these fields. Prof. Chaudhuri received Leverhulme fellowship award, Sir J. C. Bose Memorial Award, M. N. Saha Memorial Award, Homi Bhabha Fellowship, Dr. Vikram Sarabhai Research Award, C. Achuta Menon Award, Homi Bhabha Award: Applied Sciences, Ram Lal Wadhwa Gold Medal, Jawaharlal Nehru Fellowship, J C Bose fellowship, Om Prakash Bhasin Award etc. Prof. Chaudhuri is the associate editor of three international journals and a fellow of INSA, NASI, INAE, IAPR, The World Academy of Sciences (TWAS) and life fellow of IEEE (2015). He acted as General Chair and Technical Co-chair at various International Conferences.
§ SUPPLEMENTARY
§.§ Effect of NUI Attacks
<ref> shows the effect of various masks on the image pixel value distribution using histograms.
The 1^st column contains the histograms corresponding to the original images used in Figure 2 of the main paper. Similarly, the later columns from left to right contain the histograms for the images after the NUI attack by Mask 1 to Mask 12, respectively.
The change in the distribution of the pixel values can be observed. We generate all the images using positive values of k, thus the number of pixels having higher pixel values has increased causing the histogram to be right-shifted. Masks that cause both brightness and darkness in the image generate histograms equally distributed throughout the axis.
Following Figure 2 of the main paper and Figure <ref> of the Supplementary, we observe that although the histograms contain markedly brighter pixels, the semantic meaning remains intact and the histograms are similar for the majority of the images, which supports the generality of the NUI attack technique.
§.§ Quantitative Analysis
Fig. <ref> and Fig. <ref> show the comparison of precision, recall and F1-score before and after the model is trained on perturbed data. These results also support a trend similar to that observed for the accuracy reported in the main paper.
§.§ Extension – Effect of NUI Attack on Color Channels
We also test the effect of the proposed NUI attack on specific channels of RGB images. For this experiment, the VGG16 model is used on the CIFAR10 dataset with a NUI attack using Mask 1 on the test set. Six RGB experimental settings are tested for different values of k, including perturbations applied to R, G, B, RG, RB, GB, where R, G and B represent the Red, Green and Blue channels, respectively. The results are illustrated in <ref>. The NUI attack shows a high impact on the combination of the Red and Blue channels as depicted in the 5^th plot. The effect on a sample image is shown after the NUI attack using Mask 1 with k=1.8 in <ref>. All the images are perceptible and preserve the semantic meaning.
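The channel-selective variant used in this experiment only requires restricting the mask addition to the chosen channels. A minimal sketch, with the same assumed image and mask conventions as the earlier attack example, is given below; channel indices 0, 1 and 2 correspond to R, G and B.

import numpy as np

def nui_attack_channels(image, mask, k, channels=(0, 2)):
    # Apply the weighted NUI mask only to the selected colour channels;
    # the default (0, 2) corresponds to the RB setting discussed above.
    out = np.array(image, dtype=float, copy=True)
    for c in channels:
        out[..., c] += k * mask
    return np.clip(out, 0.0, 1.0)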
|
http://arxiv.org/abs/2409.03281v1 | 20240905064543 | Extended Drag-Based Model for better predicting the evolution of Coronal Mass Ejections | [
"Mattia Rossi",
"Sabrina Guastavino",
"Michele Piana",
"Anna Maria Massone"
] | astro-ph.SR | [
"astro-ph.SR",
"85-10"
] |
Extended Drag-Based Model for better predicting the evolution of Coronal Mass Ejections
Mattia Rossi^1, Sabrina Guastavino^1,2, Michele Piana^1,2 and Anna Maria Massone^1
^1MIDA, Dipartimento di Matematica Università di Genova, via Dodecaneso 35 16146 Genova, Italy
^2Osservatorio Astrofisico di Torino, Istituto Nazionale di Astrofisica, Strada Osservatorio 20 10025, Pino Torinese, Italy
§ ABSTRACT
The solar wind drag-based model is a widely used framework for predicting the propagation of Coronal Mass Ejections (CMEs) through interplanetary space. This model primarily considers the aerodynamic drag exerted by the solar wind on CMEs. However, factors like magnetic forces, pressure gradients, and the internal dynamics within CMEs justify the need of introducing an additional small-scale acceleration term in the game. Indeed, by accounting for this extra acceleration, the extended drag-based model is shown to offer improved accuracy in describing the evolution of CMEs through the heliosphere and, in turn, in forecasting CME trajectories and arrival times at Earth. This enhancement is crucial for better predicting Space Weather events and mitigating their potential impacts on space-based and terrestrial technologies.
Keywords: coronal mass ejections – interplanetary propagation – drag-based model – accelerated dynamics – spacecraft alignment
§ INTRODUCTION
Coronal Mass Ejections <cit.> are massive outbursts of magnetized plasma from the solar corona into the interplanetary space. When directed toward Earth, they cause severe geomagnetic disturbances <cit.> and can pose a persistent hazard as harmful radiation to space and ground-based facilities, and human health. Therefore, predicting the CMEs' arrival time and impact speed to Earth is essential in the context of the Space Weather forecasting science <cit.>.
One of the most popular and commonly used approaches to predict the transit time of a CME and its speed to Earth is known as the Drag-Based Model (DBM) <cit.>. This model assumes that the kinematics of the CME is governed by its dynamic interaction with the Parker spiral-shaped interplanetary structures (i.e., high- and low-speed streams) where it propagates, via the magnetohydrodynamic (MHD) equivalent of the aerodynamic drag force. The model, which mathematically reduces to a rather simple equation of motion, thus essentially predicts that the speed of the CME will balance that of the ambient solar wind in which it is expanding. Recent efforts have also been devoted to incorporating the physics of aerodynamic drag into methodologies based on Artificial Intelligence (AI) techniques, paving the way for innovative (hybrid) approaches known as physics-driven AI models <cit.>.
Although the DBM has been subject to continuous refinements <cit.> and is now a well-established approach, the several drawbacks associated with its intrinsic approximations are evident. Indeed, it is clear that the complex dynamical interaction of the CME with its surroundings cannot be properly described solely by the drag force. Other important physical processes are certainly at play in the evolution of CMEs: these include CME rotation, reconfiguration, deformation, deflection, erosion along with any other magnetic reconnection-driven processes <cit.>, resulting in additional accelerations beyond that predicted by the trivial DBM acting on the CME as it travels through the heliosphere.
<cit.> recently pointed out that the DBM is often quite ineffective in describing the proper propagation of CMEs in interplanetary space. In this study, a CME was observed by two radially aligned probes separated by a distance of just 0.13 AU. Although the model predicted that the CME would decelerate, the velocity profiles measured by the two spacecraft instead revealed a residual acceleration, pointing to an additional force to the drag that overpowered its braking effect, and thus resulting in an increase in velocity. This work presents a more refined and realistic drag-based model, with the aim to overcome the limitations of current versions by introducing into the equation of motion describing the dynamic interaction of the CME with the solar wind an extra acceleration, representing any other forces involved.
After obtaining and discussing the mathematical solutions of the resulting new equations of motion (<ref>.), we apply the updated model to the observation of the same CME already studied in <cit.>, showing that it satisfactorily describes its dynamic evolution and thus represents a significant step forward in the prediction of CME travel times in Space Weather studies (<ref>.). A tentative interpretation of which physical process(es) the additional acceleration is due to is given in <ref>., where our conclusions are also offered. The computational details needed to derive the formulae in <ref>. are summarized in the Appendix.
§ THE EXTENDED DRAG-BASED MODEL
Let us consider a generalization of the DBM where the total net acceleration acting on the CME in the interplanetary phase is made of two contributions:
r̈=a_drag(r,t)+a_extra(r) ,
where a_drag= -γ r^-α|ṙ - w(r,t)|^k(ṙ-w(r,t)), k∈ 2ℕ+1, and a_extra=ar^-β, for appropriate exponents α,β>0 and coefficients γ>0, a≠0; r=r(t) and v(t)=ṙ(t) are the CME’s instantaneous radial position and speed (typically the CME front distance and front speed); w(r,t) is the background solar wind speed given as a known function of position and time. Physically, the model describes the same
dynamics of the DBM perturbed by an extra (e.g., magneto-gravitational) force acting on the CME along the motion, altogether exponentially damped over distance.
In general, equation (<ref>) does not admit an analytical solution, which hampers the computation of a_extra from time and space measurements, i.e., by solving a boundary value problem. A closed-form time solution of (<ref>) is possible by assuming α=β=0 and a constant w(r,t)≡ w, for any fixed odd integer k. Therefore, we introduce the Extended Drag-Based Model (EDBM hereafter) as the equation
r̈=-γ|ṙ-w|(ṙ-w)+a ,
in which k=1 and which corresponds to a straightforward perturbation of the simplest form of the DBM studied in <cit.>. The sign of a≠0 in (<ref>) establishes the form of the solution and the properties of the associated dynamical system.
We start from the equilibria, i.e.:
* if a<0, v(t)≡ w-√(-a/γ) is an asymptotically stable (in the future) constant solution of (<ref>); thus, for v_0>w-√(-a/γ) (v_0<w-√(-a/γ)) the CME monotonically decelerates (accelerates) for positive times;
* if a>0, v(t)≡ w+√(a/γ) is an asymptotically stable (in the future) constant solution of (<ref>); thus, for v_0>w+√(a/γ) (v_0<w+√(a/γ)) the CME monotonically decelerates (accelerates) for positive times.
These assertions clarify the role of the acceleration term a≠0: it shifts the asymptotic solution from v=w (standard DBM) to v=w±√(± a/γ) (EDBM). In contrast to the case a=0, this means that for a>0 (a<0) initial speeds below (above) the wind speed can increase (decrease) up (down) to w and beyond. A schematic of the dynamics around the equilibrium points is provided in Figure <ref>.
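For completeness, these asymptotic speeds follow directly from imposing r̈=0 in equation (<ref>); nothing beyond that equation is used:

-γ|v-w|(v-w)+a=0 ⟹ |v-w|(v-w)=a/γ ⟹ v=w+√(a/γ) for a>0 (so that v-w>0), and v=w-√(-a/γ) for a<0 (so that v-w<0),

which are precisely the constant solutions listed above.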
Equation (<ref>) can be integrated from 0 to t>0 to obtain explicit formulae for v(t) and r(t), given the initial conditions v(0)=v_0, r(0)=r_0. Depending on the choice of v_0, the solutions may be (differentiably) piecewise-defined for positive or negative values of a due to the presence of the absolute value term |ṙ-w|, and present obvious symmetries in the form. Specifically,
Case a>0.
* if v_0≤ w, then
v(t)=
w+√(a/γ) tan(√(aγ)t-σ_+) , for 0≤ t≤σ_+/√(aγ)
w+√(a/γ) (e^{2(√(aγ)t-σ_+)}-1)/(e^{2(√(aγ)t-σ_+)}+1) , for t>σ_+/√(aγ)
,
r(t)=
wt+r_0-(1/γ)ln(S_+cos(√(aγ)t-σ_+)) , for 0≤ t≤σ_+/√(aγ)
(w-√(a/γ))t+r_0+(1/γ)(ln((e^{2(√(aγ)t-σ_+)}+1)/(2S_+))+σ_+) , for t>σ_+/√(aγ)
,
where σ_+ := arctan(√(γ/a)(w-v_0)) and S_+ := √((a+γ(v_0-w)^2)/a);
* if v_0>w, then
v(t)=
w+√(a/γ) (A_+e^{2√(aγ)t}+B_+)/(A_+e^{2√(aγ)t}-B_+) , for t≥ 0 ,
r(t)=(w-√(a/γ))t+r_0
+(1/γ)ln((A_+e^{2√(aγ)t}-B_+)/(2√(a))) , for t≥ 0 ,
where A_+ := √(γ)(v_0-w)+√(a) and B_+ := √(γ)(v_0-w)-√(a).
Case a<0.
* if v_0≤ w, then
v(t)=w+√(-a/γ) (A_-e^{-2√(-aγ)t}+B_-)/(A_-e^{-2√(-aγ)t}-B_-) , for t≥0 ,
r(t)=(w-√(-a/γ))t+r_0-(1/γ)ln((A_-e^{-2√(-aγ)t}-B_-)/(2√(-a))) , for t≥0 ,
where A_- := √(γ)(v_0-w)+√(-a) and B_- := √(γ)(v_0-w)-√(-a);
* if v_0> w, then
v(t)= w-√(-a/γ) tan(√(-aγ)t-σ_-) , for 0≤ t≤σ_-/√(-aγ)
w+√(-a/γ) (e^{-2(√(-aγ)t-σ_-)}-1)/(e^{-2(√(-aγ)t-σ_-)}+1) , for t>σ_-/√(-aγ)
,
r(t)= wt+r_0+(1/γ)ln(S_-cos(√(-aγ)t-σ_-)) , for 0≤ t≤σ_-/√(-aγ)
(w-√(-a/γ))t+r_0-(1/γ)(ln((e^{-2(√(-aγ)t-σ_-)}+1)/(2S_-))-σ_-) , for t>σ_-/√(-aγ)
,
where σ_- := arctan(√(-γ/a)(v_0-w)) and S_- := √((a-γ(v_0-w)^2)/a).
The expressions for the CME's speed reflect the dynamical behavior of Figure <ref>: in (<ref>), v(t)≤ w in the former expression while v(t)>w in the latter, and v→ w+√(a/γ) as t→+∞; in (<ref>), v(t)>w with v→ w+√(a/γ) as t→+∞, and v(t) is never smaller than or equal to w for positive times; in (<ref>), v(t)≤ w with v→ w-√(-a/γ) as t→+∞, and v(t) is never larger than or equal to w for positive times; finally, in (<ref>), v(t)≥ w in the former expression while v(t)<w in the latter, and v→ w-√(-a/γ) as t→+∞.
The derivation of (<ref>)–(<ref>) requires standard calculus techniques, whose details are given in Appendix <ref>.
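As a practical cross-check of these expressions, the speed profile for a>0 can be transcribed into a short numerical routine; the sketch below follows the v(t) formulae above verbatim (consistent units are assumed, e.g. s, km/s, km^-1 and km/s^2) and is reused for the root-finding step of the next section.

import numpy as np

def edbm_speed(t, v0, w, gamma, a):
    # EDBM speed v(t) for a > 0, transcribed from the closed-form solutions above;
    # t in s, v0 and w in km/s, gamma in km^-1, a in km/s^2.
    if a <= 0:
        raise ValueError("this sketch only covers the a > 0 branch")
    kappa = np.sqrt(a * gamma)                          # inverse time scale
    if v0 <= w:                                         # tan branch, then tanh-like branch
        sigma = np.arctan(np.sqrt(gamma / a) * (w - v0))
        if t <= sigma / kappa:
            return w + np.sqrt(a / gamma) * np.tan(kappa * t - sigma)
        x = kappa * t - sigma
        return w + np.sqrt(a / gamma) * (np.exp(2 * x) - 1) / (np.exp(2 * x) + 1)
    A = np.sqrt(gamma) * (v0 - w) + np.sqrt(a)
    B = np.sqrt(gamma) * (v0 - w) - np.sqrt(a)
    e = np.exp(2 * kappa * t)
    return w + np.sqrt(a / gamma) * (A * e + B) / (A * e - B)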
§ VALIDATION OF THE EDBM: THE NOVEMBER 3RD – 5TH 2021 EVENT
As discussed in <ref>., the closely spaced SolO-Wind detections of a CME of early November 2021 provided a reliable test-bed to assess the effectiveness of the EDBM. Indeed, in agreement with the analysis of <cit.>, the wind speed profiles measured by SolO and Wind (Figure <ref>) suggested an acceleration of the CME Magnetic Cloud (MC) front from SolO to Wind (bottom panel) rather than the expected deceleration due to the drag force induced by the background solar wind. Although the physical reasons for this local behavior remain unclear (plausible interpretations are discussed in <ref>.), in the following we applied the extended model described in <ref>. against data collected at SolO and Wind locations r_SolO and r_Wind to estimate the additive acceleration a>0 between the two instruments. Specifically, given the measured MC front speeds v_SolO,v_Wind at times t_SolO,t_Wind, respectively, we focused on the difference between the mean acceleration
a_mean=Δ v/Δ t=(v_Wind-v_SolO)/(t_Wind-t_SolO) ,
which is an indicator of the approximate total measured acceleration exerted on the CME between the two spacecraft, and the model-dependent acceleration contributions a_drag(SolO)+a and a_drag(Wind)+a, where
a_drag(SolO)=-γ|v_SolO-w|(v_SolO-w) ,
a_drag(Wind)=-γ|v_Wind-w|(v_Wind-w) .
The main task, therefore, was to determine, from the solutions v(t),r(t) in <ref>., the values of the extra-acceleration term a that are compatible with the set of boundary values
(v_SolO,v_Wind,r_SolO,r_Wind)=(690.86 km/s,705.97 km/s,0.85 AU,0.98 AU) ,
obtained from the data time series at initial time t_SolO=0 and final time t_Wind=17820 s, and the parameters (γ,w). More specifically, we considered several experiments by choosing w∈[400,800] km/s with incremental step Δ w=50 km/s, and we used the same value γ=0.24×10^-7 km^-1 as in <cit.>, compatible with the CME erupted on November 2nd 2021 at 02:48 UT detected by the Solar and Heliospheric Observatory (SOHO) LASCO C2 coronagraph (see, e.g., <cit.>). Through a standard root-finding Newton-Raphson method <cit.>, for every w one has to search for the solution(s) (if any) of
f_v(a) := v(a;t_SolO,t_Wind,v_SolO,w,γ)-v_Wind=0
or
f_r(a) := r(a;t_SolO,t_Wind,v_SolO,r_SolO,w,γ)-r_Wind=0
using formulae (<ref>)–(<ref>) (case a>0). We initialized the root-finding algorithm with an initial guess of approximately the same order of magnitude as |a_drag(SolO)| and |a_drag(Wind)|, and iterated until convergence to a local positive value (had the scheme not converged, we would have set a=0).
For the sake of simplicity, we applied this scheme to
(<ref>) and, since formula (<ref>) is case-defined and the time intervals depend on the unknown a, we eventually checked that for each experiment the corresponding time condition was fulfilled once a was found (had the time condition not been satisfied, we would have rejected the solution).
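A minimal version of this procedure, reusing the edbm_speed() routine sketched at the end of the previous section together with SciPy's derivative-free newton (secant) solver, is shown below for a single wind speed. The numerical values are the boundary values and γ quoted above, w = 700 km/s is one of the tested wind speeds, and the initial guess follows the order-of-magnitude prescription just described.

from scipy.optimize import newton

v_solo, v_wind = 690.86, 705.97        # km/s, MC front speeds at SolO and Wind
dt = 17820.0                           # s, t_Wind - t_SolO
gamma = 0.24e-7                        # km^-1
w = 700.0                              # km/s, one of the tested wind speeds

a_mean = (v_wind - v_solo) / dt        # mean measured acceleration, in km/s^2
drag = lambda v: -gamma * abs(v - w) * (v - w)

# f_v(a) = v(a; t_SolO, t_Wind, v_SolO, w, gamma) - v_Wind
f_v = lambda a: edbm_speed(dt, v_solo, w, gamma, a) - v_wind

a_extra = newton(f_v, x0=abs(drag(v_solo)))       # initial guess ~ |a_drag(SolO)|
print(a_extra, drag(v_solo) + a_extra, drag(v_wind) + a_extra, a_mean)   # all in km/s^2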
Figure <ref> contains the results of this analysis. Specifically, in the top left panel a positive extra-acceleration a was obtained for each choice of w (cyan curve), as opposed to a drag deceleration at SolO and Wind until w=700 km/s (orange and green curves). The sum of the corresponding contributions provided two profiles that symmetrically fit the constant value a_mean in (<ref>) with a notable degree of accuracy, independently of the ambient solar wind (from the magnification in the top right panel, the maximum error committed is about 0.1 m/s^2 attained at w=400 km/s). Furthermore, it is worth mentioning the optimality reached at w=700 km/s, with almost an exact match between the three accelerations: indeed, this is the value for the solar wind closest to v_SolO and v_Wind.
The bottom panels of the same figure describe the outcomes of two further tests. First, we generated two sets containing ten values of the extra-acceleration a computed for two sets of ten random realizations of the initial speed in the range [v_SolO - 50,v_SolO + 10 ] km/s and of the final speed in the range [v_Wind-10,v_Wind+50] km/s, respectively (the reason for this choice of ranges is two-fold: it guarantees that v_SolO<v_Wind, and a maximum error of 50 km/s is plausible while accounting for the uncertainty on the temporal location of the MC boundary). The left panel contains average values ⟨ a⟩ and the corresponding standard deviations σ_v(SolO),σ_v(Wind) computed over the two sets (these standard deviations stabilize after ten random realizations of the inital/final speeds). Note that σ_v(SolO)≈σ_v(Wind)≈ 1 m/s^2 independently of w, and we coherently re-obtained the best agreement between the two profiles for w=700 km/s. Second, in the bottom right panel of Figure <ref>, we computed the absolute error ε=|f_r(a)| from (<ref>) at Wind location for the solutions a obtained as in the top left panel using (<ref>). The overall error as a function of w did not exceed ε=0.046805 AU (relative error ≈ 5%), attained at w=700 km/s. Note that for this value of the wind speed, we found, at the same time, the best outcome as far as a is concerned, though the largest error on r. This suggests to rely on a trade-off strategy when fitting the real data either with the v(t) model (equation (<ref>)) or the r(t) model (equation (<ref>)).
In this respect, we infer that the EDBM can accurately describe the dynamics of a vast sample of interplanetary CMEs, especially of those excluded by the simplest form of the DBM, like the ones propelled beyond the solar wind speed (case with w=700 km/s in Figure <ref>). Indeed, when w=700 km/s is assumed, the intermediate condition v_SolO<w<v_Wind holds, which cannot be modelled using the classical DBM (see <ref>).
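The random-realization test of the boundary speeds described above can be sketched as follows, reusing the Newton-Raphson routine from the earlier sketch; make_f, standing for the construction of f_v from a perturbed boundary speed, is a hypothetical helper and not part of the original code.

import numpy as np

def acceleration_statistics(make_f, v_low, v_high, a_guess, n=10, seed=0):
    """Mean and standard deviation of a over n random realizations of a boundary
    speed drawn uniformly in [v_low, v_high] km/s."""
    rng = np.random.default_rng(seed)
    samples = [solve_extra_acceleration(make_f(v), a_guess)
               for v in rng.uniform(v_low, v_high, size=n)]
    return np.mean(samples), np.std(samples)

# e.g. perturbing the initial (SolO) speed within [v_SolO - 50, v_SolO + 10] km/s:
# mean_a, sigma_a = acceleration_statistics(make_f_solo, v_solo - 50.0, v_solo + 10.0, a0)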
§ DISCUSSION AND CONCLUSIONS
As CMEs travel through interplanetary space, they can experience residual acceleration during their expansion due to several factors beyond solar wind drag. These include:
* Magnetic forces: CMEs are highly magnetized plasma structures, carrying their own magnetic field. As they expand into interplanetary space, their magnetic field interacts with the Sun’s Interplanetary Magnetic Field (IMF). This interaction can generate magnetic forces that can lead to residual acceleration, depending on the alignment and strength of the magnetic fields. The magnetic pressure from the Sun’s field, which decreases with distance, may provide a residual push on the CME as it expands.
* Pressure gradients: as CMEs move away from the Sun, they encounter regions of lower density and pressure. The difference between the internal pressure of the CME and the external pressure of the surrounding solar wind can cause the CME to continue expanding and accelerate. If the internal pressure of the CME remains higher than the external pressure for an extended period, this imbalance can drive residual acceleration during the CME’s expansion.
* Internal magnetic reconfiguration: the internal dynamics of a CME, including its magnetic tension forces and plasma flows, can also contribute to residual acceleration. Indeed, CMEs contain complex magnetic structures that can undergo reconfiguration or magnetic reconnection as they expand. These internal processes can release energy, contributing to the acceleration of the CME. For example, if magnetic loops within the CME reconnect, the release of magnetic energy could provide a push that accelerates the CME further into space.
* Gravitational forces: although relatively weak at large distances from the Sun, gravitational forces from the Sun can still play a role. Near the Sun, gravity decelerates the CME, but as it moves farther away, the influence of gravity decreases. If the CME has not yet reached a terminal velocity, the reduction in gravitational influence can result in a relative acceleration as the opposing force weakens.
* Plasma and magnetic pressure balance: the expansion of the CME involves the balance between plasma pressure and magnetic pressure within the CME and in the surrounding solar wind. As the CME expands and its internal pressure decreases, the balance between these pressures can change, leading to further acceleration. If the magnetic pressure within the CME remains relatively high, it could continue to push the CME outward.
In general, these factors certainly contribute to the complex dynamics of CMEs as they travel through space, influencing their speed and trajectory beyond the initial influence of the solar wind. More specifically, the present study showed that the combination of these effects can be modelled by an extra-acceleration term that, when added to the drag force, contributes to explain the observations of a CME performed by SolO and Wind much more reliably than the standard DBM.
Understanding the processes itemized above is essential for predicting the behavior of CMEs and their potential impact on space weather and Earth’s environment. Disentangling which of these processes is at work in the evolution of the CME under study requires further analysis, which is beyond the scope of the present work and is deferred to a future paper. Finally, it is also worth noting the interesting possibility of combining this extended drag-based model with the neural network developed by <cit.>, so as to also refine their AI-based model and potentially make it even more predictive.
§ ACKNOWLEDGMENTS
SG was supported by the Programma Operativo Nazionale (PON) “Ricerca e Innovazione” 2014–2020. All authors acknowledge the support of the Fondazione Compagnia di San Paolo within the framework of the Artificial Intelligence Call for Proposals, AIxtreme project (ID Rol: 71708). AMM is also grateful to the HORIZON Europe ARCAFF Project, Grant No. 101082164. SG, MP and AMM are also grateful to the Gruppo Nazionale per il Calcolo Scientifico - Istituto Nazionale di Alta Matematica (GNCS - INdAM). MR is also grateful to the Gruppo Nazionale per la Fisica Matematica - Istituto Nazionale di Alta Matematica (GNFM - INdAM).
§ APPENDIX
§.§ Computation of the solutions of the EDBM
§.§.§ v_0,v≤ w
Assume to integrate (<ref>) in [0,t] such that v_0,v(t)≤ w:
∫_v_0^vdv'/a+γ(v'-w)^2=∫_0^tdt' .
Distinguishing between a>0 and a<0, we obtain two different primitives for the left-hand side and get:
t=
1/√(aγ)(arctan(√(γ/a)(v-w))-arctan(√(γ/a)(v_0-w))) , a>0
-1/2√(-aγ)ln((√(γ)(v-w)+√(-a))(√(γ)(v_0-w)-√(-a))/(√(γ)(v_0-w)+√(-a))(√(γ)(v-w)-√(-a))) , a<0
;
now solving for v in both the expressions and setting v(t)≤ w yield formulae (<ref>) in the case 0≤ t≤arctan(√(γ/a)(w-v_0))/√(aγ) and (<ref>).
As regards r(t), for a>0 a second integration provides
r(t) =∫_0^t v(t')dt'=wt+r_0+√(a/γ)∫_0^ttan(√(aγ)t'+arctan(√(γ/a)(v_0-w)))dt'
=wt+r_0-1/γln(|cos(√(aγ)t+arctan(√(γ/a)(v_0-w)))|√(a+γ(v_0-w)^2/a)) ,
using |cos x|=1/√(1+tan^2x). In the interval [0,arctan(√(γ/a)(w-v_0))/√(aγ)] the cosine is positive, so we can remove the absolute value and obtain equation (<ref>) (first case).
For a<0, we conveniently set A ≡√(γ)(v_0-w)+√(-a), B ≡√(γ)(v_0-w)-√(-a) and C ≡ -2√(-aγ). We have
r(t)=∫_0^tv(t')dt'=wt+∫_0^tAe^Ct'+B/Ae^Ct'-B√(-a/γ)dt'+r_0=(w-√(-a/γ))t+r_0+2/C√(-a/γ)ln|Ae^Ct-B/A-B| ,
where the integral is first computed by splitting the fraction as
Ae^Ct'+B/Ae^Ct'-B=1+2B/Ae^Ct'-B ,
and then performing the two subsequent (monotonic) changes of variable u=Aexp(Ct')-B and U=1+B/u. Again, we can disregard the absolute value for t≥0, and replacing back the values of A,B,C we get (<ref>).
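As a numerical sanity check of the closed-form expressions above, one may integrate the underlying equation of motion dv/dt = a + γ(v-w)^2 directly and compare with the analytic v(t) for a>0; the parameter values in the following sketch are arbitrary illustrative numbers (km, s units), not those of the studied event.

import numpy as np
from scipy.integrate import solve_ivp

gamma, w, v0, a = 0.24e-7, 700.0, 400.0, 2.0e-3   # [1/km], [km/s], [km/s], [km/s^2]

def rhs(t, v):
    return a + gamma * (v - w) ** 2                # EDBM right-hand side for v <= w

def v_analytic(t):
    phi0 = np.arctan(np.sqrt(gamma / a) * (v0 - w))
    return w + np.sqrt(a / gamma) * np.tan(np.sqrt(a * gamma) * t + phi0)

t_star = np.arctan(np.sqrt(gamma / a) * (w - v0)) / np.sqrt(a * gamma)  # time to reach w
t_eval = np.linspace(0.0, 0.9 * t_star, 50)
num = solve_ivp(rhs, (0.0, t_eval[-1]), [v0], t_eval=t_eval, rtol=1e-10, atol=1e-10)
print(np.max(np.abs(num.y[0] - v_analytic(t_eval))))   # agreement at the solver tolerance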
§.§.§ v_0,v≥ w
The procedure is the same as in Appendix <ref>. Since now the integration of (<ref>) reads
∫_v_0^vdv'/a-γ(v'-w)^2=∫_0^tdt' ,
formulas derived for v(t) and r(t) are simply swapped for a≶0. Upon substituting a↦-a, √(aγ)↦-√(-aγ) or the other way around depending on the sign of a, an analogous argument for time intervals, absolute values, and a corresponding re-definition of constants A,B,C hold. This leads to equations (<ref>) (a>0), (<ref>) (first case, a<0) for v(t), and (<ref>) (a>0), (<ref>) (first case, a<0) for r(t).
§.§.§ v_0<w<v
This time the integration of the EDBM gives rise to two contributions:
∫_v_0^wdv'/a+γ(v'-w)^2+∫_w^vdv'/a-γ(v'-w)^2=t ;
in addition, from Figure <ref>, the forward dynamics rules out the case a<0 (it is possible only backward in time) and requires t>t_*≡arctan(√(γ/a)(w-v_0))/√(aγ) to arrive at v(t)>w (cf. Appendix <ref>). Then, we have
∫_w^vdv'/a-γ(v'-w)^2=t-t_* ,
which is the case of Appendix <ref> with lower limit of integration equal to w. So, upon solving for v, we obtain expression (<ref>) in the case t>t_*.
Concerning r(t), we need to integrate equation (<ref>) (second case) from t_* to t:
r(t)=w(t-t_*)+∫_t_*^tAe^Ct'+B/Ae^Ct'-B√(a/γ)dt'+r_* ,
with r_*=r(t_*), A ≡exp(2arctan(√(γ/a)(v_0-w))), B ≡ -1, C ≡ 2√(aγ). This relationship is formally identical to the one of r(t) in Appendix <ref>, case a<0. We find
r(t)=(w-√(a/γ))(t-t_*)+r_*+1/γln(Ae^Ct-B/Ae^Ct_*-B) .
Lastly, we determine r_* by enforcing continuity at t=t_* with the former expression in (<ref>):
r_*=wt_*+r_0-1/γln√(a+γ(v_0-w)^2/a) ;
hence, replacing back in the expression of r(t), we retrieve the latter of (<ref>).
§.§.§ v<w<v_0
Following Appendix <ref>, an analogous reasoning is applied in the case a<0. The corresponding sign adaptation of quantities involving a (see Appendix <ref>) and the continuity requirement with the first relationship of equation (<ref>) produce formulae (<ref>), (<ref>) for t>arctan(√(-γ/a)(v_0-w))/√(-aγ).
aasjournal
|
http://arxiv.org/abs/2409.03062v1 | 20240904202337 | MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation | [
"Shehan Perera",
"Yunus Erzurumlu",
"Deepak Gulati",
"Alper Yilmaz"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
MobileUNETR
Perera et al.
Photogrammetric Computer Vision Lab, The Ohio State University
Wexner Medical Center, The Ohio State University
{perera.27, yilmaz.15, erzurumlu.1}@osu.edu
[email protected]
MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation
Shehan Perera10009-0005-3831-0404 Yunus Erzurumlu10009-0006-5798-5842 Deepak Gulati20000-0003-3374-5992 Alper Yilmaz10000-0003-0755-2628
September 9, 2024
============================================================================================================================================
§ ABSTRACT
Skin cancer segmentation poses a significant challenge in medical image analysis. Numerous existing solutions, predominantly CNN-based, face issues related to a lack of global contextual understanding. Alternatively, some approaches resort to large-scale Transformer models to bridge the global contextual gaps, but at the expense of model size and computational complexity. Finally, many Transformer-based approaches rely primarily on CNN-based decoders, overlooking the benefits of Transformer-based decoding models. Recognizing these limitations, we address the need for efficient, lightweight solutions by introducing MobileUNETR, which aims to overcome the performance constraints associated with both CNNs and Transformers while minimizing model size, presenting a promising stride towards efficient image segmentation. MobileUNETR has 3 main features. 1) MobileUNETR comprises a lightweight hybrid CNN-Transformer encoder to help balance local and global contextual feature extraction in an efficient manner; 2) A novel hybrid decoder that simultaneously utilizes low-level and global features at different resolutions within the decoding stage for accurate mask generation; 3) surpassing large and complex architectures, MobileUNETR achieves superior performance with 3 million parameters and a computational complexity of 1.3 GFLOP, resulting in 10x and 23x reductions in parameters and FLOPS, respectively. Extensive experiments have been conducted to validate the effectiveness of our proposed method on four publicly available skin lesion segmentation datasets, including the ISIC 2016, ISIC 2017, ISIC 2018, and PH2 datasets. The code will be publicly available at: https://github.com/OSUPCVLab/MobileUNETR.git.
§ INTRODUCTION
Skin cancer, among the most prevalent and rapidly increasing forms of cancer worldwide, poses a significant global health challenge <cit.>. Given the various forms of skin cancer that appear across patients and the different levels of severity, accurately identifying and categorizing skin lesions becomes a complex task. One of the primary difficulties in diagnosing this form of cancer lies in visual inspection of the lesions. The subjective nature of the visual process, influenced by factors such as lighting conditions, individual expertise, and the inherent variability in the way skin cancer presents itself in different patients, makes visual categorization a difficult task. To improve diagnostic precision, dermatologists use dermoscopy, a non-invasive technique for skin surface microscopy. Dermoscopy provides physicians with high-resolution images of the affected skin, allowing a closer examination of the characteristics of the lesion <cit.>. Although this advancement has undoubtedly improved the accuracy of human visual analysis, it has not completely eliminated the challenges associated with human subjectivity. Dermatologists, even with the help of dermoscopic images, may still differ in their interpretation of skin lesions. This lack of consistency in diagnosis among medical professionals emphasizes the need for additional tools that can offer objective and standardized assessments. Recognizing these challenges, there has been a growing effort to integrate Computer-Aided Diagnostic (CAD) systems to support physicians in the diagnosis of skin lesions.
Early iterations of CAD systems designed for skin cancer segmentation were often approached through complicated multi-step image processing pipelines <cit.>, <cit.>. Techniques employed in these early iterations include color-space transformations, principal component analysis, and the use of hand-crafted features, to name a few. Despite their progress in medical diagnosis, these approaches struggled to accurately delineate affected skin regions. Rule-based and hand-crafted systems often oversimplified complex, variable skin lesions, including artifacts and noise from body hair.
The development of deep learning and its adoption represents a crucial step towards enhancing the efficiency and accuracy of CAD systems. These systems employ advanced neural networks to delineate the boundaries of the lesions, allowing a more precise assessment of their characteristics. Deep learning algorithms, with their ability to automatically learn intricate patterns and features directly from data, have demonstrated superior performance in segmenting skin lesions <cit.>. These algorithms can discern subtle variations in color, texture, and shape, adapting dynamically to the diverse manifestations of skin cancer between different individuals.
Central to the success of deep learning for medical image segmentation is the introduction of the encoder-decoder architecture. Encoder-Decoder architectures implemented via Fully Convolutional Neural Networks (FCNNs) have particularly excelled in this domain and have become the State-Of-The-Art (SOTA) for many segmentation tasks <cit.>. Although highly successful, one of the major drawbacks of FCNN/CNN based approaches is their lack of long-range contextual understanding. Although CNNs excel at capturing local features within an image, they inherently struggle to gather broader context information or a global relationship between different elements. In particular, in the case of skin cancer, where lesions can vary significantly from patient to patient, a global understanding becomes crucial to help the model overcome ambiguities. To overcome context limitations within CNNs, researchers have resorted to larger and deeper models to help improve the overall receptive field through pure convolutions <cit.>. However, this solution comes with its own set of challenges. Larger models require more computational resources, making them computationally expensive and slower to train and deploy. Additionally, the pursuit of a larger receptive field through sheer model size may lead to diminishing returns, emphasizing the need for a more efficient and effective approaches. The integration of self-attention modules introduced in the Transformer <cit.> architecture with convolutional layers has been suggested as a means to enhance the non-local modeling capability <cit.> and offer promising long-range contextual understanding benefits for many downstream tasks.
Originally developed for Natural Language Processing (NLP), the Transformer architecture has seen significant adoption to many computer vision tasks. With the initial Vision Transformer <cit.> that allowed Transformers to perform image classification, researchers were provided with an architecture that is capable of modeling long-range dependencies and gathering global context clues at every stage of the model. However, as a trade-off the self-attention mechanism, central to the Transformer architecture, proves computationally expensive, especially at large spatial dimensions. Additionally, ViTs produce single-scale features, in contrast to multi-scale features typically generated by CNN models <cit.>. This trade-off between global awareness and computational efficiency presents a significant challenge when employing transformer architectures in resource-limited real-world applications.
To overcome current bottlenecks in widely adopted CNN and Transformer architectures we introduce MobileUNETR, a novel end-to-end transformer based encoder decoder architecture for efficient image segmentation. At a high level, challenging and complex image segmentation tasks often benefit from feature extraction capabilities that consider local and global contextual information within the feature encoding stage. However, segmentation approaches typically focus on optimizing the feature extractor while overlooking the importance of developing novel decoding strategies. Common segmentation frameworks in medical imaging, utilizing complex CNN and/or Transformer structures, generally favor excluding Transformer based decoders, opting instead for pure CNNs <cit.>. This choice can be attributed to the fact that, despite being great at capturing global information, Transformers are unable to capture intricate local details which are highly useful when generating accurate segmentation masks. To overcome the over-reliance on pure CNN layers within the decoder stage, we propose a novel highly effective and light-weight decoder capable of learning and integrating local/global details to generate highly accurate segmentation masks.
We demonstrate the advantages of MobileUNETR in terms of model size, run time complexity, and accuracy on four publicly available skin lesion segmentation datasets, including the ISIC 2016 <cit.>, ISIC 2017 <cit.>, ISIC 2018 <cit.>, and PH2 <cit.> datasets. We demonstrate a significant increase in performance across all datasets compared with advanced architectures and training methodologies, while reducing model size and complexity by 10x and 23x, respectively.
Our main contributions can be summarized as follows.
* We propose a novel lightweight and efficient end-to-end Transformer based hybrid model for skin lesion segmentation, where local and global contextual features are enforced at each stage to retain global awareness of a given scene.
* To overcome the over-reliance on CNN based decoding strategies we introduce a novel Transformer based hybrid decoder that simultaneously utilizes low-level and global features at different resolutions for highly accurate and well aligned mask generation.
* The proposed architecture surpasses large and highly complex CNN, Transformer, and Hybrid models in segmentation with only 3 million parameters and 1.3 GFLOP of complexity, resulting in 10x and 23x reductions in parameters and computational complexity, respectively, compared to current SOTA models.
§ RELATED WORKS
Skin lesion segmentation is critical in automated dermatological diagnosis; however, it is difficult due to lesion diversity and the presence of noise in the images. Traditional image processing methods have given way to advanced deep learning systems, particularly Convolutional Neural Networks (CNNs), and then Transformer-based methods, which have considerably improved segmentation accuracy and reliability.
§.§ CNN Based Methods
With the increasing popularity of Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs), these models have become the go-to tools for skin lesion segmentation tasks, creatively addressing difficulties such as feature discernment and data variability management. The field has seen notable developments, such as the introduction of a multistage fully convolutional network (FCN) to the field by <cit.>, which incorporates a parallel integration method to enhance the segmentation of skin lesions' boundaries. <cit.> have contributed similarly by creating an improved convolutional-deconvolutional network specifically optimized for dermoscopic image analysis and integrating various color spaces to better diagnose lesions. <cit.> expanded on this trend of architectural innovation with their DoubleU-Net, which combines multiple U-Net structures to improve segmentation accuracy.
In parallel, efforts to develop automated detection systems have been prominent. <cit.> worked on the early detection of malignant skin lesions using dilated convolutions across multiple architectures such as VGG16, VGG19 by <cit.>, MobileNet by <cit.>, and InceptionV3 by <cit.>, as well as the HAM10000 dataset by <cit.> for training and testing. The use of pre-trained networks and deep learning models is also evident in the multiple winning solutions at the ISIC 2018 Challenge <cit.>, where many built their model on the DeepLab <cit.> architecture using pre-trained weights from PASCAL VOC-2012 <cit.> and used ensemble approaches, among others, with models such as VGG16, U-net, DenseNet by <cit.>, and Inceptionv3, fine-tuning these with additional training iterations for state-of-the-art performance.
<cit.> and <cit.> proved the flexibility of these models in varied context-aware settings by improving feature extraction in CNNs, the former through modified skip connections and the latter with multistage UNets. Furthermore, <cit.> introduced a new focal Tversky loss function to address data imbalance, a significant difficulty in medical imaging, improving the precision recall balance for small lesion structures.
The ISIC 2019 challenge also led to several new studies using CNNs for dermoscopic medical imaging. <cit.>, <cit.>, and <cit.> used a variety of CNN architectures with different data augmentation methods. These studies demonstrated CNNs' ability to segment skin lesions locally, but their performance shortcomings can be attributed to their inability to extract valuable global context information.
§.§ Transformer Based Methods
Being limited to only local features forced researchers to seek new approaches. This caused an evolution towards the usage of global feature-based tools. This evolution is distinguished by a shift from standard CNN-based techniques toward novel ways of using transformers and self-attention mechanisms. <cit.> pioneered the Dense Deconvolutional Network (DDN) in skin lesion segmentation, employing dense layers and chained residual pooling to capture long-range relationships, a significant departure from prior approaches. Furthermore, <cit.> investigated adversarial learning with SegAN, improving segmentation accuracy by adeptly capturing subtle relationships, a significant development in dermatological imaging. <cit.> and <cit.> substantially advanced skin lesion segmentation with their new methodologies. Mirikharaji and Hamarneh implemented a star-shape prior (SSP) in a fully convolutional network to improve accuracy and reliability by penalizing non-star-shaped regions while preserving global structures. The use of shape priors to segment complex skin lesion patterns was demonstrated in this study. <cit.> supplemented this with CPFNet, which uses pyramidal modules to collect global context in feature maps, successfully managing skin lesion variability and enhancing delineation accuracy in intricate lesion patterns.
Using transformers in neural networks, pioneered by <cit.>, was a significant turning point. <cit.> and <cit.> introduced transformers to computer vision. <cit.> demonstrated the effectiveness of self-attention mechanisms in image recognition models, which is helpful for complex skin lesion patterns. Additionally, <cit.> created TransUNet, which combines Transformers and U-Net to improve medical picture segmentation. The strength of TransUNet is in effectively encoding picture patches from CNN feature maps, which is essential for capturing detailed global context in segmentation tasks. <cit.> demonstrated TransUNet's performance in skin lesion segmentation, stressing its superior accuracy and dice coefficient over standard models, emphasizing the benefits of merging CNNs with transformers in medical imaging. Moreover, <cit.> developed the boundary-aware Transformer (BAT) for segmentation of skin lesions.
BAT incorporates a boundary-wise attention gate in its transformer structure to address unclear lesion boundaries, efficiently collecting global and local information in skin lesion imaging. FAT-Net, a feature-adaptive transformer network for segmentation of skin lesion, was introduced by <cit.>. FAT-Net adeptly maintains long-range dependencies and contextual nuances by incorporating an extra transformer branch into the standard encoder-decoder structure, precisely addressing the variability and irregularity in skin lesions and improving melanoma analysis.
§ METHODOLOGY
In this section, we introduce MobileUNETR, our high-performance, efficient, and lightweight architecture for skin lesion segmentation. As shown in Figure <ref>, the core MobileUNETR architecture consists of two main modules: (1) First, a lightweight hybrid encoder that efficiently generates coarse high-level and fine-grained low-level features; and (2) A novel lightweight hybrid decoder that effectively combines multilevel features while factoring in local and global context clues to generate high-accuracy semantic segmentation masks.
§.§ Model Complexity
The overarching goal of the medical imaging community is the pursuit of performance over complexity on a particular task such as skin lesion segmentation. One of the main contributions of the proposed MobileUNETR architecture is to demonstrate that well-constructed lightweight and efficient models can offer much better performance compared to large computationally expensive architectures. As seen in Figure <ref>, the proposed architecture is 10X smaller and 23X more computationally efficient against SOTA architectures in skin lesion segmentation while generating better results. Simplifying the model not only enhances training and performance on small datasets but also facilitates deployment in resource-limited environments.
§.§ Encoder
Two major groups of deep learning architectures exist in medical vision research, CNNs and Transformers, each with their own advantages and disadvantages. CNNs have been the de facto approach for many medical vision applications due to their efficiency, natural inductive biases, and ability to hierarchically encode features. However, despite their success, pure CNN based feature encoders are unable to effectively gain a global contextual understanding of a given scene. Many hand-crafted approaches have been proposed to help CNNs obtain a larger receptive field, such as dilated convolutions <cit.> and deeper models; however, given image size and computational complexity constraints, further research is required to improve overall performance. Unlike CNNs, Transformers are designed to achieve a true global understanding of a scene. However, their computational constraints at large spatial resolutions hinder their adoption for efficient deep learning applications.
By exploiting the natural advantages and disadvantages of CNNs and Transformer architectures, our proposed encoder maximizes feature representation capabilities while significantly minimizing computational complexity and parameter count. At a high level, the feature extraction modules can be broken down into two stages: 1) CNN based local feature extraction and downsampling, 2) Hybrid Transformer/CNN based local and global representation learning.
CNN based local feature extraction: End-to-end transformer models for computer vision, such as ViT and its derivatives <cit.>, result in large, computationally complex models due to the large sequence lengths generated for each input image. Given the sequence-length bottleneck in Transformers and the natural tendency of ViTs to learn low-level features in their early layers <cit.>, a simple CNN based early feature extraction stage can be substituted to significantly reduce the computational complexity of the architecture. Specifically, MobileNet <cit.> downsampling blocks are used within the proposed architecture to minimize the computational complexity of the low-level feature extraction stage without compromising the learned feature representations. Additionally, CNN based features allow the model to better incorporate spatial information compared to pure ViT based approaches while effectively reducing the spatial dimensions of the input data, allowing downstream transformer layers to efficiently learn global feature representations.
Hybrid Transformer/CNN blocks: Once efficient down-sampling is performed via CNNs to mitigate the computational complexity associated with large spatial resolutions, the MobileViT block is used to simultaneously extract local and global representations. The MobileViT block allows us to incorporate the long-range contextual benefits of Transformers while maintaining spatial ordering and local inductive biases. The operation can be broken down into two main components, as seen in Figure <ref>. First, CNN-based depth-wise separable convolution <cit.> is applied to encode spatial information and project features into high-dimensional space. Finally, to model long-range dependencies, the tensor is unfolded into non-overlapping flattened patches, and self attention layers are applied to capture interpatch relationships. This combination allows each feature map to have local and global understanding of the scene at each stage, improving its contextual understanding of the scene.
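A minimal PyTorch sketch of such a hybrid block is given below. It follows the pattern described here (depthwise-separable convolution for the local representation, self-attention over unfolded non-overlapping patches for the global one, followed by folding back and fusion), but the channel counts, normalization, and fusion details are illustrative assumptions rather than the exact MobileUNETR implementation.

import torch
import torch.nn as nn

class HybridLocalGlobalBlock(nn.Module):
    """Sketch of a MobileViT-style block: local (conv) then global (attention) mixing."""

    def __init__(self, channels=64, dim=96, patch=2, heads=4, depth=2):
        super().__init__()
        self.local = nn.Sequential(                  # depthwise + pointwise convolution
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, dim, 1),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim, batch_first=True)
        self.global_rep = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Conv2d(dim, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.patch = patch

    def forward(self, x):
        B, C, H, W = x.shape
        p = self.patch
        assert H % p == 0 and W % p == 0, "spatial size must be divisible by the patch size"
        y = self.local(x)                            # (B, dim, H, W)
        D = y.shape[1]
        # unfold: attention runs across patch positions for each intra-patch offset
        y = y.reshape(B, D, H // p, p, W // p, p).permute(0, 3, 5, 2, 4, 1)
        y = y.reshape(B * p * p, (H // p) * (W // p), D)
        y = self.global_rep(y)
        # fold back to the original spatial layout, project, and fuse with the input
        y = y.reshape(B, p, p, H // p, W // p, D).permute(0, 5, 3, 1, 4, 2).reshape(B, D, H, W)
        return self.fuse(torch.cat([x, self.proj(y)], dim=1))

# e.g. HybridLocalGlobalBlock()(torch.randn(1, 64, 32, 32)) has shape (1, 64, 32, 32)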
§.§ Decoder
Most segmentation models emphasize the importance of the encoder stage for great segmentation performance. Here, local and global understanding is favored to ensure that relevant features that contain both are learned, compressed, and passed forward to the next stage. Most encoder-decoder approaches that use CNNs, Transformers, or CNN/Transformer dual encoders for feature extraction heavily rely on pure convolution to map extracted features to the final segmentation mask. A drawback of this approach is that, by using pure CNN layers within the decoder, we force the model to use information extracted at the bottleneck to learn features that ensure local continuity without providing it the capability to recalibrate itself using global contextual information. Additionally, naively stacking CNN layers can lead to large decoder modules, adding to the overall computational complexity of the encoder-decoder architecture. Our novel hybrid decoder architecture is a fast, computationally efficient, and lightweight approach that allows the model to hierarchically construct the final segmentation mask while ensuring that local and global context features are used at every stage of the decoding process. The proposed decoder module, at 1.5 million parameters, efficiently combines the benefits of CNN and Transformer architectures into an alternative to CNN based decoding methods.
Simple Hybrid Decoder: Typical CNN decoder modules in medical imaging <cit.> extract and refine the features of the encoder with a combination of transpose and standard convolutions. Using this structure allows the model to hierarchically increase the spatial resolution while refining features at each level with the help of features provided via skip connections. Despite their success, decoders that rely solely on CNNs face challenges in dynamically adapting their own features to ensure that the features learned at each stage are globally aligned. The proposed lightweight decoder performs three operations at each stage to ensure that the features extracted at each decoding stage are locally and globally aligned (Figure <ref>e). First, the feature map from the previous stage is upsampled via transpose convolutions. Next, we perform a local refinement of the upsampled features by combining information with the respective skip connections. Finally, the Transformer/CNN hybrid layers are used to allow the model to dynamically adjust itself based on long-range global contexts. By combining local and global refinement stages, we allow the decoder to generate features that improve segmentation results, with local and global boundaries that are well aligned at each stage.
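The three decoding operations can be sketched as follows; this is a simplified illustration in which a single multi-head attention layer stands in for the Transformer/CNN hybrid refinement, and all channel counts are placeholders rather than the actual MobileUNETR configuration.

import torch
import torch.nn as nn

class HybridDecoderStage(nn.Module):
    """Sketch of one decoder stage: upsample, local refinement with the skip
    connection, then a lightweight global (attention-based) refinement."""

    def __init__(self, in_ch, skip_ch, out_ch, heads=4):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.local = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(out_ch)
        self.attn = nn.MultiheadAttention(out_ch, heads, batch_first=True)

    def forward(self, x, skip):
        x = self.up(x)                                # (1) upsample to the skip resolution
        x = self.local(torch.cat([x, skip], dim=1))   # (2) local refinement with the skip
        B, C, H, W = x.shape
        seq = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        q = self.norm(seq)
        seq = seq + self.attn(q, q, q, need_weights=False)[0]   # (3) global refinement
        return seq.transpose(1, 2).reshape(B, C, H, W)

# e.g. HybridDecoderStage(128, 64, 64)(torch.randn(1, 128, 16, 16),
#                                      torch.randn(1, 64, 32, 32)) has shape (1, 64, 32, 32)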
§ EXPERIMENTAL RESULTS
To showcase MobileUNETR's effectiveness as a highly competitive segmentation architecture, we perform multiple experiments across widely popular skin lesion segmentation datasets and compare the performance of the proposed model against high-performing segmentation models.
§.§ Dataset
To evaluate the performance of our efficient and lightweight model, MobileUNETR, we used four publicly available datasets for segmenting skin lesions. The International Skin Imaging Collaboration (ISIC) has developed and released three widely used datasets ISIC 2016, ISIC 2017 and ISIC 2018 for the task of skin lesion segmentation. Additionally, we evaluated our model performance on the PH2 dataset made available by Dermatology Service of Hospital Pedro Hispano, Portugal. The dataset breakdowns are provided below.
§.§ Implementation Details
Our proposed MobileUNETR and accompanying experiments are trained and evaluated using PyTorch on a server equipped with a CPU and an RTX 3090 GPU. All models follow a simple training procedure with the AdamW <cit.> optimizer with parameters β_1 = 0.9 and β_2 = 0.999, employing a batch size of 8. The experimental setup incorporates a linear warm-up stage spanning 40 epochs, during which the learning rate gradually increases from 0.0004/40 to 0.0004. Subsequently, a cosine annealing scheduler is employed to decay the learning rate over 400 epochs. Adhering to established practices, we employ straightforward data preparation and augmentation techniques available in PyTorch, ensuring the accessibility and reproducibility of our results.
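The optimizer and learning-rate schedule described above can be reproduced with standard PyTorch utilities; in the following sketch the model is a stand-in module and the per-epoch training call is omitted.

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(10, 2)                 # stand-in for MobileUNETR
base_lr = 4e-4
optimizer = AdamW(model.parameters(), lr=base_lr, betas=(0.9, 0.999))

# linear warm-up from base_lr/40 to base_lr over 40 epochs,
# followed by cosine annealing decay over the next 400 epochs
warmup = LinearLR(optimizer, start_factor=1.0 / 40, end_factor=1.0, total_iters=40)
cosine = CosineAnnealingLR(optimizer, T_max=400)
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[40])

for epoch in range(440):
    # train_one_epoch(model, optimizer, loader)   # training and validation omitted
    scheduler.step()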
§.§ Results on ISIC 2016
The ISIC 2016 dataset represents one of the first standardized skin lesion segmentation tasks, comprising 900 training images and 300 testing images. Our proposed MobileUNETR is benchmarked against nine different architectures, encompassing FCNNs, Attention-augmented FCNNs, Generative Adversarial Network (GAN)-based methods, and Transformer-based methods. Performance results across seven metrics are consolidated in Table 1, showing a 2.17% and 1.21% increase in the IoU and Dice metrics, respectively.
§.§ Results on ISIC 2017
The ISIC 2017 dataset broadens the scope of skin lesion segmentation by expanding the data corpus. This dataset comprises 2500 training images with 600 testing images. Our proposed MobileUNETR is systematically benchmarked against 12 diverse architectures. We show that our model consistently demonstrates improvements in IoU, Dice, and accuracy metrics across all architectures while maintaining a lightweight and efficient design. Results are presented in Table 2, where we obtain a 2.47% and 1.84% increase in the IoU and Dice metrics, respectively.
§.§ Results on ISIC 2018
The ISIC 2018 dataset stands out as the most comprehensive among commonly utilized skin lesion segmentation datasets. The dataset comprises 2694 training images together with 1000 testing images. Similar to previous experiments, the proposed MobileUNETR is benchmarked against a diverse set of 10 architectures, covering a wide range of designs. Performance outcomes across seven metrics for ISIC 2018 are consolidated in Table 3. Our results consistently reveal improvements of 2.54% and 1.71% in the IoU and Dice metrics, respectively, across all architectures, while maintaining a lightweight and efficient design.
§.§ Results on ISIC PH2
Finally, we present the evaluation of MobileUNETR's performance using the PH2 dataset. Unlike the earlier ISIC datasets, PH2 represents a relatively compact dataset, providing an opportunity to highlight the generalization capabilities of our hybrid architecture in handling smaller datasets. Aligning with our previous experiments, the proposed architecture is benchmarked against nine diverse architectures, and performance results are presented in Table 4. Our results consistently reveal improvements of 2.68% and 1.3% in the IoU and Dice metrics, respectively. Successful experiments on PH2 demonstrate the adaptability of our proposed model for applications involving sparse datasets.
§.§ Comparison to Advanced Training Techniques
As an alternative to designing lightweight deep learning architectures, a class of advanced training techniques called Parameter Efficient Fine Tuning (PEFT) <cit.> has been prevalent in recent research. To demonstrate that, despite the compact size of the architecture, our model achieves results that rival those of larger architectures employing advanced training techniques, we compare our method with recent solutions employing these techniques. Table 5 showcases the effectiveness of well-designed lightweight architectures, proving they can be as effective as large complex models and emphasizing that over-parameterization is not the future of modern deep learning.
§ CONCLUSION
Encoder-decoder architectures provide researchers with a strong architectural paradigm for medical image segmentation. Although they have been used successfully to push the boundaries of medical image segmentation, larger and more complex versions of the encoder-decoder paradigm may not be the solution for modern deep learning architectures. This paper introduces MobileUNETR, an innovative and efficient hierarchical hybrid Transformer architecture tailored for image segmentation. Unlike existing methods, MobileUNETR efficiently integrates local and global information in both the encoder and decoder stages, leveraging the benefits of convolutions and transformers. This integration allows the encoder to extract local and global features during the encoding stage, while allowing the decoder to reconstruct these features, ensuring both local and global alignment in the final segmentation mask. By incorporating local and global features at each level, MobileUNETR avoids the need for large, complex, and over-parameterized models. This not only enhances performance, but also significantly reduces model size and complexity. Extensive experiments were carried out that compared and contrasted our proposed medical image segmentation method on four widely used public datasets (ISIC 2016, ISIC 2017, ISIC 2018, and the PH2 dataset). Comparative analyses with state-of-the-art methods demonstrate the effectiveness of our MobileUNETR architecture, showcasing superior accuracy and excellent efficiency in model training and inference. Across all datasets, MobileUNETR demonstrates a 1.3% to 2.68% increase in the Dice and IoU metrics with a 10x and 23x reduction in parameters and computational complexity, compared to current SOTA models. We hope that our method can serve as a strong foundation for medical imaging research, as the potential applications of MobileUNETR in image segmentation are extensive. Additionally, we hope that our work has opened the door to further research on efficient architectures in medical imaging.
splncs04
|
http://arxiv.org/abs/2409.02742v1 | 20240904142134 | Phosphorus Abundances of B-Type Stars in the Solar Neighborhood | [
"Yoichi Takeda"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Phosphorus Abundances of B-Type Stars in the Solar Neighborhood
Yoichi T a k e d a
11-2 Enomachi, Naka-ku, Hiroshima-shi, Japan 730-0851
e-mail: [email protected]
Month Day, Year
Phosphorus abundances of ∼ 80 apparently bright sharp-lined early-to-late B-type
stars on the upper main sequence are determined by applying the non-LTE analysis
to the P ii line at 6043.084 Å, with an aim of getting information on the P
abundance of the galactic gas (from which these young stars were formed) in
comparison with the reference solar abundance (A_⊙≃ 5.45).
These sample stars turned out to be divided into two distinct groups with respect
to their P abundances: (1) chemically peculiar late B-type stars of HgMn group show
considerable overabundances of P (supersolar by ∼ 0.5–1.5 dex), the extent of which
progressively increases with T_ eff. (2) In contrast, the P abundances of
normal B-type stars are comparatively homogeneous, though a notable difference is
observed between the LTE and non-LTE cases. Although their LTE abundances are near-solar,
a slight gradual trend with T_ eff is observed. However, after applying the negative
non-LTE corrections (amounting ∼ 0.1–0.5 dex), this T_ eff-dependence is
successfully removed, but the resulting non-LTE abundances (their mean is ≃ 5.20)
are appreciably underabundant relative to the Sun by ∼ 0.2–0.3 dex.
The cause of this systematic discrepancy (contradicting
the galactic chemical evolution) is yet to be investigated.
Galaxy: solar neighborhood – stars: abundances – stars: chemically peculiar –
stars: early-type – stars: population I
§ INTRODUCTION
Astrophysical interest in the cosmic abundance of phosphorus (P; Z = 15) is rapidly
growing these days. While one reason is that the mechanism of how this element is synthesized
in the chemical evolution of the Galaxy is not yet well understood, another intriguing
motivation lies in its astrobiological context.
That is, P is (along with H, C, N, O) an indispensable key element
for life, which is the backbone of nucleic acids (RNA, DNA) or cell
membranes, and plays a significant role in producing/reserving
the vital energy via ATP (Adenosine TriPhosphate).
Especially, the original P abundance of protoplanetary material (or star-forming gas)
in comparison with that of the Sun (comparatively P-rich) is an important factor,
because a substantially subsolar case would make it difficult to leave a sufficient amount of P
for life on the surface of planets due to its strongly partitioning nature in the planetary
core (Hinkel et al. 2020).
Accordingly, not a few stellar spectroscopists devoted their energies to P abundance determinations
since the 2010s, and more than a dozen papers have been published during the short period of
the past decade (see, e.g., Table 1 of Sadakane & Nishimura 2022, Sect. 1 of Maas et al. 2022,
and the references therein). These authors established the phosphorus abundances of
various late-type (FGK-type) stars by using neutral P i lines in the near-infrared
region (Y-band or H-band) or in the ultraviolet region (∼ 2135Å).
It has thus been revealed that [P/Fe] (logarithmic P-to-Fe ratio) tends
to progressively increase with a decrease in the metallicity ([Fe/H])
like α-elements for stars of disk population (from [P/Fe] ∼ 0
at [Fe/H] ∼ 0 to [P/Fe] ∼ +0.5 at [Fe/H]∼ -1)
whereas it turns to drop again in the metallicity regime of halo
population ([P/Fe] ∼ 0 at [Fe/H] ≲ -2), as shown
in Fig. 1 of Bekki & Tsujimoto (2024).
While several theoreticians tried to explain this trend of [P/Fe]
mainly in terms of the P production by core-collapse supernovae,
Bekki & Tsujimoto (2024) recently argued that oxygen–neon (ONe)
novae (triggered in close binary systems including heavier white
dwarfs) should play a significant role in the galactic nucleosynthesis
of P.
However, these studies are directed only to comparatively cool stars of lower mass (typically
around ∼ 1 M_⊙) which have ages on the order of ∼ 10^9–10^10 yr.
Meanwhile, much less effort has been made for young hotter stars, such as B-type stars
of ∼ 3–10 M_⊙ (reflecting the gas composition of the Galaxy in the more
recent past; i.e. several times ∼ 10^7–10^8 yr ago). Actually, phosphorus
abundance determinations of B stars are generally scarce, excepting HgMn stars (chemically
peculiar late B-type stars showing considerable overabundance of P by up to ∼ 1–2 dex),
for which P abundances have been reported for quite a number of stars as compiled by
Ghazaryan & Alecian (2016).
This is presumably due to the difficulty in finding useful P lines of sufficient
strengths. In the atmosphere of B-type stars, P atoms are mainly in the ionization stages
of P ii (late–mid B) or P iii (early B), as illustrated in Fig. 1.
Since available P ii or P iii lines in the optical wavelength
regions are all of high-excitations (χ_ low > 10 eV), they are
fairly weak in strength for the case of usual (near-solar) P abundances and thus
not easy to detect.[Although strong low-excitation P ii or
P iii lines do exist in the ultraviolet region, they are not suitable for
reliable abundance determinations (i.e., too strong and apt to suffer blending).]
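For orientation, the temperature dependence of the dominant ionization stage can be illustrated with a simple LTE Saha-equation estimate; the electron density and the unit partition-function ratio in the sketch below are rough assumed values for the line-forming layers, not quantities taken from the model atmospheres used in this work.

import numpy as np
from scipy.constants import k, m_e, h, eV

def saha_ratio(T, n_e, chi_ion_eV, U_ratio=1.0):
    """LTE number ratio N(i+1)/N(i) from the Saha equation
    (T in K, electron density n_e in m^-3, ionization energy in eV)."""
    return (2.0 * U_ratio / n_e) * (2.0 * np.pi * m_e * k * T / h ** 2) ** 1.5 \
        * np.exp(-chi_ion_eV * eV / (k * T))

n_e = 3e20                        # m^-3; rough photospheric value (assumed)
chi_P1, chi_P2 = 10.49, 19.77     # ionization energies of P I and P II [eV]
for T in (10000.0, 12000.0, 14000.0):
    print("T = %6.0f K   P II/P I = %9.2e   P III/P II = %9.2e"
          % (T, saha_ratio(T, n_e, chi_P1), saha_ratio(T, n_e, chi_P2)))
# P II dominates at the cooler end and P III progressively takes over toward
# higher temperatures, in line with the qualitative behaviour shown in Fig. 1.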
As such, published studies of P abundances for “normal” B-type stars are quite limited:
* Phosphorus abundances are reported for the well-studied benchmark sharp-lined star
ι Her (B3 IV), as summarized in Table 1 of Golriz & Landstreet
(2017). However, the results by three studies (Pintado & Adelman 1993; Peters & Polidan
1985; Peters & Aller 1970; based either on P ii or P iii lines) show
rather large diversities in A[A is the logarithmic number abundance
of the element (P) relative to that of hydrogen with the usual normalization of
A_ H = 12 for H.] from ∼ 5.8 to ∼ 6.4; i.e., apparently P-rich
if simply compared with the solar abundance (A_⊙ = 5.45).[
In this article, Anders & Grevesse's (1989) solar photospheric P abundance of
A_⊙ = 5.45 is adopted as the reference, in order to keep consistency with
Kurucz's (1993) ATLAS9/WIDTH9 program. See Appendix A for a more detailed discussion
on this subject.]
* Pintado & Adelman (1993) also determined the P abundance of γ Peg
(B2 IV) from P iii lines to be A ∼ 5.4 (i.e., almost solar).
* In Allen's (1998) abundance studies on early A and late B stars, attempts
of P abundance determinations for 7 normal stars based on P ii lines in
the optical region turned out to be unsuccessful, though 5 HgMn stars
were confirmed to be significantly P-rich (A ∼ 5.9–7.8).
* Fossati et al.'s (2009) analysis on 2 normal late B-type stars (21 Peg and
π Cet) based on P ii lines yielded [P/H][As usual, [X/H]
is the differential abundance of element X relative to the Sun;
i.e., [X/H] ≡ A_ star( X)- A_⊙( X).] ∼ +0.3 dex
(moderately overabundant) for both.
* In Niemczura et al.'s (2009) abundance study on late B-type stars (including HgMn
stars), the P abundances of 3 normal stars were determined as A = 6.01 (HD 49481),
5.60 (HD 50251), and 5.28 (HD 182198); i.e., nearly solar or moderately supersolar.
* Przybilla et al. (2006) derived a near-solar P abundance of A = 5.53 (± 0.06)
for the B-type supergiant β Ori (B8 Iae) based on 4 P ii lines.
Therefore, the phosphorus abundance problem of normal B-type stars is far from being
settled (near-solar? or somewhat supersolar?), given such insufficient data
by different investigators for only a small number of stars. What we require
is a comprehensive study based on a large sample of stars, by which a wealth
of homogeneous P abundance data would be obtained.
Motivated by this consideration, I decided to conduct an extensive analysis of
P abundances for ∼ 80 young B-type stars (including HgMn stars) by using
the P ii line at 6043.08 Å (hereinafter often
referred to as P ii 6043; it is the strongest P ii line in
the optical region and almost free from any appreciable blending),
in order to estimate the P composition of the galactic gas
at the time of their formation.
Besides, an emphasis was placed on incorporating the non-LTE effect in
P abundance determinations based on statistical equilibrium calculations.
Since non-LTE calculation for P has never been carried out
so far (to the author's knowledge) and all the past investigations mentioned
above were done based on the assumption of LTE, it would be interesting
to see how the new non-LTE results compare with the previous ones.
In addition, the significance of the non-LTE effect in the determination of
phosphorus abundance in the Sun (and FGK-type stars) based on P i lines
in the near-infrared region was also examined as a related topic.
This supplementary analysis is separately presented in Appendix A.
§ OBSERVATIONAL DATA
The observational materials used in this study are the high-dispersion
(R ∼ 70000) and high-S/N (∼ 200–700) spectra obtained by HIgh Dispersion
Echelle Spectrograph (HIDES) placed at the coudé focus of the 188 cm reflector
at Okayama Astrophysical Observatory, which are the same as already employed in the
previous two papers of the author. (i) 64 early-to-late B-type stars (mostly normal stars
but some are late-B chemically peculiar stars) observed in 2006 October, which were
studied by Takeda et al. (2010; hereinafter referred to as Paper I) for their O and
Ne abundance determinations by using O i 6156–8 and Ne i 6143 lines.
(ii) 21 late B-type stars (HgMn-type peculiar stars and normal stars) observed
in 2012 May, which were analyzed by Takeda et al. (2014; hereinafter Paper II)
for their Na abundance determinations based on Na i 5890/5896 lines.
See these original papers for more details about the adopted spectra.
The program stars (85 in total) are all sufficiently sharp-lined (projected
rotational velocities are v_ esin i ≲ 60 km s^-1) and are located
in the solar neighborhood (within ≲ 1 kpc). Their fundamental stellar data
are listed in Table 1, where two groups (i) and (ii) are presented separately.
These 85 targets are plotted on the log L vs. log T_ eff diagram in Fig. 2,
where Lejeune & Schaerer's (2001) standard theoretical evolutionary tracks
(solar metallicity, non-rotating models with mass loss; see Sect. 2 therein
for the details of the input physics they adopted)
corresponding to different stellar masses are also depicted.
This figure indicates that the sample stars are in
the mass range of 2.5 M_⊙≲ M ≲ 9 M_⊙.
§ ATMOSPHERIC PARAMETERS
Regarding the effective temperature (T_ eff) and the surface gravity (log g)
of each star, the same values as adopted in Papers I and II are used unchanged (cf. Table 1),
which were determined from colors (b-y, c_1, m_1, and β)
of Strömgren's uvbyβ photometric system.
As to the microturbulence (ξ), the values assumed in Paper I
[3 (± 2) km s^-1 for early-to-late B-type stars] and Paper II
[1 (± 1) km s^-1 for late B-type stars] were not consistent.
In this paper, T_ eff-dependent values are assigned as
ξ = 1 (± 1) km s^-1 (10000 K < T_ eff < 16500 K) and
ξ = 2 (± 1) km s^-1 (16500 K < T_ eff < 23000 K).
This is due to the fact that ξ plays a significant role only for the case of
P-rich chemically peculiar stars (HgMn stars) found only among late B-type stars,
while the P-line strengths of normal stars (existing over the entire T_ eff
range) are so weak that their P abundances are practically ξ-independent.
Therefore, the same value as in Paper II (ξ = 1 km s^-1; specific
to late B-type stars) is adopted for T_ eff < 16500 K, while a tentative
value of 2 km s^-1 is roughly assumed at T_ eff > 16500 K
(all are normal stars with weak lines of phosphorus).
The model atmospheres adopted for each of the targets are the same as in Papers I
and II, which are the solar-metallicity models constructed by two-dimensionally
interpolating Kurucz's (1993) ATLAS9 model grid in terms of T_ eff and log g.
§ SPECTRUM FITTING ANALYSIS AND EVALUATION OF EQUIVALENT WIDTHS
As already mentioned in Sect. 1, it is important to employ a spectral line
of as large transition probability (log gf) as possible for successful
P abundance determinations of B-type stars, because lines tend to be
considerably weak and hard to detect for near-normal P abundances.
In this respect, the suitable candidate lines in the optical region
are the P ii lines of 4s ^3P^∘ – 4p ^3D transition
(lower excitation potentials of χ_ low∼ 10.8 eV, multiplet 5).
They are at 6024.13, 6034.34, 6043.08, 6087.84, and 6165.60 Å
and have log gf values of +0.20, -0.15, +0.44, -0.38, and -0.41,
respectively, according to the VALD database (Ryabchikova et al. 2015).
Therefore, we invoke the P ii line at 6043.08 Å, which is the strongest
one among these and also free from any appreciable blending. Besides,
this line has another advantage that it lies almost in the middle part of
the relevant Echelle order (covering 5990–6100 Å) where the S/N ratio
is comparatively higher.
The procedures of analysis (spectrum fitting, equivalent width derivation,
estimating abundance errors due to parameter uncertainties) are essentially the same
as adopted in Papers I and II (cf. Sect. 4 therein). Note that all the calculations
in this section are done with the assumption of LTE at this stage.
§.§ Synthetic Spectrum Fitting
First, a spectrum-fitting technique was applied to the 6040–6050 Å region
(comprising the P ii 6043 line), by which the best-fit between theoretical and
observed spectra is accomplished. Here, the parameters varied are the abundances of
P, O, and Ne (+ Mn if necessary), rotational broadening velocity (v_ esin i),
and radial velocity (V_ rad). The data of all atomic lines included in this
wavelength region were taken from the VALD database (Ryabchikova et al. 2015).
Specifically, the data for the relevant P ii line at 6043.084 Å are
χ_ low = 10.802 eV (lower excitation potential), log gf = +0.442
(logarithmic gf value), Gammar = 9.22 (radiation damping parameter),
Gammas = -5.76 (Stark effect damping parameter), and Gammaw = -7.73 (van der Waals
effect damping parameter).[
Gammar is the radiation damping width (s^-1), logγ_ rad.
Gammas is the Stark damping width (s^-1) per electron density (cm^-3)
at 10^4 K, log(γ_ e/N_ e).
Gammaw is the van der Waals damping width (s^-1) per hydrogen density
(cm^-3) at 10^4 K, log(γ_ w/N_ H).]
The phosphorus abundances could be successfully established for 83 stars
(out of 85 program stars), except for HD 029248 and HR 3652, for which
the P abundance was tentatively fixed at an arbitrary value in the fitting.
The agreement between the theoretical spectrum (for the solutions
with converged parameters) with the observed spectrum for each star
is shown in Fig. 3.
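The parameter-fitting loop can be illustrated schematically as follows. The single-Gaussian "synthesis" function below is only a toy stand-in for the actual line-synthesis code (which computes the theoretical spectrum from the full line list and model atmosphere), so its scaling constants are arbitrary; the example merely demonstrates the simultaneous least-squares solution for (A, v_e sin i, V_rad).

import numpy as np
from scipy.optimize import least_squares

C_KMS = 299792.458                        # speed of light [km/s]

def toy_synth(wav, A, vsini, vrad, w0=6043.084):
    """Toy stand-in for the synthetic spectrum around the P II 6043 line:
    a single Gaussian whose depth scales with the abundance A and whose
    width grows with v_e sin i (a ~5 km/s intrinsic width is assumed)."""
    center = w0 * (1.0 + vrad / C_KMS)
    sigma = w0 * np.sqrt(vsini ** 2 + 5.0 ** 2) / C_KMS
    depth = 0.02 * 10.0 ** (A - 5.45)     # arbitrary depth--abundance scaling
    return 1.0 - depth * np.exp(-0.5 * ((wav - center) / sigma) ** 2)

def fit_segment(wav, flux, p0=(5.45, 20.0, 0.0)):
    """Least-squares solution for (A, v_e sin i, V_rad)."""
    return least_squares(lambda p: toy_synth(wav, *p) - flux, x0=p0).x

wav = np.linspace(6040.0, 6050.0, 400)
obs = toy_synth(wav, 6.0, 30.0, 10.0) + np.random.default_rng(1).normal(0.0, 0.002, wav.size)
print(fit_segment(wav, obs))              # recovers approximately (6.0, 30.0, 10.0)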
§.§ Equivalent Widths and Their Errors
Next, the equivalent width of the P ii 6043 line W_6043 was inversely
evaluated from the abundance solution resulting from the fitting analysis.
The W_6043 values derived in this manner are given in Table 1. While
this line is generally weak for ordinary B stars (1 mÅ≲ W_6043≲ 10 mÅ),
it can be stronger (up to W_6043∼ 100 mÅ) for P-rich peculiar stars (cf. Fig. 4a).
The error involved with W_6043 was estimated as
δ W ∼ ϵ W_6043 (1 - R_0)/R_0
according to Takeda (2023; cf. Sect. 6.2 therein)
where ϵ (≡ ( S/N)^-1) is the random fluctuation of the
continuum level and R_0 (≡ 1 - F_0/F_ c) is the line depth
at the line center.
By inserting the S/N ratios (measured around the P ii 6043 line; see column 12
in Table 1) into Eq. (1), typical values of δ W were found to be
∼ 1–3 mÅ in most cases[
Errors of W_6043 evaluated by Cayrel's (1988) formula (depending on S/N, pixel
size, and line widths) were found to be smaller (typically by several times) than
δ W defined by Eq. (1), and thus not taken into account.] (or up to
∼ 10 mÅ in the exceptional case of low S/N and very shallow R_0),
as depicted by error bars attached to the symbols in Fig. 4a.
The impact of δ W on the P abundance (denoted as δ_W) can be significant
(e.g., a few tenths dex or even more) for the very weak line case where W_6043
and δ W are of similar size.
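In practice Eq. (1) amounts to a one-line estimate; a minimal sketch with representative (not tabulated) numbers is:

def ew_error(W, R0, snr):
    """Equivalent-width error from Eq. (1): delta W ~ (1/SNR) * W * (1 - R0) / R0
    (W in mA, R0 the central line depth, snr the S/N ratio)."""
    return W * (1.0 - R0) / (R0 * snr)

# e.g. a weak line of W = 5 mA with central depth R0 = 0.02 observed at S/N = 300
print(ew_error(5.0, 0.02, 300.0))   # ~0.8 mA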
§.§ Impact of Parameter Uncertainties
How the ambiguities in atmospheric parameters (T_ eff, log g,
and ξ) affect the P abundances was estimated by repeating the analysis on
the W_6043 values while perturbing these standard parameters interchangeably
by ± 3%, ± 0.2 dex, and ± 1 km s^-1, which are the typical
uncertainties for T_ eff and log g (cf. Sect. 3 in Paper I) and
for ξ (cf. Sect. 3 of this paper).
The resulting A^ L (P abundances in LTE),
δ_T± (abundance changes by perturbations of T_ eff),
δ_g± (abundance changes by perturbations of log g), and
δ_ξ± (abundance changes by perturbations of ξ)
are plotted against T_ eff in Figs. 4b, 4c, 4d, and 4e,
respectively.
As seen from Figs. 4c and 4d, both |δ_T| and |δ_g|
are ≲ 0.1 dex and thus not very significant.
According to Fig. 4e, |δ_ξ| is negligibly small for normal stars,
while |δ_ξ| amounts up to ∼ 0.1 dex for P-rich late B-type
peculiar stars. The error bars attached to the symbols in Fig. 4b
are the root-sum-squares of δ_W, δ_T, δ_g,
and δ_ξ.
§ STATISTICAL EQUILIBRIUM CALCULATIONS FOR P II
§.§ Atomic Model
The non-LTE calculations for P ii were carried out based on the P ii model
atom comprising 83 terms (up to 3s^2 3p 9g at 154582 cm^-1) and 1206 radiative
transitions, which was constructed by consulting the updated atomic line data
(filename: “gfall21oct16.dat”)[http://kurucz.harvard.edu/linelists/gfnew/]
compiled by Dr. R. L. Kurucz. Though the contribution of P i was neglected,
P iii was taken into account in the number conservation of total P atoms.
Regarding the photoionization cross section, the data calculated by Nahar et al.
(2017)[The cross-section profiles (as functions of ionizing photon energy)
were roughly digitized from their Fig. 1, where attention was paid to reproduce
only the global trend, because many sharp resonance peaks included in the
original data are very difficult to read out.]
were used for the lowest 4 terms (^3P, ^1D, ^1S, and ^5S^∘),
while the hydrogenic approximation was assumed for the remaining terms.
Otherwise (such as the treatment of collisional rates), the recipe described in Sect. 3.1.3
of Takeda (1991) was followed (inelastic collisions due to neutral hydrogen
atoms were formally included as described therein, though insignificant
in the atmosphere of early-type stars considered here).
§.§ Grid of Models
The calculations were done on a grid of 36 (= 9 × 4)
solar-metallicity model atmospheres
resulting from combinations of nine T_ eff values
(9000, 10000, 12000, 14000, 16000, 18000, 20000, 22000, and 24000 K)
and four log g values (3.0, 3.5, 4.0, and 4.5), while assuming ξ = 2 km s^-1.
Regarding the input P abundance, three values of A(P) = 4.45,
5.45, and 6.45 (corresponding to [P/H] = -1, 0, and +1) were assumed,
resulting in three kinds of non-LTE grids.
The depth-dependent non-LTE departure coefficients to be used for each star were
then evaluated by interpolating the grid (for each [P/H])
in terms of T_ eff and log g.
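As an illustration of this interpolation step, a minimal sketch is shown below; the tabulated quantity, its values and the target stellar parameters are fictitious placeholders (in practice the depth-dependent departure coefficients of all relevant levels are interpolated for each [P/H] grid).
```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# grid of the statistical-equilibrium calculations (Sect. 5.2)
teff_grid = np.array([9000, 10000, 12000, 14000, 16000, 18000, 20000, 22000, 24000])
logg_grid = np.array([3.0, 3.5, 4.0, 4.5])

# placeholder: departure coefficient b_l of the lower level at one optical depth,
# tabulated at each (Teff, log g) grid point (shape 9 x 4); the values are fictitious
b_l_grid = 1.0 + 0.02 * np.random.default_rng(1).random((9, 4))

interp = RegularGridInterpolator((teff_grid, logg_grid), b_l_grid)

# bilinear interpolation to the parameters of a program star (hypothetical example)
b_l_star = interp([[13460.0, 3.8]])[0]
print(f"interpolated b_l = {b_l_star:.3f}")
```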
§ NON-LTE EFFECT ON P ABUNDANCE DETERMINATIONS
§.§ Characteristic Trends
Fig. 5 displays the l_0^ NLTE(τ)/l_0^ LTE(τ) (the
non-LTE-to-LTE line-center opacity ratio; ≃ b_ l) and
S_ L(τ)/B(τ) (the ratio of the line source function to the Planck
function; ≃ b_ u/b_ l) for the transition
relevant to the P ii 6043 line (b_ l and b_ u are the non-LTE
departure coefficients for the lower and upper levels) as functions of optical depth
at 5000Å for selected representative cases.
Likewise, Fig. 6 illustrates how the theoretical equivalent widths calculated in LTE
(W^ L) as well as in non-LTE (W^ N) and the corresponding non-LTE
corrections (Δ≡ A^ N - A^ L, where A^ L and A^ N are
the abundances derived from W^ N with LTE and non-LTE) depend upon T_ eff.
The following characteristic trends are read from these figures.
* As seen from Fig. 6, the inequality W^ N > W^ L (and Δ < 0) holds
in most cases (except for the high T_ eff end at ≳ 20000 K where the trend
is inverse), which means that the non-LTE effect tends to strengthen the
P ii 6043 line.
* For a given [P/H], |Δ| almost correlates with W as well as T_ eff
in the sense that |Δ| takes the largest value around T_ eff∼ 15000 K
where W reaches a maximum, as recognized from each panel of Fig. 6.
This is understandable because an increase of W makes the line formation zone shallower
where the departure from LTE is larger.
* As to g-dependence of the non-LTE effect, |Δ| tends to be larger for lower
log g (i.e., lower density atmosphere) as seen at T_ eff≲ 15000 K,
though the situation is not so simple at the higher T_ eff regime
where Δ gradually shifts in the direction of changing its sign.
* A more important factor significantly affecting the degree of departure from LTE is
the P abundance ([P/H]) assumed in the calculations. That is, the non-LTE effect
(|Δ|) tends to be progressively less significant with an increase in [P/H]
(Fig. 6d → Fig. 6e → Fig. 6f) if compared at the same
T_ eff and log g, despite that the line strength (W) is enhanced
with increasing [P/H] (Fig. 6a → Fig. 6b → Fig. 6c).
* This [P/H]-dependence may be understood by considering the mechanism controlling
the level populations for the relevant P ii 6043 line.
The lower level (4s ^3P^∘; χ_ low≃ 10.8 eV)
of this transition is radiatively connected to the ground level (3p^2 ^3P;
χ_ low≃ 0 eV) by lines of large transition probabilities at
∼ 1150–1160 Å. Then, this strong UV transition is almost in the condition
of radiative detailed balance (if it is sufficiently optically thick), which
makes the 4s ^3P^∘ level as if “meta-stable”.
In this case, if the subordinate lines originating from 4s ^3P^∘
become optically thin, this level would be overpopulated (b > 1) by cascading
from the upper levels. Here, the P abundance would play a significant role
in the sense that lower [P/H] (thinner optical thickness of subordinate lines)
leads to more enhanced cascades and larger overpopulation. Actually, the extent
of overpopulation systematically increases with a decrease in [P/H] if compared
at the same depth (dotted lines → solid lines →
dashed lines in Fig. 5a–5c).
* Note, however, that this argument is based on the presumption that
the 3p^2 ^3P – 4s ^3P^∘ UV transition is optically
thick enough to be in detailed balance. This condition is destined to break down
when double ionization proceeds and P ii is replaced by P iii
(T_ eff≳ 16000 K; cf. Fig. 1d), because the population of the
P ii ground level is depleted by ionization. In this case,
the 4s ^3P^∘ level can not be meta-stable any more, and
eventually becomes underpopulated (b < 1). Figs. 5a–5c illustrate this
situation of how the overpopulation progressively turns into underpopulation
in the optically-thin layer as T_ eff increases from late B to early B.
This also explains the reason why Δ becomes positive (i.e., non-LTE
line weakening) at T_ eff≳ 20000 K (cf. Figs. 6d–6f).
§.§ Non-LTE Corrected Abundances
Since non-LTE corrections are appreciably dependent upon [P/H], it is necessary
to adopt a correction corresponding to an adequate [P/H] consistent with
the final non-LTE abundance. Therefore, we proceed as follows.
First, three kinds of non-LTE abundances are derived from W_6043
(A_-1^ N, A_0^ N, and A_+1^ N, corresponding to
[P/H] = -1, 0, and +1, respectively) by using three sets of departure
coefficients prepared for each star (cf. Sect. 5.2).
These three non-LTE abundances are sufficient to express A^ N
by a second-order polynomial in terms of x (≡ [P/H]) as
A^ N = a x^2 + b x + c,
where a, b, and c are known coefficients.
Meanwhile, according to the definition,
A^ N = x + 5.45.
Combining Eqs. (2) and (3), we have
a x^2 + b x + c = x+ 5.45.
Let us denote the solution of Eq. (4) as x_* (which of two solutions
should be adopted is self-evident), from which we obtain
A_*^ N (= x_* + 5.45) and
Δ_* (= A_*^ N - A^ L)
as the final non-LTE abundance and non-LTE correction.
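In practice, this amounts to fitting a parabola through three points and solving a quadratic equation; a minimal sketch (with hypothetical abundances for one star) is given below.
```python
import numpy as np

# non-LTE abundances derived with the three departure-coefficient sets
# (hypothetical values for one star; cf. Sect. 5.2)
x_grid = np.array([-1.0, 0.0, +1.0])        # assumed [P/H] of each grid
A_N = np.array([4.95, 5.10, 5.28])          # A_-1^N, A_0^N, A_+1^N

# Eq. (2): express A^N(x) as a second-order polynomial a x^2 + b x + c
a, b, c = np.polyfit(x_grid, A_N, deg=2)

# Eq. (4): solve a x^2 + (b - 1) x + (c - 5.45) = 0 and keep the physical root
roots = np.roots([a, b - 1.0, c - 5.45])
x_star = min((r.real for r in roots if abs(r.imag) < 1e-8), key=abs)

A_star_N = x_star + 5.45                    # final non-LTE abundance, Eq. (3)
A_L = 5.35                                  # LTE abundance (hypothetical)
Delta_star = A_star_N - A_L                 # final non-LTE correction
print(f"A*_N = {A_star_N:.2f}, Delta* = {Delta_star:+.2f}")
```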
Such derived Δ_* values are plotted against T_ eff in Fig. 7b
(black filled circles), where Δ_-1 (blue), Δ_0 (green),
and Δ_+1 (red) are also overplotted by open symbols for comparison.
Likewise, the final non-LTE abundances (A_*^ N) are shown
against T_ eff in Fig. 7c.
§ PHOSPHORUS ABUNDANCES OF B-TYPE STARS
It is apparent from Fig. 7c that the P abundances of the program stars are divided into two
groups: (i) late B-type chemically peculiar stars which exhibit conspicuous overabundances
systematically increasing from A^ N∼ 6 (T_ eff∼ 10000 K) to
A^ N∼ 7 (T_ eff∼ 16000 K), and (ii) normal early-to-late B-type
stars (10000 ≲ T_ eff≲ 22000 K) which show rather
similar P abundances irrespective of T_ eff. These two groups of stars are
separately discussed below.
§.§ P-rich Peculiar Stars
The former P-enhanced group mostly consists of non-magnetic late B-type
chemically peculiar stars (HgMn stars), which are known to show considerable
overabundance of P. Nevertheless, P-rich stars and those classified as
HgMn peculiar stars are not strictly equivalent but some exceptions do exist
(P-strong stars classified as normal or P-weak HgMn stars).
The tendency observed in Fig. 7c (P is overabundant in HgMn stars and its anomaly
increases with T_ eff) is actually a reconfirmation of the trend shown
in Fig. 4 of Ghazaryan & Alecian (2016), though the extent of overabundance
seen in their figure ([P/H] ∼ +2 at T_ eff∼ 14000 K) appears to
be somewhat overestimated by ∼ +0.5 dex in comparison with Fig. 7c,
which is presumably due to their neglect of non-LTE corrections.
It should be noted, however, that P abundances derived by using the conventional
model atmosphere (1D plane-parallel model with vertically changing physical variables
but homogeneous abundances) are of limited significance in the present case
of HgMn stars, because the chemical composition of P is likely to be stratified in their
atmospheres (i.e., increasing with height) due to the element segregation process
as indicated by recent studies (see, e.g., Catanzaro et al. 2016, Ndiaye et al. 2018,
Alecian & Stift 2019).
§.§ Superficially Normal B-type Stars
We now discuss the photospheric P abundances of normal B-type stars,
which should retain the composition of galactic gas from which they were formed.
As long as LTE abundances are concerned, their A^ L values almost
distribute around the solar abundance (A_⊙ = 5.45) but exhibit a
T_ eff-dependent trend at T_ eff≲ 16000 K
(dA^ L/dT_ eff∼ 0.1 dex/1000 K) as shown in Fig. 4c.
However, since the non-LTE corrections (Δ) also have a systematic gradient with
T_ eff just in the inverse sense (Fig. 7b), the non-LTE abundances (A^ N)
turn out to be almost independent upon T_ eff (Fig. 7c), accomplishing a reasonable
homogeneity. An inspection of Fig. 7c suggests that the demarcation line dividing
the two groups may be set at ∼ 5.7. Then, those 61 stars satisfying the criterion
A^ N < 5.7 are regarded as normal B stars, for which the mean abundance is
calculated as ⟨ A^ N⟩ = 5.20 (standard deviation is σ = 0.18).
Here, we are confronted with a somewhat puzzling problem if this result is compared
with the reference solar abundance of A_⊙ = 5.45. That is, P abundances
of young B-type stars (representing the gas composition at the time of
some ∼ 10^7–10^8 yr ago) are by ∼ 0.2–0.3 dex “lower”
than that of the Sun (formed ∼ 4.6 × 10^9 yr ago).
This may suggest that the P abundance of the galactic gas has “decreased” with time,
which apparently contradicts the standard concept of chemical evolution in the Galaxy
(elements are synthesized and expelled by stars, by which gas is chemically
enriched with the lapse of time).
Given that P abundances were determined based only on one line (P ii 6043),
whether its transition probability is credible or not may be worth checking,
because it directly affects the result. The adopted log gf = +0.442 for
this line (taken from VALD) was obtained by Dr. Kurucz in 2012 based on
the new observed data of P ii levels, by which the previous Kurucz & Bell's
(1995) value of +0.384 was somewhat revised.
This VALD value is also quite consistent with the data (+0.42) of Wiese et al. (1969),
which is also included in the database of NIST (National Institute of Standards
and Technology).
Therefore, it is unlikely that our A^ N results are significantly
underestimated due to the use of an erroneous gf value.
Another possibility is that the actual solar P abundance might be lower
than the currently believed value (A_⊙∼ 5.4–5.5).
This problem is separately focused upon in Appendix A.
As discussed therein (cf. Sect. A5), although a possibility can not be ruled
out that A_⊙ could be somewhat reduced by ≲ 0.1–0.2 dex
(i.e., down to ∼ 5.3), since uncertainties are still involved with
P i line formation calculations (e.g., H i collision rates,
treatment of 3D effect), there is no convincing reason for such a downward
revision of A_⊙.
Moreover, if the solar photospheric P abundance were to be appreciably changed,
another problem of discrepant A_⊙ from A_ meteorite would
newly emerge, because a good agreement has already been accomplished between
these two (Asplund et al. 2009).
Therefore, frankly accepting the result of the analysis, we conclude that
the phosphorus abundances of B-type stars (⟨ A^ N⟩) are
systematically lower than that of the Sun (A_⊙) by ∼ 0.2–0.3 dex.
The cause for such a puzzling discrepancy (apparently contradicting
the scenario of galactic chemical evolution) would be worth further earnest
investigation. Meanwhile, follow-up studies by other researchers are also
desirably awaited for an independent check of this observational finding.
§ SUMMARY AND CONCLUSION
Recently, special attention is being focused on the abundance of phosphorus
in the universe, mainly because of its astrobiological importance as a key
element for life. For example, whether or not a sufficient amount of P
(required for the rise of life) remains on the surface of a planet depends
critically upon the primordial P abundance of the material, from which a star
and the associated planetary system were formed.
Stimulated by such increasing interest, a number of spectroscopic studies
intending to establish stellar P abundances have been published lately.
These P-specific investigations are directed to late-type (FGK-type) stars of
lower mass (around ∼ 1 M_⊙), which are generally long-lived and
thus retain the information of P composition in comparatively earlier time
of the Galaxy (∼ 10^9–10^10 yr ago) when they were formed.
However, little effort has been made so far toward comprehensive P abundance
determinations of young hotter stars, such as B-type stars of
∼ 3–10 M_⊙, which reflect the gas composition of the Galaxy
in the more recent past (several times ∼ 10^7–10^8 yr ago).
This is presumably because P lines of sufficient strength usable as abundance
indicators are scarce. Actually, many of those B-type stars for which
such determinations were done are P-rich chemically peculiar stars (HgMn
stars), while the number of normal B-type stars with known P abundances
is quite limited.
Thus, motivated by the necessity of clarifying the behavior of phosphorus
in hotter stars of higher mass, P abundances of ∼ 80 apparently
bright sharp-lined early-to-late B-type stars on the upper main sequence
were determined from the P ii 6043.084 Å line (the strongest
P ii line in the optical region), with an aim of getting information
on the composition of this element in the young galactic gas in comparison
with the abundance of the older Sun (age of 4.6× 10^9 yr).
A special emphasis was placed upon taking into account the non-LTE effect
based on extensive statistical-equilibrium calculations on P ii atoms,
since LTE was assumed in all the previous P abundance
determinations.
Regarding the procedures of analysis, a spectrum-fitting was first applied
to the wavelength region comprising the P ii 6043 line, and its
equivalent width (W_6043) was then derived from the fitting-based
abundance solution for each star. Finally, non-LTE abundance/correction
as well as possible error were evaluated from such established W_6043.
An inspection of the resulting P abundances revealed that the program stars
are divided into two groups.
The first group is those showing a considerable overabundance of P
(supersolar by ∼ 0.5–1.5 dex), the extent of which progressively
increases with T_ eff. These P-rich stars are observed
at T_ eff≲ 16000 K (late B-type) and mostly belong to
chemically peculiar stars of HgMn-type.
The second group consists of normal B-type stars, whose P abundances are
comparatively homogeneous without such a prominent P anomaly as in the first
group. However, different trends are observed between the LTE and non-LTE cases.
(i) Though the LTE abundances tend to distribute around the solar value,
they show a slight gradient (i.e., increasing with a decrease in T_ eff
at T_ eff≳ 16000 K).
(ii) Meanwhile, this systematic trend disappears in the non-LTE
abundances which are satisfactorily uniform, because of the cancellation
due to the T_ eff-dependent negative non-LTE corrections (amounting
to ∼ 0.1–0.5 dex).
This T_ eff-independent nature seen in the non-LTE abundances
of normal B stars suggests that they represent the P composition of
the galactic gas at the time when these young stars were born (some
∼ 10^7–10^8 yr ago).
One puzzling problem is, however, that these non-LTE abundances (around ∼ 5.2)
are appreciably lower than the P abundance of the Sun (formed ∼ 4.6 × 10^9 yr
ago) by ∼ 0.2–0.3 dex, which means that the galactic gas composition of P
has decreased with time in contradiction to the concept of chemical evolution.
Although other possibilities (e.g., error in the adopted gf value of
P ii 6043 line?, inadequacy in the current solar P abundance?)
were also examined, they do not seem to be so likely.
It may thus be concluded that the discrepancy of P abundance between the Sun
and B-type stars really exists, the cause of which should be further investigated.
This investigation has made use of the SIMBAD database, operated by CDS,
Strasbourg, France, and the VALD database operated at Uppsala University,
the Institute of Astronomy RAS in Moscow, and the University of Vienna.
Alecian, G., & Stift, M. J. 2019, MNRAS, 482, 4519
Allen, C. S. 1998, Abundance analysis of normal and mercury-manganese type late-B stars from optical spectra (PhD Thesis: University College London) [https://discovery.ucl.ac.uk/id/eprint/10097529/]
Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
Asplund, M., Grevesse, N., & Sauval, A. J. 2005, in Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, ASP Conf. Ser., 336, 25
Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
Bekki, K., & Tsujimoto, T. 2024, ApJL, 967, L1
Biémont, E., Martin, F., Quinet, P., & Zeippen, C. J. 1994, A&A, 283, 339
Caffau, E., Steffen, M., Sbordone, L., Ludwig, H.-G., & Bonifacio, P. 2007, A&A, 473, L9
Catanzaro, G., Giarrusso, M., Leone, F., Munari, M., Scalia, C., Sparacello, E., & Scuderi, S. 2016, MNRAS, 460, 1999
Cayrel, R. 1988, in The Impact of Very High S/N Spectroscopy on Stellar Physics, Proc. IAU Symp. 132, ed. G. Cayrel de Strobel, M. Spite (Kluwer, Dordrecht), p. 345
Fossati, L., Ryabchikova, T., Bagnulo, S., Alecian, E., Grunhut, J., Kochukhov, O., & Wade, G. 2009, A&A, 503, 945
Ghazaryan, S., & Alecian, G. 2016, MNRAS, 460, 1922
Golriz, S. S., & Landstreet, J. D. 2017, MNRAS, 466, 1597
Hinkel, N. R., Hartnett, H. E., & Young, P. 2020, ApJL, 900, L38
Hoffleit, D., & Jaschek, C. 1991, The Bright Star Catalogue, 5th revised edition (New Haven, Conn.: Yale University Observatory)
Kurucz, R. L. 1993, Kurucz CD-ROM, No. 13 (Harvard-Smithsonian Center for Astrophysics)
Kurucz, R. L., & Bell, B. 1995, Kurucz CD-ROM, No. 23 (Harvard-Smithsonian Center for Astrophysics)
Lejeune, T., & Schaerer, D. 2001, A&A, 366, 538
Lodders, K., Palme, H., & Gail, H.-P. 2009, Solar System, Landolt-Börnstein - Group VI Astronomy and Astrophysics, Volume 4B, p. 712 (Springer, Berlin)
Maas, Z. G., Hawkins, K., Hinkel, N. R., Cargile, P., Janowiecki, S., & Nelson, T. 2022, AJ, 164, 61
Moore, C. E. 1959, A Multiplet Table of Astrophysical Interest: NBS Technical Note No. 36, Reprinted Version of the 1945 edition (U.S. Department of Commerce, Washington)
Nahar, S. N., Hernández, E. M., Hernández, L., et al. 2017, JQSRT, 187, 215
Ndiaye, M. L., LeBlanc, F., & Khalack, V. 2018, MNRAS, 477, 3390
Niemczura, E., Morel, T., & Aerts, C. 2009, A&A, 506, 213
Peters, G. J., & Aller, L. H. 1970, ApJ, 159, 525
Peters, G. J., & Polidan, R. S. 1985, in Proc. IAU Symp. 111, Calibration of Fundamental Stellar Quantities (Reidel, Dordrecht), p. 417
Pintado, O. I., & Adelman, S. J. 1993, MNRAS, 264, 63
Przybilla, N., Butler, K., Becker, S. R., & Kudritzki, R. P. 2006, A&A, 445, 1099
Ryabchikova, T., Piskunov, N., Kurucz, R. L., Stempels, H. C., Heiter, U., Pakhomov, Yu., & Barklem, P. S. 2015, Phys. Scr., 90, 054005
Sadakane, K., & Nishimura, M. 2022, PASJ, 74, 298
Steenbock, W., & Holweger, H. 1984, A&A, 130, 319
Takeda, Y. 1991, A&A, 242, 455
Takeda, Y. 2023, Acta Astron., 73, 35
Takeda, Y., Kambe, E., Sadakane, K., & Masuda, S. 2010, PASJ, 62, 1239 (Paper I)
Takeda, Y., Kawanomoto, S., & Ohishi, N. 2014, PASJ, 66, 23 (Paper II)
Tayal, S. S. 2004, J. Phys. B: At. Mol. Opt. Phys., 37, 3593
Wiese, W. L., Smith, M. W., & Miles, B. M. 1969, Atomic Transition Probabilities, Vol. II: Sodium through Calcium - A Critical Data Compilation, Nat. Stand. Ref. Data Ser., NSRDS-NBS 22 (U.S. Government Printing Office, Washington, D.C.)
Appendix A: On the Solar Photospheric Abundance of Phosphorus
§.§ A1. Literature Values of Solar P Abundance
Regarding the phosphorus abundance in the solar photosphere (usually
derived from P i lines in the Y- or H-band of the near-IR region),
not a few spectroscopic studies have been published over the past half century,
in which rather similar A_⊙ values of ≃ 5.4–5.5 are reported.
See Table 2 of Caffau et al. (2007) for a summary of 8 values (5.43, 5.45, 5.45,
5.45, 5.49, 5.45, 5.36, and 5.46) published before 2007. Thereafter,
Asplund et al. (2009) presented a revised value of A_⊙ = 5.41.
The Anders & Grevesse's (1989) value of A_⊙ = 5.45 adopted in
this paper (cf. footnote 3) is almost the same as the mean of these values.
However, all these determinations were done with the assumption of LTE,
given the lack of information regarding the non-LTE effect on the P i
lines so far. Therefore, non-LTE calculations for neutral phosphorus
were newly carried out in order to elucidate how and whether the non-LTE
corrections are important in P abundance determinations for the Sun.
In addition, how this effect would depend upon the atmospheric parameters
is also briefly examined in scope of application to late-type stars in general.
§.§ A2. Atomic Model
The adopted model atom of P i comprises 56 terms (up to 3s^2 3p^2 5d ^4D
at 79864 cm^-1) and 761 radiative transitions, which was constructed in a similar
manner to the case of P ii described in Sect. 5. The contribution of P ii
was taken into account in the number conservation of total P atoms.
Regarding the photoionization cross section, Tayal's (2004) theoretically calculated
data were adopted for the lowest three terms (^4S^∘, ^2D^∘,
^2P^∘; read from Fig. 1–3, Fig. 9, and Fig. 10 of his paper), while the
hydrogenic approximation was assumed for the other terms.
As to the collisional rates, the recipe described in Sect. 3.1.3 of Takeda (1991)
was basically followed. The collision rates due to neutral hydrogen atoms
(which are important in late-type stars but subject to large uncertainties) were
computed by Steenbock & Holweger's (1984) formula (based on the classical Drawin's
cross section), which can be further multiplied by a correction factor (k)
if necessary. Although we adopt k=1 (use of classical formula unchanged)
as the standard choice, a special case of k=10^-3 (considerably reduced
to a negligible level) was also tried in order to see its importance.
§.§ A3. Non-LTE Effect on P I 10581 Line in FGK-type Stars
First, the calculations were done for 10 solar-metallicity models (ATLAS9 models
by Kurucz 1993) resulting from combinations of (T_ eff = 4500, 5000, 5500,
6000, 6500 K) and (log g = 2.0, 4.0), while assuming ξ = 2 km s^-1
and [P/H] = 0 (A = 5.45).
The runs of l_0^ NLTE/l_0^ LTE and S_ L/B with depth
for the transition corresponding to P i 10581.58 line (representative
P i line in the near-IR region) are shown in Fig. 8, while Fig. 9 displays
how W_10581 (equivalent widths calculated in LTE and non-LTE; upper panel (a))
as well as Δ_10581 (non-LTE correction; lower panel (b))
depend upon T_ eff and log g.
The following characteristics are read from these figures.
* The line is generally intensified by the non-LTE effect (W^ N > W^ L
and Δ < 0; cf. Fig. 9), because both l_0^ NLTE/l_0^ LTE > 1
and S_ L/B < 1 (cf. Fig. 8) act in the direction of line strengthening.
* This non-LTE effect becomes progressively larger with an increase in T_ eff
and with a decrease in log g (Fig. 9b).
* How the neutral hydrogen collision is treated has an appreciable impact on the non-LTE
correction. If the standard value of H i collision is practically neglected by
a drastic reduction (k=10^-3), |Δ| increases by ∼ 0.1–0.2 dex (Fig. 9b).
* In summary, non-LTE corrections had better be taken into account in P abundance
determinations from P i lines for FGK stars, especially for those of
comparatively higher T_ eff or lower log g, for which significant
negative corrections amounting to several tenths dex may be expected.
§.§ A4. Reanalysis of Solar P I Lines
Then, statistical equilibrium calculation was done for Kurucz's (1993) ATLAS9 solar
model atmosphere (T_ eff = 5780 K, log g = 4.44, solar metallicity) with [P/H] = 0.0,
in order to reanalyze the solar P i lines by taking into account the non-LTE effect.
The basic data for the solar equivalent widths and log gf values of 15 P i
lines were taken from Table 3 of Biémont et al. (1994), which are the disk-center
values measured from Jungfraujoch Atlas (W_λ^ d.c.) and transition
probabilities based on their refined calculations. Regarding the solar microturbulence,
a reasonable value of ξ_⊙ = 1 km s^-1 was adopted as often assumed.
The resulting P abundances and non-LTE corrections are summarized in Table 2,
from which the following conclusions can be drawn.
The extents of (negative) non-LTE corrections are only a few hundredths dex (k=1 case)
and thus not particularly important, though |Δ| may somewhat increase further by
∼ 0.1 if H i collision is neglected (k = 10^-3). This is mainly due to the
high-gravity nature of the Sun (log g = 4.44) as a dwarf, since |Δ| appreciably
decreases with an increase in log g (cf. Fig. 9b).[Actually, |Δ|
in this case (disk-center intensity spectrum) is somewhat smaller than that obtained
by an interpolation/extrapolation of Fig. 9b (calculated for the flux spectrum),
because the non-LTE effect becomes less significant for the former case of
deeper-forming lines than the latter.]
Therefore, the speculation addressed by Asplund et al. (2009) “departures from LTE
are not expected to be significant for P” (based on an analogy with the S i
case) may be regarded as reasonable.
The mean P abundances averaged over 15 lines are ⟨ A^ L⟩ = 5.43
(LTE abundance), ⟨ A^ N_k=1⟩ = 5.40 (non-LTE abundance for k=1),
and ⟨ A^ N_k=10^-3⟩ = 5.31 (non-LTE abundance for k=10^-3),
where the standard deviation is σ = 0.11 for all the three cases.
This ⟨ A^ L⟩ (5.43) is reasonably consistent with Biémont et al.'s
(1994) result of 5.45 (obtained by using the same lines with the same W_λ^ d.c.
and log gf). According to the standard non-LTE abundance of 5.40
(⟨ A^ N_k=1⟩) obtained here, we may state that the previously
reported results of solar P abundance (cf. Sect. A1) can not be significantly
revised even when the non-LTE correction (a few hundredths dex) is taken into account.
§.§ A5. Is Downward Revision of A_⊙ Possible?
In view of discrepancy between the P abundances of normal B-type stars and that of
the Sun (the former being systematically lower than the latter by ∼ 0.2–0.3 dex)
discussed in Sect. 7.2, some discussion about whether the actual A_⊙
could be lower than the currently accepted value may be in order.
Neglecting the H i collision would further reduce A^ N by ∼ 0.1 dex
down to ∼ 5.3. However, there is no justification for such
a drastic reduction of the classical rates for rather high-excitation P i lines
under question.[Admittedly, such a situation does exist for the case of other lines.
For example, it is known that neutral-hydrogen collision rates calculated by using the
classical cross section are considerably overestimated for the resonance lines of
alkali elements (e.g., Li i 6708 or K i 7699).] Therefore, much can not be
said about this possibility until more information of H i collision
rates for P i is obtained (preferably based on up-to-date quantum-mechanical
calculations).
Alternatively, it was once considered that inclusion of the 3D effect
might reduce A_⊙ by ≲ 0.1 dex, since Asplund et al. (2005) derived an
appreciably lower value of A_⊙ = 5.36 by including the 3D-effect.
However, such a low-scale result was not confirmed by Caffau et al.'s (2007)
new 3D analysis which resulted in A_⊙ = 5.46 (± 0.04), showing that
the 3D correction is insignificant (only a few hundredths dex) for the solar
P abundance determination. Thus, this possibility is not very prospective, either.
In any case, although a possibility of solar P abundance being reduced by
≲ 0.1–0.2 (i.e., down to ∼ 5.3) can not be excluded,
the solar P abundance would then be discrepant from that of meteorite.
That is, since the old A_ meteorite(P) value of 5.57 ± 0.04 derived by
Anders & Grevesse (1989) was revised by Asplund et al. (2009) as
5.43 ± 0.04 (based on the data of CI carbonaceous chondrites taken from
Lodders et al. 2009), a good agreement between A_ meteorite and
A_⊙ is now accomplished. This consistency would break down
if A_⊙ were appreciably changed.
|
http://arxiv.org/abs/2409.02555v1 | 20240904092113 | Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation | [
"Kangkai Zhang",
"Shiming Ge",
"Ruixin Shi",
"Dan Zeng"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM"
] |
IEEE Transactions on Circuits and Systems for Video Technology
Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
Kangkai Zhang,
Shiming Ge, Senior Member, IEEE,
Ruixin Shi,
and Dan Zeng, Senior Member, IEEE
Kangkai Zhang is with the Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100084, China, and with Baidu Inc., Beijing 100080, China. Email: [email protected].
Shiming Ge and Ruixin Shi are with the Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100084, China, and with School of Cyber Security at University of Chinese Academy of Sciences, Beijing 100049, China. Email: {geshiming, shiruixin}@iie.ac.cn.
Dan Zeng is with the Department of Communication Engineering, Shanghai
University, Shanghai 200040, China. E-mail: [email protected].
Shiming Ge is the corresponding author. Email: [email protected].
§ ABSTRACT
Recognizing objects in low-resolution images is a challenging task due to the lack of informative details. Recent studies have shown that knowledge distillation approaches can effectively transfer knowledge from a high-resolution teacher model to a low-resolution student model by aligning cross-resolution representations. However, these approaches still face limitations in adapting to the situation where the recognized objects exhibit significant representation discrepancies between training and testing images. In this study, we propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition. Our approach enables the student model to mimic the behavior of a well-trained teacher model which delivers high accuracy in identifying high-resolution objects. To extract sufficient knowledge, the student learning is supervised with contrastive relational distillation loss, which preserves the similarities in various relational structures in contrastive representation space. In this manner, the capability of recovering missing details of familiar low-resolution objects can be effectively enhanced, leading to a better knowledge transfer. Extensive experiments on low-resolution object classification and low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach.
Low-resolution face recognition, low-resolution object classification, knowledge distillation, domain adaptation.
§ INTRODUCTION
With the rapid development of deep learning, deep models have demonstrated remarkable success in various visual recognition applications <cit.>. For example, EfficientNet <cit.> delivers a top-1 classification accuracy of 88.61% on ImageNet <cit.> in large-scale visual recognition, Groupface <cit.> gives an extremely high accuracy of 99.85% on LFW <cit.> in face verification, and cross-domain methods deliver impressive performance in gait recognition <cit.> and micro-expression recognition <cit.>. These achievements can be attributed to the ability of deep models with massive parameters to extract rich knowledge from extensive high-quality datasets. However, these models may suffer a sharp drop in accuracy when applied directly in practical scenarios due to domain distribution differences, e.g., when the identified objects lack informative details due to occlusion <cit.> or low resolution <cit.>. Meanwhile, it is difficult to collect sufficient low-resolution training data in practical scenarios. Thus, it is necessary to explore a feasible solution that can address a key challenge in low-resolution object recognition: how to effectively transfer knowledge from the high-resolution source domain to the low-resolution target domain with minimal accuracy loss?
As shown in Fig. <ref>, in spite of the loss of many informative details, low-resolution objects can still be well recognized by subjects who are familiar with the corresponding high-resolution objects. Recent works <cit.> have shown that it is feasible to improve the recognition capacity of a model by knowledge transfer from the high-resolution domain to the low-resolution one. According to the level of this cross-resolution knowledge transfer, current approaches can be mainly grouped into sample-level and relation-level approaches.
For sample-level knowledge transfer, Wang <cit.> first proposed to use the corresponding high-resolution images to facilitate the model to extract features from low-resolution images. Subsequently, by learning low-resolution face representations and mimicking the adapted high-resolution knowledge, a light-weight student model can be constructed with high efficiency and promising accuracy in recognizing low-resolution faces <cit.>. However, sample-level knowledge is limited and insufficient to help the model extract sufficiently discriminative features, especially for cross-resolution knowledge transfer. Therefore, researchers have explored relation-level knowledge transfer. Some recent works have shown that transferring structural similarity instead of individual representations is beneficial to student learning <cit.>. Ge <cit.> proposed a hybrid order relational distillation to distill richer knowledge from pretrained high-resolution models to facilitate low-resolution object recognition. In general, these approaches have achieved impressive performance. However, they all use low-order relation knowledge to model the mutual information, which may ignore complex high-order inter-sample interdependencies, e.g., contrastive relations, and lead to insufficient knowledge transfer for object recognition.
Recently, contrastive learning approaches <cit.> have been widely used to learn feature representations from data samples by comparing the data with the positive and negative samples in the feature space. These approaches only need to learn discrimination in the feature space. Thus, they will not pay too much attention to pixel details, but can focus on more abstract semantic information, leading to simpler processing than pixel-level reconstruction <cit.>. Recent contrastive learning is combined with knowledge distillation, and these contrastive-based distillation approaches <cit.> aim to capture the correlations and higher-order output dependencies for each sample. Typically, contrastive-based distillation approaches can facilitate cross-resolution knowledge transfer, since they essentially preserve the inter-sample relations which usually are more valuable than the sample representations themselves, especially in visual recognition tasks. The key is the relation modeling for effective knowledge transfer.
To transfer high-order dependency within the representation in both relation estimation and knowledge distillation, we propose a teacher-student learning approach for low-resolution object recognition via cross-resolution relational contrastive knowledge distillation with two streams, as shown in Fig. <ref>. The teacher stream is initialized with a complex pretrained model for high-resolution recognition, and the student stream trains a compact model with the help of structural relational knowledge between different resolution samples. By making the high-order relation between low-resolution samples and other high-resolution samples mimic the high-order relation between the corresponding high-resolution samples and other high-resolution samples, the student can pay more attention to semantic information instead of pixel details, and then learn the distinction between low-resolution images in the feature space to improve low-resolution object recognition.
Our main contributions are three folds: 1) we propose a cross-resolution relational contrastive distillation approach that is able to distill richer structural knowledge from pretrained high-resolution models to facilitate low-resolution object recognition, 2) we propose a relational contrastive module to extract relational knowledge in contrastive representation space, and 3) we conduct extensive experiments to show the state-of-the-art performance and good adaptability of our approach in low-resolution object recognition.
§ RELATED WORKS
§.§ Low-Resolution Object Recognition
The recognition of low-resolution visual objects is attracting increasing interest due to its widespread applications in long-distance surveillance scenarios <cit.> and blurry image analysis <cit.>. Its major challenge is that the informative identity details of the identified objects are seriously missing. In particular, low-resolution objects have less high-variance information and their textures can be visually indistinguishable. Recently, an effective way to address this problem is to utilize high-resolution object information for learning improved recognition models. Existing approaches can be categorized into reconstruction-based and prediction-based categories. Reconstruction-based approaches apply super-resolution methods to the low-resolution objects before recognition. Grm <cit.> proposed a cascaded super-resolution network, along with an ensemble of face recognition models as identity priors. Chan <cit.> obtained effective super-resolution by using the rich and diverse prior knowledge in a pretrained GAN. Kong <cit.> proposed a resolution invariant model (RIM) to recognize low-resolution faces from CCTV cameras at different resolutions. RIM uses a tri-path GAN to jointly learn a face hallucination sub-net and a heterogeneous recognition sub-net. Unfortunately, such approaches require additional computation and the recovered details may not always be beneficial to recognition.
By contrast, prediction-based approaches directly recognize low-resolution objects by knowledge transfer and it is essential to sufficiently represent the domain knowledge and transfer them effectively. On the one hand, a direct approach is transferring the knowledge from high-resolution objects, in which the feature vector distance matters. Soma <cit.> proposed to map the low-resolution images to Euclidean space, and then approximate the corresponding high-resolution ones through the distance dimension. Zangeneh <cit.> proposed a new coupled mapping method consisting of two DCNN branches for mapping high and low-resolution face images to non-linear transformed public space. Zha <cit.> proposed an end-to-end transferable coupling network in high-resolution and low-resolution domains respectively, and introduced a transferable triple loss to narrow cross-resolution positive pairs and separate negative pairs, which improves the recognition performance for low-resolution objects.
It has been proven feasible to use teacher-student learning to transfer knowledge for facilitating visual applications <cit.>. Such knowledge distillation approaches are mainly based on response, feature and relation. Response-based distillation approaches <cit.> aim to directly imitate the neural response of the last output layer of the teacher model, while feature-based distillation approaches <cit.> mimic the intermediate representations of the teacher model to improve the learning of the student model by matching original or transformed features. Huang <cit.> proposed to transfer rich privileged information from a wide and complicated teacher network to a thin and simplified student one. Unlike the above two types of approaches using sample-level outputs of specific layers, relation-level approaches <cit.> further explore the relation between data samples, and have shown that transferring structural similarity between instances rather than individual instance representations is beneficial for student learning. Since semantically similar inputs produce similar activations, Tung <cit.> used pairwise activation similarities in each input mini-batch to supervise the student learning, and Park <cit.> proposed to transfer explicit sample relations from a pretrained teacher. In general, these approaches based on responses or low-order relations between samples are often insufficient for cross-resolution knowledge transfer. To address that, we propose a teacher-student learning approach to facilitate low-resolution object recognition via cross-resolution relational contrastive distillation.
§.§ Contrastive Learning
Contrastive learning is regarded as a very important part of self-supervised learning, which builds representations by learning to encode what makes two things similar or different. Recent works <cit.> have widely used it to learn the feature representations of samples by comparing the data with positive and negative samples in the feature space. Contrastive losses such as NCE <cit.> and infoNCE <cit.> measure the similarities of data samples in a deep representation space, and learn representations by contrasting positive and negative representation pairs. One of the major difficulties in contrastive learning is how to construct the positive and negative samples. Deep InfoMAX <cit.> takes local features of training images and of different images as positive and negative samples, respectively. Instance Discrimination <cit.> learns to contrast the current embedding with previous embeddings from an online memory bank. MoCo <cit.> and SimCLR <cit.> apply augmentations to training samples and require the network to match the original and transformed images through a contrastive loss. These methods only need to learn discrimination in the feature space, thus avoiding focusing too much on pixel details and instead attending to more abstract semantic information.
For knowledge distillation, Tian <cit.> proposed to combine contrastive learning with knowledge distillation, and Xu <cit.> formulated the contrastive task as a self-supervised pretext task to facilitate the extraction of richer knowledge from the teacher to the student. They show that incorporating a contrastive learning loss into knowledge distillation can help the student learn higher-order structural knowledge, which can promote cross-domain knowledge transfer. However, these methods are sample-based, and the mutual relations they capture are still insufficient. Thus, it is necessary to explore more effective forms to model the mutual relations of deep representations instead of the representations themselves. Zheng <cit.> proposed relation knowledge distillation by linking cluster-based and contrastive-based self-supervised learning. However, such methods often suffer from poor generalization. To address that, we take into account higher-order relational information between the samples across different image resolutions.
§ THE APPROACH
The objective of our cross-resolution relational contrastive distillation (CRRCD) is to sufficiently distill high-order relational knowledge from a pretrained teacher for high-resolution recognition and effectively transfer it to learn a compact student for low-resolution recognition. Toward this end, we build the training instances by taking massive pairs of high-resolution images and corresponding low-resolution images in a self-supervised manner, and utilize vectors to define the representation relations. A feature relation module is utilized to estimate the teacher relation vector in teacher space and the student relation vector in cross-resolution space, respectively. The module is a simple learnable network that consists of two linear layers and a nonlinear activation layer. It is employed to estimate the relation vector between sample representations. Additionally, the cross-resolution relation vector is supervised by its corresponding vector in teacher space. In this manner, relation estimation and representation learning are performed in a unified way. In general, the student is trained on images from the source domain but deployed in the target domain, and there is often a large representation discrepancy between these two domains. Therefore, our relation modeling manner needs to address cross-resolution knowledge transfer with good adaptability.
§.§ Problem Formulation
We denote the training set as D={(x^h_i, x^l_i, y_i)}_i=1^|D|, where x^h_i represents the ith high-resolution sample with class label y_i∈{1,2,...,c} and x^l_i is the corresponding low-resolution sample. Here c is the number of classes. Given a teacher network ϕ^t with parameters 𝒲^t and a student network ϕ^s with parameters 𝒲^s, we denote the representation of a sample pair (x^h, x^l) produced by the two networks as e^t=ϕ^t(𝒲^t;x^h) and e^s=ϕ^s(𝒲^s;x^l), respectively. Let (x^h_i,x^l_i) and (x^h_j,x^l_j) be two sample pairs randomly chosen from the training set. The relation between x^h_i and x^h_j in teacher space can be modeled as v^t_i,j, a relation vector produced by the feature relation module 𝔽 that takes e^t_i and e^t_j as inputs. Similarly, we denote v^t,s_i,j as the relation vector across the teacher and student spaces, where the inputs of the feature relation module are e^t_i and e^s_j, respectively. The specific form is v^t,s = φ(σ(φ_iϕ^t(x_i)-φ_jϕ^s(x_j))), where φ and σ denote the linear transformation and the ReLU function, respectively. We hope that the cross-space relation v^t,s_i,j can be consistent with v^t_i,j with the help of the relational contrastive distillation loss.
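A minimal PyTorch sketch of such a feature relation module and its two instantiations (teacher space and cross-resolution space) is given below; the layer sizes, module names and embedding dimensions are our own assumptions for illustration, since only the two-linear-layer structure with a nonlinear activation is specified above.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureRelationModule(nn.Module):
    """Estimate a relation vector v_ij from two sample embeddings."""
    def __init__(self, embed_dim=512, rel_dim=128):
        super().__init__()
        self.phi_i = nn.Linear(embed_dim, rel_dim)  # transform of the first embedding
        self.phi_j = nn.Linear(embed_dim, rel_dim)  # transform of the second embedding
        self.phi = nn.Linear(rel_dim, rel_dim)      # output transform

    def forward(self, e_i, e_j):
        # v = phi( ReLU( phi_i(e_i) - phi_j(e_j) ) )
        return self.phi(F.relu(self.phi_i(e_i) - self.phi_j(e_j)))

# one module for the teacher space, one for the cross-resolution space
rel_t, rel_ts = FeatureRelationModule(), FeatureRelationModule()
e_t_i, e_t_j = torch.randn(8, 512), torch.randn(8, 512)  # teacher embeddings of HR samples
e_s_j = torch.randn(8, 512)                              # student embeddings of LR samples
v_t = rel_t(e_t_i, e_t_j)     # v^t_{i,j}
v_ts = rel_ts(e_t_i, e_s_j)   # v^{t,s}_{i,j}
```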
§.§ Cross-Resolution Relational Contrastive Distillation
Let x represent the input, we denote its empirical data distribution as p(x). For the conditional marginal distributions p(v^t|x), p(v^t,s|x), the sampling procedure is described as:
x_i^h, x_j^h, x_i^l, x_j^l∼ p(x)
v_i, j^t=F^t(ϕ^t(𝒲^t;x_i^h), ϕ^t(𝒲^t;x_j^h))
v_i, j^t, s=F^t, s(ϕ^t(𝒲^t;x_i^h), ϕ^s(𝒲^s;x_j^l)),
where F^t and F^t,s are two learnable networks for computing the relation vectors. v_i, j^t and v_i, j^t, s represent the relationship between the i-th and j-th samples in teacher space and cross-resolution space, respectively. Intuitively, by maximizing Kullback-Leibler (KL) divergence between the joint distribution p(v^t,v^t,s|x) and the product of marginal distributions p(v^t|x)p(v^t,s|x), we can maximize the mutual information (MI) 𝕀 between student and teacher representations <cit.>:
𝕀(v^t,v^t,s)=𝔼_p(v^t,v^t,s|x)logp(v^t,v^t,s|x)/p(v^t|x) p(v^t,s|x).
MI lower bound. To setup an appropriate loss to maximize the mutual information, we define a distribution q with latent variable b which indicates whether the relation tuple (v_i,j^t, v_i,j^t,s) is drawn from the joint distribution (b=1) or the product of marginal distributions (b=0):
q(v^t,v^t,s| b=1)=p(v^t,v^t,s)
q(v^t,v^t,s| b=0)=p(v^t) p(v^t,s).
Here, b=1 means v_i,j^t and v_i,j^t,s are computed based on the same input pair, and b=0 means v_i,j^t and v_i,j^t,s are independently selected. Now, suppose in our data, we give 1 relevant relation pair (b=1) with n irrelevant relation pairs (b=0). Then the priors on the latent b are q(b=1)=1/(n+1) and q(b=0)=n/(n+1). By combining the priors with the Bayes’ rule, the posterior for b=1 is given by:
q(b=1 |v^t,v^t,s)=p(v^t,v^t,s)/p(v^t,v^t,s)+n p(v^t) p(v^t,s).
Then the mutual information is defined as:
log q(b=1 |v^t,v^t,s) ≤-logn+logp(v^t,v^t,s)/p(v^t) p(v^t,s).
Taking the expectation on both sides, Eq. (<ref>) is rewritten as:
𝕀(v^t,v^t,s) ≥logn+
𝔼_q(v^t,v^t,s| b=1)log q(b=1 |v^t,v^t,s),
where 𝕀(v^t,v^t,s) is the mutual information between the relation distributions of the teacher and student embedding. Thus maximizing 𝔼_q(v^t,v^t,s| b=1)log q(b=1 |v^t,v^t,s) the parameters of the student network will increase a lower bound on mutual information.
Relation contrastive loss. Actually, we maximize the log likelihood of the data under the model to estimate true distribution, which is defined as:
ℒ_critic(h) =𝔼_q(v^t,v^t,s| b=1)[log h(v^t,v^t,s)]
+n 𝔼_q(v^t,v^t,s| b=0)[log (1-h(v^t,v^t,s))].
h^* = arg max_h ℒ_critic(h) ◃ optimal critic.
We term h the critic since the representations are learned to optimize the critic’s score.
Considering the bound in Eq. (<ref>) and that 𝔼_q(v^t,v^t,s| b=1)[log h(v^t,v^t,s)] is
non-positive, we weaken the bound in Eq. (<ref>),
We may choose to represent h with any family of functions that satisfy h:{v^t,v^t,s}→[0,1]. In practice,
h(v^t, v^t,s) = e^h_1(v^t)· h_2(v^t,s) / τ / (e^h_1(v^t)· h_2(v^t,s) / τ + n/m),
where n is the number of negatives, m is the dataset cardinality and τ is a temperature for adjusting concentration level. h_1 and h_2 first perform the linear transformation on relations, then normalize the transformed relations with l_2 norm.
In our approach, the inputs for the function h are teacher-space relation v^t and cross-space relations v^t,s. We aim to maximize the mutual information, which is equivalent to minimizing the relation contrastive loss ℒ_rcd:
ℒ_rcd =-∑_q(b=1)log h(v^t, v^t, s)
-n∑_q(b=0)log[1-h(v^t, v^t, s)],
where {(v^t, v^t, s) | b=1} acts as positive pairs while {(v^t, v^t, s) | b=0} acts as negative pairs.
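A sketch of how the critic h and the relation contrastive loss ℒ_rcd could be computed for a mini-batch of relation vectors is given below; treating the mismatched relation pairs within the batch as the n negatives is our own simplification, and the values of τ, n and m are placeholders rather than the settings used in the experiments.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def critic_h(v_t, v_ts, h1, h2, tau=0.07, n=4096, m=50000):
    # h(v^t, v^{t,s}) = exp(h1(v^t).h2(v^{t,s})/tau) / (exp(.) + n/m)
    z_t = F.normalize(h1(v_t), dim=1)     # linear transform + l2 normalization
    z_ts = F.normalize(h2(v_ts), dim=1)
    e = torch.exp(z_t @ z_ts.t() / tau)   # [B, B] pairwise scores
    return e / (e + n / m)

def rcd_loss(v_t, v_ts, h1, h2, tau=0.07, n=4096, m=50000):
    # L_rcd = -sum_{b=1} log h - n * E_{b=0}[log(1 - h)]  (batch approximation)
    h = critic_h(v_t, v_ts, h1, h2, tau, n, m)
    B = h.size(0)
    pos = torch.diagonal(h)                    # relation pairs from the same inputs (b=1)
    neg = h[~torch.eye(B, dtype=torch.bool)]   # mismatched relation pairs as negatives (b=0)
    return -(torch.log(pos + 1e-8).sum() + n * torch.log(1.0 - neg + 1e-8).mean())

# usage with relation vectors of dimension 128 (hypothetical sizes)
h1, h2 = nn.Linear(128, 128), nn.Linear(128, 128)
v_t, v_ts = torch.randn(8, 128), torch.randn(8, 128)
loss_rcd = rcd_loss(v_t, v_ts, h1, h2)
```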
To achieve superior performance and conduct fair comparisons, we also incorporate the naive knowledge distillation loss ℒ_kd along with our relation contrastive loss. Given the presoftmax logits z^t for teacher and z^s for student, the naive knowledge distillation loss can be expressed as
ℒ_kd=ρ^2ℋ(σ(z^t / ρ), σ(z^s / ρ)),
where ρ is the temperature, ℋ refers to the cross-entropy
and σ is softmax function. The complete objective is:
ℒ=ℒ_cls+αℒ_kd+βℒ_rcd,
where ℒ_cls represents the arcface loss for face recognition, or cross-entropy loss for object classification. We experimentally determine a best combination of the three loss terms, and set α=0.5 and β=2 in our approach.
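Putting the terms together, the overall objective could be assembled as in the following sketch; the cross-entropy branch stands in for ℒ_cls (the arcface loss for faces or the cross-entropy loss for object classification), ℒ_rcd is assumed to be computed as in the previous sketch, and ρ is set to the distillation temperature of 4 used in the experiments.
```python
import torch
import torch.nn.functional as F

def kd_loss(z_s, z_t, rho=4.0):
    # naive KD term: rho^2 * H(softmax(z_t/rho), softmax(z_s/rho))
    p_t = F.softmax(z_t / rho, dim=1)
    log_p_s = F.log_softmax(z_s / rho, dim=1)
    return -(p_t * log_p_s).sum(dim=1).mean() * rho ** 2

def total_loss(z_s, z_t, labels, loss_rcd, alpha=0.5, beta=2.0):
    loss_cls = F.cross_entropy(z_s, labels)  # stand-in for the classification branch
    return loss_cls + alpha * kd_loss(z_s, z_t) + beta * loss_rcd

# usage with dummy logits over 10 classes and a precomputed L_rcd value
z_s, z_t = torch.randn(8, 10), torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = total_loss(z_s, z_t, labels, loss_rcd=torch.tensor(1.3))
```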
Relationships to similar distillation approaches. Like CRD <cit.> and CRCD <cit.>, our CRRCD is also based on contrastive learning and has a certain similarity in analysis such as a lower bound on the mutual information. Different from them, our approach is designed for cross-quality knowledge transfer in low-resolution recognition task, and the modeling granularity of relational knowledge between samples is finer and the order is higher. Specifically, compared with CRD, CRRCD takes into account higher-order information between samples in different resolution data and requires less negative samples for training. The main differences from CRCD include: 1) CRRCD focuses on the relation between sample representations, while CRCD calculates the relation between sample gradients which may affect the performance of student model detrimentally on low-resolution recognition and increase the cost, 2) CRRCD facilitates cross-resolution knowledge transfer by modeling the relation between samples in different resolution data, while CRCD only transfers information from the same data resolution, 3) CRRCD uses a more efficient critic function Eq. (<ref>) to estimate the distribution q(b=1 |v^t,v^t,s), which helps to maximize a lower bound on the mutual information. Therefore, our CRRCD can achieve better performance on low-resolution object recognition.
§ EXPERIMENTS
To validate the effectiveness of our cross-resolution relational contrastive distillation approach (CRRCD), we conduct experiments on two representative types of applications: low-resolution object classification and low-resolution face recognition. For the low-resolution object classification experiments, we utilize four benchmark datasets: CIFAR100 <cit.>, SVHN <cit.>, STL10 <cit.> and TinyImageNet <cit.>. The purpose is to assess the performance and generalizability of our approach. Furthermore, we investigate low-resolution face recognition by training models on CASIA-WebFace <cit.> and evaluating them on three face recognition tasks: verification on LFW <cit.>, identification on UCCS <cit.> and retrieval on TinyFace <cit.>. In these experiments, we employ VGG <cit.>, ResNet <cit.>, wide ResNet <cit.>, ShuffleNetV1 <cit.> and ShuffleNetV2 <cit.> as our backbone models. In the model learning process, we use a batch size of 96 and initialize the learning rate to 0.05. The learning rate is multiplied by 0.1 at epochs 21, 28, and 32. We maintain a fixed random seed of 5 and set the distillation temperature (T) to 4. All experiments are conducted with PyTorch on an NVIDIA 3090 GPU.
§.§ Low-resolution Object Classification
Object classification is a general visual recognition task and has very important applications under the low-resolution condition like industrial inspection and medical diagnosis. In the experiments, we first check the effectiveness of our distillation method and then evaluate the effectiveness and transferability of our approach in low-resolution object classification.
The effectiveness of distillation. Our approach distills cross-resolution contrastive relations between different resolution samples that can better mimic the model capacity of the high-resolution teacher model. To verify that, we conduct two low-resolution object classification experiments on CIFAR100 by comparing with other advanced distillation approaches under both peer-architecture and cross-architecture settings. CIFAR100 has 100 classes containing 600 images each.
Peer-architecture distillation uses homogeneous architectures for teacher-student pairs. The results are shown in Tab. <ref>. From the results, we can see that our CRRCD outperforms six sample-level distillation approaches (KD <cit.>, FitNet <cit.>, AT <cit.>, PKT <cit.>, VID <cit.> and Abound <cit.>) as well as six relation-level distillation approaches (SP <cit.>, RKD <cit.>, CC <cit.>, CRD <cit.>, CRCD <cit.> and WCoRD <cit.>), and is comparable with DKD <cit.>. For example, compared with WCoRD <cit.>, which combines contrastive learning and knowledge distillation to help the student learn richer sample-wise knowledge to a certain extent, our CRRCD achieves 72.10% accuracy on CIFAR100 when taking ResNet56 as teacher and ResNet20 as student, which is 0.54% higher than WCoRD, and gains a 0.24% improvement when the teacher and student are ResNet110 and ResNet32. The main reason is that our CRRCD focuses on higher-order relational contrastive knowledge. This implies its remarkable effectiveness in improving student learning.
To further explore the flexibility of our approach, cross-architecture distillation applies heterogeneous architectures for teacher-student pairs during learning. In this setting, the gap of knowledge transfer becomes larger, which puts forward higher requirements for knowledge distillation. The results are shown in Tab. <ref>, where our approach achieves the best accuracy and is even more competitive than in the peer-architecture setting. For five cross-architecture students, our CRRCD gains a 2.54% improvement over CRD and a 1.40% improvement over CRCD on average accuracy, respectively. Especially, when taking WRN50-2 as teacher and ShuffleNetV1 as student, CRRCD achieves a 5.45% accuracy improvement over CRD and a 1.97% accuracy improvement over CRCD, respectively. Moreover, compared to the recent evolutionary knowledge distillation approach (EKD) <cit.>, our CRRCD also gives better classification accuracy. These results show that our approach can provide a flexible way to distill black-box teacher knowledge and learn discriminative student representations for downstream image recognition tasks.
Very low-resolution object classification. First, we check the effectiveness of CRRCD on object classification under a very low resolution of 8 × 8 by evaluating on the SVHN dataset. This dataset contains digit images captured from real-world natural scenes, having a resolution of 32×32. We downsample the images by a factor of 4 to create 8×8 data and use them for evaluating very low-resolution digit classification. The teacher model is ResNet56 pretrained with 32×32 images and our student is VGG8, which has very few parameters. We compare our approach to five state-of-the-art very low-resolution image recognition approaches and report the top-1 classification accuracy in Tab. <ref>. Our CRRCD model obtains a classification accuracy of 89.33%, an improvement of at least 1.58%. Compared with other approaches like DeriveNet <cit.>, which focuses on learning effective class boundaries by utilizing class-specific domain knowledge, our CRRCD makes full use of the structural knowledge between different samples and the dark knowledge in the teacher model to obtain stronger feature extraction capability, which greatly improves the recognition performance of the model on very low-resolution images.
Representation transferability. After the promising results on low-resolution data and flexible network architectures, we further verify the cross-dataset transferability of our approach by training on CIFAR100 but testing on STL10 and TinyImageNet. Following CRD <cit.>, we investigate the effectiveness of the student representations. A good representation extractor should generate linearly separable features. Hence, we use the fixed backbone of the student trained on CIFAR100 to extract representations for STL10 and TinyImageNet, and then train a linear classifier to test the classification accuracy. We select WRN-40-2 as teacher and ShuffleNetV1 as student, and compare with three sample-level distillation approaches (KD <cit.>, FitNet <cit.> and AT <cit.>), the relation-level distillation approach CRD <cit.> and self-supervised knowledge distillation (SSKD) <cit.>. In the experiment, the input resolution of teacher and student is 32 × 32. As shown in Tab. <ref>, our CRRCD delivers the best accuracy on both STL10 and TinyImageNet. From the results, we find that our approach retains good representation transferability across different datasets. However, all approaches achieve a very low accuracy (e.g., lower than 36%) in recognizing TinyImageNet. The main reason may be that the knowledge from 32 × 32 CIFAR100 is insufficient for identifying higher-resolution objects in TinyImageNet. This implies that direct learning from low-resolution images may be ineffective and that cross-resolution knowledge transfer can be a more effective way.
§.§ Low-resolution Face Recognition
Low-resolution face recognition is a specific and challenging object recognition task with very practical applications, such as recognizing surveillance faces in the wild. In practical scenarios, facial images often have low resolution, uneven light intensity, and diverse facial poses and expressions. These factors have a huge impact on recognition accuracy. In our experiments, we take CASIA-WebFace as the training set, which contains 10575 identities and a total of 494414 images collected from the web. The teacher is trained on CASIA-WebFace with ResNet50 under the high resolution of 112×112, and the students are trained on low-resolution CASIA-WebFace with ResNet18. Then, the trained students are used to evaluate face verification on LFW, face identification on UCCS and face retrieval on TinyFace, respectively. To verify the validity of the low-resolution students, we specifically check the accuracy when the input resolution is 16×16, produced by bilinear downsampling. All approaches use the same experimental settings to ensure fair comparisons.
Face verification on LFW. We compare with state-of-the-art face recognition models on LFW, which contains 6000 pairs of face images. We downsample the images to synthesize low-resolution faces. A 512d feature embedding is extracted for each image for similarity comparison. With a pre-set threshold, each face pair is determined to have the same identity if the similarity of the two faces is greater than the threshold, and a different identity otherwise. The verification accuracy is reported as the percentage of pairs that are correctly determined. The results are listed in Tab. <ref>, from which several conclusions can be drawn.
Firstly, state-of-the-art face recognition models usually deliver very high verification accuracy in recognizing faces under normal resolution. For example, ArcFace <cit.> uses ResNet50 and gives a 99.82% accuracy under the input resolution of 112×112. Our CRRCD approach distills the ResNet50 model into a lightweight ResNet18 student, which still achieves a good accuracy of 95.25% under a much lower resolution of 16×16. This is very helpful for practical deployment in resource-limited conditions. Secondly, when these face recognition models are applied to identify low-resolution images, e.g., recognizing 16×16 images after bilinear upsampling, the accuracy drops considerably. For example, ArcFace gives an accuracy of 92.30% under the low-resolution condition, a drop of 7.52%. These results reveal that it is necessary to compensate for the missing knowledge to facilitate the recognition of low-resolution objects, drawing on high-resolution images or models. Finally, we compare our approach with five recent low-resolution face recognition approaches. In comparison to distillation-based methods, our CRRCD achieves higher accuracy. This improvement can be attributed to its ability to extract high-order relational contrastive knowledge, which proves to be more effective than sample-level knowledge (SKD <cit.> and EKD <cit.>) or low-order relation knowledge (HORKD <cit.>). In low-resolution face recognition tasks, our method exhibits significant advantages compared to the non-parametric low-resolution face recognition model (NPM <cit.>). In <cit.>, deep Rival Penalized Competitive Learning (RPCL) is embedded into state-of-the-art face recognition models to learn margin-based discriminative low-resolution face features. Our CRRCD outperforms RPCL-based models since it implicitly encodes margin-based discriminative representation learning by using anchor-based high-order relation preserving distillation. In cross-resolution knowledge transfer, high-order relations help the model learn better representations from the low-resolution domain, and contrastive relations facilitate representation learning in visual recognition tasks.
Face identification on UCCS. UCCS is collected in real surveillance scenarios and contains 16149 images of 1732 subjects captured in the wild, which makes it a very challenging benchmark with various levels of difficulty. To verify the robustness of our low-resolution student models, we specifically check the accuracy when the input resolution is 16×16. We follow the setting of <cit.>: we randomly select a 180-subject subset, separate the images into 3918 training images and 907 testing images, and report the results with the standard accuracy. In the experiment, we freeze the representation extraction part of each model, modify the final softmax layer into a 180-way classifier, and finetune the layer parameters on the training set. As shown in Tab. <ref>, our student model achieves an impressive identification accuracy of 97.27%, surpassing the state-of-the-art DirectCapsNet <cit.> by 1.46%. Our approach enhances low-resolution face recognition performance by enabling the student model to acquire discriminative representations. Although essential information for recognition is missing, our method leverages cross-resolution relational contrastive knowledge from the teacher model and high-resolution data. This allows the student model to learn higher-order feature representations, leading to improved performance.
Face retrieval on TinyFace. TinyFace contains large-scale native low-resolution surveillance face images. In the experiment, we finetune the basic models on its training set and then evaluate 1:N identification performance on its testing set. Tab. <ref> reports Rank-1, Rank-10 and Rank-20 retrieval results. Different from typical models <cit.> that design margin-based losses to learn discriminative representations, our CRRCD implicitly learns distinct inter-class boundaries under cross-resolution relational constraints with the assistance of a high-resolution teacher, and consistently improves retrieval accuracy under various settings. In addition, via high-order knowledge transfer, CRRCD outperforms PeiLi's method <cit.>, which is based on reconstruction, and DCR <cit.>, which employs two branches to transfer cross-resolution knowledge by feature approximation. These results imply the effectiveness of our approach in learning discriminative and transferable representations.
§.§ Ablation and Further Analysis
Effect of negative number. An important part of knowledge distillation based on contrastive learning is constructing positive and negative sample pairs, and the number of negatives has a crucial impact on the final performance. We validate five different negative numbers (64, 128, 256, 512 and 1024) and show the results in the left of Fig. <ref>. Here, increasing the negative number leads to performance improvement, which means that higher-order relational knowledge is built and transferred. Meanwhile, a larger negative number requires more computation. This suggests that the negative number should be carefully selected to balance accuracy and computation cost. Thus, we set the negative number to 512, since increasing it to 1024 only gives a small accuracy gain of 0.05%. Our approach can significantly reduce the required number of negatives, which benefits from modeling knowledge-rich structural relationships rather than individual samples, thereby reducing the dependence on the number of negative samples.
Effect of sampling policy. We consider two negative sampling policies given an anchor x_i: x_j, j ≠ i for the unsupervised case without labels, or x_j, y_j ≠ y_i for the supervised case, where y_i represents the label associated with sample x_i. Moreover, to ensure that negative samples are as up-to-date as possible, we store features and gradients in a queue, which removes the oldest sample when the latest one is added. Through experiments, the combination of the queue and the supervised sampling policy brings at least a 0.25% accuracy improvement on LFW.
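For concreteness, a minimal sketch of such a first-in-first-out negative memory with a supervised sampling policy is given below in PyTorch-style Python; the class name, feature dimension, queue size and default negative count are illustrative assumptions rather than our exact implementation.

import torch

class NegativeQueue:
    """FIFO memory of past embeddings and labels for supervised negative sampling."""
    def __init__(self, feat_dim=128, max_size=4096):
        self.feats = torch.zeros(0, feat_dim)           # stored projected features
        self.labels = torch.zeros(0, dtype=torch.long)  # labels of stored samples
        self.max_size = max_size

    @torch.no_grad()
    def enqueue(self, feats, labels):
        # Append the newest batch and drop the oldest entries beyond max_size.
        self.feats = torch.cat([self.feats, feats.detach()], dim=0)[-self.max_size:]
        self.labels = torch.cat([self.labels, labels], dim=0)[-self.max_size:]

    def sample_negatives(self, anchor_label, k=512):
        # Supervised policy: negatives are stored samples whose label differs from the anchor.
        candidates = self.feats[self.labels != anchor_label]
        idx = torch.randperm(candidates.size(0))[:k]
        return candidates[idx]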
Effect of distillation temperature. The distillation temperature τ in Eq.(<ref>) is used to adjust the concentration level. We report the results when τ varies from 0.02 to 0.30 in the right of Fig. <ref>. A temperature between 0.08 and 0.1 works well, and we set τ=0.1 for all our experiments. In general, for different downstream tasks, the value of τ should be set carefully in a task-specific manner.
Effect of projected feature dimension. Our feature relation module builds contrastive relation vectors by projecting the 512d feature embeddings into features of a specific dimension. The projected feature dimension affects model performance and computation cost in training. Increasing the dimension boosts performance but also raises the computation cost. To balance them, we test various feature dimensions and set it to 128. In addition, our approach employs an efficient critic function h(v^t,v^t,s) to estimate the distribution q(b=1 |v^t,v^t,s), which maximizes a lower bound on the mutual information. It is worth noting that the inference complexity is fixed and not affected by the order of the structural relationship.
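As an illustration, the projection and a temperature-scaled critic of this general form could be sketched as follows in Python/PyTorch; the single linear projection, the sigmoid form of the critic, and all dimensions are assumptions for exposition, not necessarily the exact functions used above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Maps 512-d backbone embeddings to a lower-dimensional space for relation contrasting."""
    def __init__(self, in_dim=512, out_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)   # unit-norm projected features

def critic(v_t, v_ts, tau=0.1):
    """Temperature-scaled critic h(v^t, v^{t,s}) mapped to (0, 1)."""
    sim = (v_t * v_ts).sum(dim=-1) / tau           # scaled similarity of normalized features
    return torch.sigmoid(sim)                      # estimate of q(b=1 | v^t, v^{t,s})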
Representation visualization. To further demonstrate the advantages of our approach visually, we first use t-SNE <cit.> for visualization. It converts similarities between data points to joint probabilities and tries to minimize the KL divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. We randomly select 400 samples per class from the SVHN dataset; different numbers indicate different classes in Fig. <ref>. It is obvious that our approach achieves more concentrated clusters than the baseline (the same structure as the student model, but without any distillation strategy), which is trained with the softmax loss. Moreover, the distances in the classifier of the baseline vary more severely than those in the classifier of CRRCD. We speculate that transferring high-order relational contrastive knowledge helps the student learn discriminative representations.
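The visualization step itself can be reproduced with a short script along the following lines (Python with scikit-learn and matplotlib); the placeholder arrays and plotting options are assumptions, and in practice the embeddings would come from the trained student.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder embeddings; in practice use the student features of
# 400 randomly chosen samples per SVHN class and their labels.
feats = np.random.randn(4000, 128)
labels = np.repeat(np.arange(10), 400)

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
plt.savefig("tsne_student.png", dpi=200)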
Next, we illustrate the estimated similarity distributions of ArcFace and our CRRCD in Fig. <ref>. To quantify their difference, we introduce two statistics for evaluation: the expectation margin and the histogram intersection between the two distributions of positive and negative pairs. Typically, a smaller histogram intersection and a larger expectation margin indicate better verification performance, since they mean that more discriminative deep embeddings are learned. As shown in Fig. <ref>, the deeply learned face features are more discriminative and less overlapped with our CRRCD than with ArcFace, indicating that our approach is effective in enhancing discriminability and obtains the best performance.
§ CONCLUSION
In this paper, we propose cross-resolution relational contrastive distillation, a novel approach to improve low-resolution object recognition. Our approach successfully transfers high-order relation knowledge from a pretrained high-resolution teacher model to a low-resolution student model. Through extensive experiments on low-resolution object classification and low-resolution face recognition, we validate the effectiveness and adaptability of our approach. Our future work will concentrate on integrating domain generalization and exploring its applicability to a broader spectrum of visual understanding tasks.
Acknowledgements. This work was partially supported by grants from the National Key Research and Development Plan (2020AAA0140001) and Beijing Natural Science Foundation (19L2040).
Kangkai Zhang received his B.S. degree in Electronical Information Science and Technology from the School of Electronic Information and Optical Engineering in Nankai University, Tianjin, China. He obtained a Master's degree in Communication and Information Systems at the Institute of Information Engineering at Chinese Academy of Sciences, Beijing. Currently, he works as a Computer Vision Algorithm Engineer at Baidu Inc. His major research interests are deep learning and computer vision.
Shiming Ge (M'13-SM'15) is a professor with the Institute of Information Engineering, Chinese Academy of Sciences. Prior to that, he was a senior researcher and project manager in Shanda Innovations, a researcher in Samsung Electronics and Nokia Research Center. He received the B.S. and Ph.D degrees both in Electronic Engineering from the University of Science and Technology of China (USTC) in 2003 and 2008, respectively. His research mainly focuses on computer vision, data analysis, machine learning and AI security, especially efficient and trustworthy solutions towards scalable applications. He is a senior member of IEEE, CSIG and CCF.
Ruixin Shi received her B.S. degree in Information Security from the School of Cyber Security in Beijing Institute of Technology, China. She is now a Ph.D candidate at the Institute of Information Engineering at Chinese Academy of Sciences and the School of Cyber Security at the University of Chinese Academy of Sciences, Beijing. Her major research interests are computer vision and generative modeling.
Dan Zeng (SM'21) received her Ph.D. degree in circuits and systems, and her B.S. degree in electronic science and technology, both from the University of Science and Technology of China, Hefei. She is a full professor and the Dean of the Department of Communication Engineering at Shanghai University, directing the Computer Vision and Pattern Recognition Lab. Her main research interests include computer vision, multimedia analysis, and machine learning. She is serving as an Associate Editor of the IEEE Transactions on Multimedia and the IEEE Transactions on Circuits and Systems for Video Technology, a TC Member of IEEE MSA and an Associate TC member of IEEE MMSP.
|
http://arxiv.org/abs/2409.03083v1 | 20240904211217 | WIMP dark matter in bulk viscous non-standard cosmologies | [
"Esteban González",
"Carlos Maldonado",
"N. Stefanía Mite",
"Rodrigo Salinas"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
[email protected]
Departamento de Física, Universidad Católica del Norte, Avenida Angamos 0610, Casilla 1280, Antofagasta, Chile
[email protected]
Facultad de Medicina y Ciencia, Universidad San Sebastián, Puerto Montt, Chile
[email protected]
Departamento de Física, Universidad Católica del Norte, Avenida Angamos 0610, Casilla 1280, Antofagasta, Chile
[email protected]
Departamento de Física, Universidad Católica del Norte, Avenida Angamos 0610, Casilla 1280, Antofagasta, Chile
§ ABSTRACT
In this paper, we explored an extension of the classical non-standard cosmological scenario in which the new field, ϕ, which interacts with the radiation component in the early universe, experiences dissipative processes in the form of a bulk viscosity. Assuming an interaction term given by Γ_ϕρ_ϕ, where Γ_ϕ accounts for the decay rate of the field and ρ_ϕ corresponds to its energy density, and a bulk viscosity according to the expression ξ=ξ_0ρ_ϕ^1/2 in the framework of Eckart's theory, we apply this novel non-standard cosmology to study the parameters space for WIMPs Dark Matter candidate production. This parameter space shows deviations from the classical non-standard cosmological scenario, obtaining new regions to search for this candidate. In particular, for certain combinations of the free parameters, we found large regions in which the model can establish the DM and reproduce the current observable relic density.
WIMP dark matter in bulk viscous non-standard cosmologies
Rodrigo Salinas
School of Physics, Nankai University, Tianjin, 300071, China
===================================================================
§ INTRODUCTION
The ΛCDM model is the most successful model in describing the evolution of the universe and fits the observational cosmological data coming from type Ia supernovae <cit.>, observations of the Hubble parameter <cit.>, baryon acoustic oscillations <cit.>, the cosmic microwave background <cit.>, among others. Despite that, the model has some shortcomings, such as the nature of Dark Matter (DM) and Dark Energy (DE), where the first one is an unknown, non-baryonic component of the universe, which is approximately five times more abundant than ordinary matter <cit.>. Some DM candidates naturally arise from theories like Supersymmetry <cit.> or string theory <cit.>. In general, DM candidates are classified into three groups of interest: Weakly Interacting Slim Particles (WISPs) <cit.>, Weakly Interacting Massive Particles (WIMPs) <cit.> and Feebly Interacting Massive Particles (FIMPs) <cit.>. The first group consists of light particles produced through non-thermal mechanisms to avoid becoming relativistic. Examples of these particles include Axions, Axion-like particles, and Hidden Photons (or Dark Photons). WIMPs, in contrast, are produced in thermal equilibrium with the Standard Model (SM) bath. As the universe expands, these particles freeze their number via a mechanism known as Freeze-Out, since their interactions become inefficient in comparison with the expansion rate of the universe. These particles were very popular due to the so-called WIMP Miracle, which is able to reproduce the current DM relic density by considering an interaction cross-section around the Electro-Weak scale and a mass for the particle around 100 GeV. In fact, to be consistent with the observations in ΛCDM, the total thermal averaged annihilation cross-section for this group must be ⟨σ v⟩_0=few× 10^-26 cm^3/s=few× 10^-9 GeV^-2 <cit.>. Nevertheless, almost the full parameter space for WIMP particles is already covered without any positive signal. In this context, FIMPs arise, which are produced through non-thermal mechanisms and never reach equilibrium with the thermal bath. Hence, these particles freeze their number via a mechanism known as Freeze-In. To avoid these candidates entering into equilibrium, their interactions must be even weaker than those of WIMPs, making FIMPs elusive particles, since their feeble interactions are difficult to detect with current instruments. Therefore, it is imperative to find new DM candidates, mechanisms of production or different cosmological scenarios.
In ΛCDM it is assumed that the DM established (froze) its number during a radiation dominated era, which sets the parameters for its search. However, by introducing an additional field (ϕ) in the early universe, it is possible to modify the expansion rate, generating non-standard cosmological histories. As a result, the DM relic density may be established in eras different from radiation domination, leaving imprints on its relic abundance. When the ϕ state decays into the SM, it produces an entropy injection into the SM bath, translating into a new parameter space in which to search for these DM particles <cit.>. These non-standard cosmological histories can also be generated by considering exotic models such as a bi-metric model, which exhibits the same behavior with an entropy injection into the SM bath <cit.>, making imprints on the DM production <cit.>. These scenarios are called Non-Standard Cosmologies (NSCs) and bring new regions in the DM search or re-open windows of parameter space that are discarded in the ΛCDM model but could be allowed in these scenarios.
If DM is experimentally detected, its particle physics properties, such as its mass and interaction with the SM, will be reconstructed, including its couplings. However, the production of the DM candidate and the scenario that establishes its number are significant in determining the right current relic density. In this context, if the interactions of the detected DM are consistent with ⟨σ v⟩_0, the ΛCDM scenario is favored. On the other hand, if this is not the case, it is imperative to propose alternative cosmological scenarios that might better explain the DM relic density. One possibility to generate new NSC scenarios is the inclusion of non-perfect fluids in the early universe.
In cosmology, all the matter components of the universe are generally described as perfect fluids, providing a good approximation of the cosmic medium. Nevertheless, perfect fluids correspond to thermodynamic equilibrium, where their entropy does not increase and their dynamics are reversible. When non-perfect fluids are considered, effects like viscosity appear, which provide a more realistic description of these cosmic fluids <cit.> and are relevant in many cosmological processes like the reheating of the universe, the decoupling of neutrinos from the cosmic plasma, and nucleosynthesis, among others. On the other hand, viscosity can also be present in several astrophysical mechanisms such as the collapse of radiating stars into neutron stars or black holes and the accretion of matter around neutron stars or black holes <cit.>. Following this line, viscous fluids must be described by some relativistic thermodynamical approach to non-perfect fluids like Eckart's <cit.> or Israel-Stewart's theories <cit.>. Despite the fact that Eckart's theory is non-causal <cit.>, it is widely investigated in the literature due to its mathematical simplicity in comparison with the full Israel-Stewart theory, and it has become the starting point to shed some light on the behavior of dissipative effects, since Israel-Stewart's theory reduces to Eckart's theory when the relaxation time for transient viscous effects is negligible <cit.>.
It is known from hydrodynamics that there are two types of viscosity, namely shear and bulk viscosity. While the shear viscosity can be important in some scenarios <cit.>, we will focus our study on the bulk viscosity, which can arise due to the existence of mixtures in the universe. In this sense, in a single fluid description, the universe as a whole can be characterized by the particle number density n as n=n_1+...+n_i. So, the simple assumption of different cooling rates in the expanding mixture can lead to a non-vanishing viscous pressure <cit.>. Another explanation for the origin of bulk viscosity is DM decaying into relativistic particles, from which dissipative effects naturally emerge in the cosmic fluid <cit.>. Even more, bulk viscosity can appear in the Cold DM (CDM) fluid due to the energy transferred from the CDM fluid to the radiation fluid <cit.>, and it may also manifest as a component within a hidden sector, reproducing several observational properties of disk galaxies <cit.>. From the point of view of cosmological perturbations, it has been shown that viscous fluid dynamics provides a simple and accurate framework for extending the description of cosmological perturbations into the nonlinear regime <cit.>. Finally, the new era of gravitational wave detectors has also opened the possibility of detecting dissipative effects in DM and DE through the dispersion and dissipation experienced by these waves propagating in a non-perfect fluid <cit.>. As a matter of fact, bulk viscosity could contribute significantly to the emission of gravitational waves in neutron star mergers <cit.>.
The effects of bulk viscosity on the matter components of the universe have been extensively studied in the literature, such as the existence of a viscous DE component <cit.>. However, the most common case is to consider a DM that experiences dissipative processes during its cosmic evolution <cit.>, which can describe the recent accelerated expansion of the universe without the inclusion of a DE component (unified DM models) <cit.>. A dissipative stiff matter fluid was previously studied in <cit.> in the full Israel-Stewart theory. Also, bulk viscous DM has been studied in the context of inflation <cit.>, interacting fluids <cit.>, and modified gravity <cit.>. Even more, the viscous effect has been studied in the context of singularities, like Big Rip and Little Rip, in both classical and quantum regimes <cit.>. Other scenarios of interest can be found in Refs. <cit.>, where the role of bulk viscosity is studied in the radial oscillation of relativistic stars and in its cosmological implications for universes filled with Quark-Gluon plasma, respectively. Last but not least, bulk viscosity was also considered as an alternative to alleviate some recent tensions in the ΛCDM model. For example, a decaying scenario for DM increases the expansion rate relative to ΛCDM, and such behavior provides an alleviation of the H_0 and σ_8 tensions <cit.>. In Refs. <cit.>, bulk viscous effects are explored as a viable alternative to relieve the H_0 tension. For an extensive review of viscous cosmology in the early and late time universe see Ref. <cit.>.
This paper aims to study how dissipative effects in the form of bulk viscosity leave imprints on WIMP DM production and its relic density in a non-standard cosmology. In particular, we go further than the classical NSC scenarios, in which the early universe is dominated by two interacting fluids, namely the new field ϕ and radiation, by considering that ϕ experiences dissipative processes during its cosmic evolution in the form of a bulk viscosity. Working in the framework of Eckart's theory for non-perfect fluids, we compare the NSC scenario with its bulk viscous counterpart for a specific choice of the bulk viscosity. Also, we study the parameter space that can reproduce the current DM relic density, varying both the model and DM parameters, for this novel NSC.
This paper is organized as follows: In Section <ref>, we briefly review the original NSC scenario. Its applicability to WIMP DM is studied in Section <ref>. In Section <ref>, we describe the bulk viscous extension to the original NSC model, and the two models are compared in Section <ref>. The parameter space for DM production that leads to the current relic density is analyzed in Section <ref>. Finally, in Section <ref>, we present some conclusions and future remarks. Throughout this paper, we consider c=1 units.
§ NON-STANDARD COSMOLOGIES
In the ΛCDM model, the total energy density budget of the universe at early times is composed of radiation (ρ_γ) and DM (ρ_χ), with a negligible cosmological constant in comparison with the other fluids. Following Refs. <cit.>, a straightforward manner to produce NSC scenarios is to add an extra field ϕ in the early universe, with an associated energy density ρ_ϕ, which will decay into SM plasma. The Friedmann equation and the continuity equation in this scenario are
3H^2=ρ_t/M_p^2,
ρ̇_t+3H(ρ_t+p_t)=0,
where “dot” accounts for the derivative with respect to the cosmic time, H≡ȧ/a is the Hubble parameter, with a the scale factor, and M_p=2.48×10^18 GeV is the reduced mass Planck. The total energy density and pressure of the universe are ρ_t=ρ_γ+ρ_ϕ+ρ_χ and p_t=p_γ+p_ϕ+p_χ, respectively. Also, in ΛCDM and NSCs scenarios, the DM component is included through the following Boltzmann equation, which accounts for its number density n_χ
ṅ_̇χ̇+3H n_χ=-⟨σ v⟩(n_χ^2-n_eq^2) ,
where ⟨σ v⟩ is the total thermal averaged annihilation cross-section and n_eq is the equilibrium number density defined as n_eq=m_χ^2 T K_2(m_χ/T)/π^2, with K_2 the Bessel function of second kind, m_χ the DM mass and T the temperature of the universe. The DM energy density is related to its mass by ρ_χ=m_χ n_χ.
To consider the relativistic degrees of freedom it is necessary to incorporate the entropy density (s) of the universe defined through the radiation energy density as
s=ρ_γ+p_γ/T =2π^2/45g_⋆ s(T)T^3 ,
where g_⋆ s(T) are the degrees of freedom that contribute to the entropy density. Therefore, assuming an interaction between the new field ϕ and the radiation component, we can obtain from Eq. (<ref>) and (<ref>) the following equations
ṡ+3Hs = Γ_ϕρ_ϕ/T,
ρ̇_̇ϕ̇+3(ω+1)Hρ_ϕ = -Γ_ϕρ_ϕ,
where we consider for the field a barotropic equation of state (EoS) of the form p_ϕ=ωρ_ϕ, with ω the barotropic index. In these equations, Γ_ϕρ_ϕ is the interaction term, where Γ_ϕ accounts for the decay rate of the new field, and is the most simple (and most studied) case as a NSC scenario. Note that the energy density for the DM candidate can be neglected in Eq. (<ref>) and decoupled from the problem due to the subdominant contribution compared with ϕ and radiation. Finally, we can rewrite Eq. (<ref>) in terms of the temperature using Eq. (<ref>), being obtained
Ṫ=(-HT+Γ_ϕρ_ϕ/3s)(dg_⋆ s(T)/dTT/3g_⋆ s(T)+1)^-1.
The latter can be related to the energy density of radiation as ρ_γ=π^2g_⋆(T) T^4/30, with g_⋆ (T) the degrees of freedom that contribute to the plasma energy density.
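To make the system above concrete, the following Python sketch integrates the coupled evolution of (ρ_ϕ, s, n_χ) in cosmic time, assuming constant degrees of freedom (so the dg_⋆ s/dT term drops) and ω as a free parameter; the constant values and the example call are illustrative assumptions only.

import numpy as np
from scipy.special import kn
from scipy.integrate import solve_ivp

M_P = 2.48e18              # reduced Planck mass [GeV]
G_STAR = G_STAR_S = 10.75  # constant degrees of freedom (illustrative simplification)

def n_eq(T, m_chi):
    return m_chi**2 * T * kn(2, m_chi / T) / np.pi**2

def rhs(t, y, gamma_phi, omega, m_chi, sigma_v):
    """Coupled evolution of (rho_phi, s, n_chi) in cosmic time."""
    rho_phi, s, n_chi = y
    T = (45.0 * s / (2.0 * np.pi**2 * G_STAR_S))**(1.0 / 3.0)   # invert s(T)
    rho_gam = np.pi**2 * G_STAR * T**4 / 30.0
    H = np.sqrt((rho_phi + rho_gam + m_chi * n_chi) / (3.0 * M_P**2))
    drho_phi = -3.0 * (1.0 + omega) * H * rho_phi - gamma_phi * rho_phi
    ds = -3.0 * H * s + gamma_phi * rho_phi / T
    dn_chi = -3.0 * H * n_chi - sigma_v * (n_chi**2 - n_eq(T, m_chi)**2)
    return [drho_phi, ds, dn_chi]

# Illustrative call (t0, t1 and the initial state y0 are set by the chosen initial temperature):
# sol = solve_ivp(rhs, (t0, t1), y0, args=(2e-23, 0.0, 100.0, 1e-11), method="LSODA", rtol=1e-8)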
It is important to highlight that the NSC scenarios must not modify the Big Bang Nucleosynthesis (BBN) epoch due to the precise observations posterior to this era, which assume the ΛCDM model <cit.>. So, this new field must decay before the beginning of BBN, i.e., when T_end≳ T_BBN∼ 4 MeV, with T_end the temperature at which ϕ decays. One way to estimate when a particle goes out of the thermal bath is to analyze whether the interactions of the particles are efficient enough to maintain them in equilibrium or whether the expansion rate of the universe suppresses their interactions. Therefore, in the limit H=Γ_ϕ, the ϕ particle has fully decayed and the standard ΛCDM cosmology is recovered, relating the temperature of decay with the decay rate as
T_end^4= 90/π^2 g_⋆(T_end)M_p^2 Γ_ϕ^2.
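As a quick numerical check of this relation, taking for instance T_end = 7×10^-3 GeV and assuming g_⋆(T_end) ≈ 10.75:

import numpy as np

M_P = 2.48e18                  # reduced Planck mass [GeV]
g_star, T_end = 10.75, 7e-3    # assumed g_*(T_end) and T_end [GeV]
gamma_phi = np.sqrt(np.pi**2 * g_star / 90.0) * T_end**2 / M_P
print(gamma_phi)               # ~2e-23 GeV for these numbers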
The inclusion of this new field has some remarkable points, namely T_eq, T_c, and T_end. The first one corresponds to the temperature when ρ_ϕ=ρ_γ, i.e., the point where ϕ starts to dominate over radiation. T_c corresponds to the temperature at which the decays of this new field begin to become significant in the entropy injection into ρ_γ. Finally, T_end is the temperature when ϕ decays, as mentioned before. With these identifications, we can define three regions of interest: RI) T_eq<T, RII) T_eq>T>T_c, and RIII) T_c>T>T_end. A fourth region (RIV) that is not of interest to us can be considered where T<T_end, which corresponds to the standard ΛCDM scenario in which the new field has fully decayed. Therefore, the NSC scenario will be characterized by the parameters κ≡ρ_ϕ,ini/ρ_γ,ini, where the sub-index “ini” corresponds to the respective energy density evaluated at the initial temperature T_ini=m_χ, the barotropic index ω, and the temperature T_end. As an example, in Fig. <ref> we depict a NSC scenario for κ=10^-2, T_end=7× 10^-3 GeV, ω=0, and m_χ=100 GeV.
Considering constant degrees of freedom, it can be shown that the energy density of the field ϕ goes as ρ_ϕ∝ a^-3(ω+1). On the other hand, for regions I and II, the temperature goes as T∝ a^-1 when T>T_c; while in region III, the temperature goes as T∝ a^-3(ω+1)/8. The temperature takes the usual form T∝ a^-1 after the full decay of ϕ, recovering the standard ΛCDM cosmology. This behavior of the temperature can be seen in Fig. <ref>, for a NSC scenario with κ=10^-2, T_end=7× 10^-3 GeV, ω=0, and m_χ=100 GeV. It is important to note that the fluctuations in the temperature and radiation energy density are produced by the full numerical integration, including the degrees of freedom for entropy and radiation.
§.§ WIMPs in non-standard cosmologies
The WIMPs are thermally produced in the early universe, being in equilibrium with the thermal bath. They are very popular DM candidates due to the so-called WIMP miracle, where, to reproduce the observations in ΛCDM, the total thermal averaged annihilation cross-section must be ⟨σ v⟩_0=few× 10^-9 GeV^-2 to obtain the current DM relic density. Hence, larger values of ⟨σ v⟩ keep the particles in the thermal bath longer and, therefore, when their number is frozen, an under-abundance of DM relic density is produced, which can be alleviated with multi-component DM. On the other hand, if ⟨σ v⟩<⟨σ v⟩_0, the DM particles go out of equilibrium quickly and the over-abundance of DM relic density forbids those values for their interaction.
To obtain the DM relic density it is useful to define the Yield of DM as Y≡ n_χ/s and the dimensionless quantity x≡ m_χ/T. An analytical solution for Eq. (<ref>) can be obtained in the limit Y≫ Y_eq, giving
Y∝1/m_χ J(x_fo),
with J=∫_x_fo^∞ x^-2⟨σ v⟩(x) dx an integral depending on x, where x_fo corresponds to the time at which the DM particle goes out of equilibrium and freezes its number. Note that if the total thermal averaged annihilation cross-section is constant, then the integral reduces to ⟨σ v⟩/x_fo. The latter expression shows that if ⟨σ v⟩ grows, the DM Yield decreases (and vice-versa). An expression for the quantity x_fo is obtained when the DM can no longer compete with the universe expansion, i.e., when H=Γ=n_eq⟨σ v⟩, from which a transcendental equation for x_fo emerges.
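This transcendental condition is easily solved numerically; a minimal sketch in a radiation-dominated background, with a constant ⟨σ v⟩ and assumed values of g_⋆ and of the bracketing interval, is:

import numpy as np
from scipy.special import kn
from scipy.optimize import brentq

M_P, G_STAR = 2.48e18, 100.0   # reduced Planck mass [GeV], g_* at T ~ m_chi (assumed)

def freeze_out_x(m_chi, sigma_v):
    """Solve H(x) = n_eq(x) <sigma v> for x_fo, with x = m_chi / T."""
    def balance(x):
        T = m_chi / x
        H = np.sqrt(np.pi**2 * G_STAR * T**4 / 30.0 / (3.0 * M_P**2))
        n_eq = m_chi**2 * T * kn(2, x) / np.pi**2
        return np.log(n_eq * sigma_v / H)   # log form keeps the root bracket well-behaved
    return brentq(balance, 1.0, 100.0)

print(freeze_out_x(100.0, 1e-9))           # typically x_fo ~ O(20) for WIMP-like values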
In the NSC scenario, the new field produces a boost in the radiation energy density (and in the temperature) that can be parameterized by an entropy injection at the time when ϕ starts to decay. This is defined through the entropy density before and after the field ϕ decays, i.e., D≡ s(T_end)/s(m_χ)=(T_end/m_χ)^3. This entropy injection dilutes the DM relic density, considering that the DM Yield depends on the entropy density and, therefore, an increment in the entropy density of radiation generates lower values of the DM Yield. This means that parameters m_χ and ⟨σ v⟩ that overproduce DM can be allowed in this NSC scenario. Hence, the DM relic can be established in four cases, which are related to the four regions mentioned before:
* RI: This region exists only for κ<1 (ρ_γ,ini>ρ_ϕ,ini) with a Hubble parameter of the form H∼√(ρ_γ/3M_p^2)∝ T^2, i.e., approximately the same Hubble parameter as the standard ΛCDM cosmology. This case is shown in Fig. <ref>, for a NSC with κ=10^-2, T_end=7× 10^-3 GeV, ⟨σ v⟩=10^-11 GeV^-2, m_χ=100 GeV, and ω=0. In this case, the DM freezes its number at the same time in the NSC and in the standard ΛCDM case, overproducing the observed relic density. Nevertheless, when ϕ starts to decay, the DM is diluted by the entropy injection, reproducing the current DM relic density. This allows the parameters (m_χ, ⟨σ v⟩)=(100 GeV, 10^-11 GeV^-2), which were discarded in the ΛCDM scenario.
* RII: In this case, ρ_ϕ starts to dominate over ρ_γ, but the decay of the field is not efficient enough to change the radiation energy density. However, the expansion rate of the universe is dominated by ϕ and the Hubble Parameter can be approximated as H∼√(ρ_ϕ/3M_p^2)∝ T^3(ω+1)/2. In Fig. <ref> it is shown the Yield of DM for κ=1, T_end=0.1 GeV, ω=0, m_χ=100 GeV, and ⟨σ v⟩=10^-11 GeV^-2. Note that the Freeze out for the ΛCDM cosmology happens after the NSC case due to the different rates of expansion of the universe. Nevertheless, after the entropy injection (ϕ decays), the DM Yield reproduces the current relic observable in the NSC scenario.
* RIII: In this region, ϕ is still the dominant fluid of the universe, injecting entropy into the SM bath due to its decay. The expansion rate of the universe can be approximated as H∼√(ρ_ϕ/3M_p^2)∝ T^4 during the decaying period, and the entropy injection dilutes the DM relic density, as shown in Fig. <ref> for κ=10^3, T_end=2 GeV, ω=0, m_χ=100 GeV, and ⟨σ v⟩=10^-11 GeV^-2. Again, the DM freezes its number earlier in the NSC scenario (within the decay region) than in the ΛCDM scenario. Therefore, the entropy injection into the SM ensures that the DM parameters can reproduce the current relic density in the NSC scenario.
* RIV: Finally, in this case the field ϕ has fully decayed and the ΛCDM model is recovered, i.e., the DM relic is produced outside the NSC and, therefore, we do not obtain modifications in the DM production.
§ BULK VISCOUS NON-STANDARD COSMOLOGIES
The equations that govern the evolution of the universe are obtained through the Einstein field equations
R_μν-1/2Rg_μν+Λ g_μν=1/M_p^2T_μν,
where R_μν is the Ricci tensor, R the Ricci scalar, g_μν is the metric tensor of the four-dimensional spacetime, and T_μν is the total energy-momentum tensor. In the NSC scenario, as in the classical description of the universe, the total energy budget is described by a perfect fluid, whose respective energy-momentum tensor can be expressed as
T_μν=p_t g_μν+(ρ_t+p_t)u_μu_ν,
where u_μ correspond to the four-velocity of the fluid element. So, for the spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, given by
dl^2=-dt^2+a^2(t)(dr^2+r^2dϑ^2+r^2sin^2(ϑ) dφ^2),
we obtain the Friedmann Eq. (<ref>) and the acceleration equation
2Ḣ+3H^2=-p_t/M_p^2,
where we have discarded beforehand the cosmological constant because it is negligible in comparison to the other fluids in the epoch of our interest. The continuity Eq. (<ref>) is obtained through the expression ∇^νT_μν=0.
To consider non-perfect fluids in the model, we use, in particular, the framework of Eckart's relativistic thermodynamic theory out of equilibrium, which introduces a small correction Δ T_μν to Eq. (<ref>) according to the expression Δ T_μν=-3Hξ(g_μν+u_μu_ν) <cit.>, where ξ is the bulk viscosity. In the latter, we have considered that the dissipative fluid does not experience heat flow or shear viscosity. Therefore, the energy-momentum tensor takes the form
T_μν=P_eff g_μν+(ρ+P_eff)u_μu_ν,
where P_eff=p_t+Π, with Π=-3Hξ the bulk viscous pressure, and the Eqs. (<ref>) and (<ref>) becomes
2Ḣ+3H^2=-(p_t+Π)/M_p^2,
ρ̇_t+3H(ρ_t+p_t+Π)=0,
while Eq. (<ref>) remains unchanged. Note that the bulk viscosity affects the evolution of the universe through the bulk viscous pressure. In particular, for an expanding universe, the expression Π=-3Hξ is always negative (ξ>0 in order to be consistent with the second law of thermodynamics <cit.>) and, therefore, the viscosity leads to an acceleration in the universe expansion, according to Eq. (<ref>).
We aim to study the effects that the bulk viscosity produces in the classical NSC scenario described in Section <ref>. For this purpose, we need to take into account that the division of the total energy budget of the universe into different components is merely a convention, since the energy-momentum tensor describes all the fluid components as a whole. Hence, the effective pressure for this NSC is P_eff=p_γ+p_ϕ+p_χ+Π, where we can make the identification P_eff,ϕ=p_ϕ+Π, i.e., this new field ϕ is the fluid that experiences dissipative processes during its cosmic evolution, and Eq. (<ref>) becomes
ρ̇_̇ϕ̇+3(ω+1)Hρ_ϕ=-Γ_ϕρ_ϕ-3HΠ ,
while the other equations of interest in the NSC scenario remain unchanged. In this sense, the bulk viscosity can depend, in particular, on the temperature and pressure of the dissipative fluid <cit.>. Therefore, a natural choice for the bulk viscosity of the dissipative fluid is a dependency proportional to a power of its energy density, ξ=ξ_0ρ_ϕ^1/2, where ξ_0=ξ̂_0M_p in order to obtain ξ̂_0 as a dimensionless parameter, a choice that has been widely investigated in the literature. Therefore, the latter expression takes the form
ρ̇_ϕ+3(ω+1)Hρ_ϕ=-Γ_ϕρ_ϕ+9M_pξ̂_̂0̂H^2ρ_ϕ^1/2.
Note that the parameterization chosen for the bulk viscosity has the advantage that, when the field ϕ fully decays into SM plasma, the dissipation becomes negligible and we recover the standard ΛCDM scenario without viscosity.
For the comparison between the classical NSC scenario and its bulk viscous counterpart, we numerically integrate Eqs. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), showing the results in the next subsection.
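In practice, the only change with respect to the coupled system sketched before is the viscous source term in the ρ_ϕ equation; a minimal Python version of the modified right-hand side (for ξ = ξ̂_0 M_p ρ_ϕ^1/2; argument names are illustrative) reads:

import numpy as np

def drho_phi_viscous(rho_phi, H, gamma_phi, omega, xi0_hat, M_P=2.48e18):
    """rho_phi evolution including the bulk viscous source 9 M_p xi0_hat H^2 rho_phi^(1/2)."""
    return (-3.0 * (1.0 + omega) * H * rho_phi
            - gamma_phi * rho_phi
            + 9.0 * M_P * xi0_hat * H**2 * np.sqrt(rho_phi))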
§.§ Comparison between scenarios
In this section, we will compare all the features discussed above between the NSC described in Section <ref> and the bulk viscous NSC described in Section <ref>.
In Fig. <ref>, we depict the evolution of ρ× (a/a_0)^4 as a function of the temperature T for both scenarios, considering the values κ=10^-3, T_end=7× 10^-3 GeV, m_χ=100 GeV, ξ̂_0=10^-2, and ω=0. The solid and dashed-dotted lines correspond to the NSC with bulk viscosity and the NSC, respectively, while the red and blue lines correspond to the new field ϕ and the radiation component, respectively. We also present the values of T_eq (cyan), T_c (grey), and T_end (green) for the NSC with bulk viscosity (dashed) and the NSC (dotted). From the figure, a boost in the production of the new field ϕ can be seen for the bulk viscous NSC in comparison with the NSC case, which leads to a higher increment in the energy density of radiation due to the decay of this viscous state and, therefore, a higher entropy injection into the SM bath.
The DM Yield production is depicted in Fig. <ref>, showing a comparison between the NSC (red line) and the NSC with bulk viscosity (blue line) for m_χ=100 GeV, ⟨σ v⟩=10^-11 GeV^-2, T_end=7× 10^-3 GeV, and κ=10^-3. The dashed and dotted lines correspond to x_eq (cyan) and x_c (grey) for the NSC with and without bulk viscosity, respectively. The green dashed line corresponds to x_end, which is the same for both models, while the magenta dashed-dotted line corresponds to x_fo. The current DM relic density is illustrated by the green band. In this case, the Freeze-Out occurs before T_eq, corresponding to RI, and from x_fo to x_c the behavior for both cases is the same in comparison with the ΛCDM model. Then, the entropy injection begins due to the decay of ϕ in both scenarios, which sets the final relic density in agreement with the current observation in the bulk viscous NSC scenario rather than the NSC scenario.
A comparison of the models' parameter space (κ, T_end) that can reproduce the observed DM relic density is shown in Fig. <ref> for the NSC with bulk viscosity (blue line) and the NSC (red line). We consider the particular case where ω=0, m_χ=100 GeV, and ⟨σ v⟩=10^-11 (GeV)^-2. When the DM freezes its number in RIII (higher values of κ), the two studied cases are similar and reproduce the DM relic density for almost the same parameters. On the other hand, if it is established in RII, the behavior is similar, but there is a deviation from the NSC scenario. As the value of κ becomes lower (RI), the difference between the NSC with and without bulk viscosity is significant. The latter highlights that, for a given value of T_end, large values of κ can reproduce the current DM relic density, similarly to the independence of κ in RIII. This feature can be explained from Eq. (<ref>), where we can see that the bulk viscosity has a positive contribution (ϕ particle production) and competes with the decays coming from Γ_ϕρ_ϕ. For higher values of κ, the contribution of the viscosity is almost negligible with respect to the decay term and, therefore, the cases with and without bulk viscosity are similar. For lower values of κ, the effects of viscosity are dominant over Γ_ϕρ_ϕ and show a different behavior from the NSC, explaining why for a certain value of T_end the value of κ is not relevant. This fact can be appreciated when the blue line crosses the red area, in which ρ_ϕ<ρ_γ in the NSC, meaning that the entropy injection from the ϕ decays is negligible. Nevertheless, the viscosity included in the new state makes significant imprints on the entropy injection for radiation. The latter can be understood considering that the right-hand side of Eq. (<ref>) can be rearranged as
ρ_ϕ(-Γ_ϕ+9ξ̂_0M_pH^2ρ_ϕ^-1/2)=ρ_ϕ(-Γ_ϕ+ν_ϕ),
where we defined ν_ϕ as the viscous term in Eq. (<ref>). This helps us to visualize the dominance between the decay and the viscosity. Fig. <ref> depicts the behavior of these two quantities for ω=0 and ξ̂_0=10^-2. The figure shows a benchmark for three points of the form (T_end, κ): RI for (10^-2 GeV, 10^-3) (dashed-dotted line), RII for (0.1 GeV, 0.1) (dashed line), and RIII for (2 GeV, 100) (solid line). The blue color palette represents the evolution of the ν_ϕ term and the horizontal lines (red color palette) represent the Γ_ϕ term. The vertical lines (purple color palette) are the values of T_end for the three points mentioned above. This illustrates why the parameter space is similar for higher values of κ: for a wide range of temperatures, the term Γ_ϕ dominates over ν_ϕ. As the values of κ and T_end diminish, there is a significant parameter space in which ν_ϕ dominates over Γ_ϕ. Moreover, for even lower values of κ at T_end=3× 10^-3 GeV, the solutions are very similar to the point evaluated in RI, explaining the large values of κ that reproduce the DM relic density for almost the same values of T_end. Note, from Figs. <ref>, <ref>, and <ref>, that the feature described applies only when RI exists (ω≤1/3).
As shown, the inclusion of viscosity changes the evolution of the energy density. This implies an increment in the entropy injection into radiation, leading to lower values of the parameter space in which to search for the DM relic density. This behavior can also be seen for different values of the barotropic index ω. For example, in Figs. <ref> and <ref>, we show the parameter space of the model that reproduces the current DM relic density for the NSC with (blue line) and without (red line) bulk viscosity for m_χ=100 GeV and ⟨σ v⟩=10^-11 GeV^-2. In the case of ω=-2/5 (Fig. <ref>), the parameter space is highly different between both scenarios because the viscous term dominates the behaviour of the NSC earlier, and for the particular value T_end∼8× 10^-1 GeV there is a vast space of κ-values that reproduce the current DM relic density. On the other hand, for ω=2/5 (Fig. <ref>), we do not see the aforementioned behaviour since RI does not exist. However, there is almost one order of magnitude of difference between both scenarios, allowing lower values of κ that can reproduce the current DM relic density.
Finally, in Fig. <ref>, we present the parameter space for the WIMP DM candidate, namely its mass (m_χ) and total thermal averaged annihilation cross-section (⟨σ v⟩), for the ΛCDM model (black line), the classical NSC (red line), and the NSC with bulk viscosity (blue line). We consider the particular case where ω=0, κ=10^-2, and T_end=7× 10^-3 GeV. Note that, if the DM is established in the ΛCDM model, then the total thermal averaged annihilation cross-section for the candidate must be ⟨σ v⟩_0=few × 10^-9 GeV^-2 in the range of mass considered. However, both NSC scenarios reach this limit only when the DM mass decreases. Therefore, values of the total thermal averaged annihilation cross-section higher than ⟨σ v⟩_0 are not allowed and are represented by the gray zone. The blue and red zones represent the condition ρ_ϕ<ρ_γ, for which the parameters that reproduce the DM relic density go closer to the ΛCDM case for the NSC with and without bulk viscosity, respectively. This behaviour is due to the low quantity of entropy injected into radiation. It is important to highlight that the inclusion of the bulk viscosity, as shown before, leads to a downward displacement in the values of the model parameters (κ, T_end) (see Figs. <ref> - <ref>).
§.§ Parameter space for dark matter
We have already presented the differences between the NSC with and without bulk viscosity, studying the entropy injection, the DM production, and the imprints on the parameter space that reproduces the current DM relic density. Now, we are interested in studying the NSC with bulk viscosity from two perspectives: (i) if we detect a DM signal with specific parameters (m_χ, ⟨σ v⟩), which cosmological model could accommodate those parameters? and (ii) for a specific model benchmark, what are the DM parameters that could reproduce the current observable relic density?
The first perspective is illustrated in Figs. <ref>, <ref>, and <ref>. In particular, in Fig. <ref>, we depict the parameter space (T_end,κ) that reproduces the current DM relic density for ω=0 and ⟨σ v⟩=10^-11 GeV^-2, considering three particular cases, namely, m_χ=100, 1000 and 10^4 GeV. The most important conclusion is that the curves of the allowed parameter space in (T_end, κ) are shifted to the right (and slightly down) when the DM mass is higher (and vice-versa), which also applies to the restricted areas ρ_ϕ<ρ_γ. On the other hand, in Fig. <ref>, we depict the same parameter space (T_end,κ) but for a fixed DM mass given by m_χ=100 GeV, considering three particular cases, namely, ⟨σ v⟩=10^-10, 10^-11, and 10^-12 GeV^-2. Again, we can see a shift of the curves when the values of the total thermal averaged annihilation cross-section ⟨σ v⟩ that generate the observed DM abundance are varied. In particular, the curves are displaced downward (and slightly to the right) if the value of ⟨σ v⟩ increases (and vice-versa). Note that, in this case, the restricted zone ρ_ϕ<ρ_γ is not affected, taking the same values for all the variations of ⟨σ v⟩. It is important to note that the region in which the model becomes independent of the values of κ (in the particular case when ω<1/3) can be displaced to avoid the BBN epoch, as it is possible to see in Fig. <ref>. In particular, the latter can be done for a fixed ⟨σ v⟩=10^-11 GeV^-2 value and a DM mass in the range m_χ>100 GeV, or for a fixed DM mass m_χ=100 GeV value and a total thermal averaged annihilation cross-section in the range ⟨σ v⟩_0>⟨σ v⟩>10^-11. In general, higher values of m_χ combined with lower values of ⟨σ v⟩ would open this window to explore. Finally, in Fig. <ref>, we depict the parameter space (T_end,κ) of the model that reproduces the current DM relic density for ω=0, m_χ=100 GeV, and ⟨σ v⟩=10^-11 GeV^-2, considering five cases, namely, ξ̂_0=10^-3, 5×10^-3, 10^-2, 2.5×10^-2, and 5×10^-2. For higher values of κ (RIII), there are no significant differences among the curves; meanwhile, for lower values of κ (RII and RI), the differences are significant when the value of ξ̂_0 increases. Also, when ξ̂_0→ 0, the curves tend to the classical NSC scenario because the dissipation is negligible and Eq. (<ref>) reduces to Eq. (<ref>). On the other hand, higher values of viscosity make the lower curvature more prominent, generating a shorter range of T_end for a large range of κ.
The second perspective is illustrated in Figs. <ref>, <ref>, and <ref>, where we study the free parameters of the DM for different choices of κ, T_end, and ξ̂_0, respectively. In the figures, the grey zone corresponds to the DM parameter space not allowed in the ΛCDM model and the black line to the parameter space that reproduces the current DM relic density in the same model. In particular, in Fig. <ref>, we depict the parameter space (m_χ, ⟨σ v⟩) that reproduces the current DM relic density for ω=0, ξ̂_0=10^-2, and T_end=7× 10^-3 GeV, considering three particular cases, namely, κ=10^2, 1, and 10^-2. From the figure, we can see that only for κ<1 does the region in which ρ_ϕ<ρ_γ at all times exist. If the values of κ increase, the allowed parameter space shifts the total thermal averaged annihilation cross-section to lower values. Also, when κ≪1, we recover the ΛCDM scenario. On the other hand, in Fig. <ref>, we depict the parameter space (m_χ, ⟨σ v⟩) that reproduces the current DM relic density for ω=0, ξ̂_0=10^-2, and κ=10^-2, considering three particular cases, namely, T_end=10^-2, 10^-1, and 1 GeV. From the figure, we can see that higher values of T_end drive the NSC with bulk viscosity towards the ΛCDM scenario, since the ϕ state decays rapidly and there is no significant entropy injection; meanwhile, lower values of T_end allow a large range of ⟨σ v⟩. Finally, in Fig. <ref>, we depict the parameter space (m_χ, ⟨σ v⟩) that reproduces the current DM relic density for ω=0, κ=10^-2, and T_end=7×10^-3 GeV, considering four particular cases, namely, ξ̂_0=5×10^-2, 2.5×10^-2, 10^-2, and 10^-3. From this figure we can see that for lower values of ξ̂_0 the model tends to the NSC scenario without bulk viscosity (see also Fig. <ref>); meanwhile, higher values of ξ̂_0 shift the DM parameters that can reproduce the relic density to the left, splitting the NSC with bulk viscosity from the classical NSC case and allowing lower values of ⟨σ v⟩. Again, the colored zones represent the cases when ρ_ϕ<ρ_γ at any time. Therefore, the dissipation of the ϕ field gives us new zones in which to search for the WIMP candidates. An important result can be noticed for different values of ξ̂_0. These values generate a displacement in the parameter space, which means that different values of ξ̂_0 could reproduce different NSC scenarios without bulk viscosity, i.e., a specific value of ξ̂_0 and ω can match the same parameter space as a classical NSC with a different ω.
For a further comparison, in Fig. <ref>, we consider different kinds of fluids for T_end=7× 10^-3 GeV and ξ̂_0=10^-2. To solve the differential equations, we consider the value κ=10^-2 for the barotropic indices ω=-1/3, -1/5, and 0; κ=10^2 for ω=1/3; and κ=10^4 for ω=1. The consideration of different values of κ is related to the evolution of the fluid itself, i.e., fluids with ω>1/3 dilute rapidly compared to radiation and, therefore, need a higher initial energy density to generate an effective entropy injection into radiation before the decay of ϕ. From the figure we can see that, for the curves with ω<0, the slopes tend to lean downward. On the other hand, the curves with ω>0 tend to lean their slopes upward. The case with ω=1 must be analyzed carefully because it enters into the forbidden ΛCDM zone, which translates into values of ⟨σ v⟩ slightly higher than ⟨σ v⟩_0 for a DM mass range of 10^-1 GeV ≤ m_χ≤ 10^3 GeV.
§ CONCLUSIONS
In this paper, we explored an extension of the classical NSC scenario in which the new field ϕ, which interacts with the radiation component in the early universe, experiences dissipative processes in the form of a bulk viscosity. Working in the framework of Eckart's theory, we studied the difference between both scenarios, considering a bulk viscosity proportional to the energy density of the field according to the expression ξ=ξ_0ρ_ϕ^1/2. In addition to being one of the most studied, this parameterization has the characteristic that, when the field ϕ fully decays into SM plasma, the dissipation becomes negligible and we recover the ΛCDM model without viscosity. Following this line, in the case that DM is discovered with its physics parameters (m_χ,⟨σ v⟩) reconstructed, it is imperative to determine whether those parameters are in agreement with the ΛCDM model or not. Hence, the inclusion of this novel NSC scenario brings to life parameters for WIMP DM candidates that were discarded in the ΛCDM model and in the classical NSC scenario, providing new regions or re-opening windows in which to search for them. We study this new NSC assuming the most studied interaction term, of the form Γ_ϕρ_ϕ, searching for the parameter space for DM production that leads to the current observable relic density.
As shown in Figs. <ref>, <ref>, and <ref>, the model parameters that reproduce the right DM relic abundance are very similar between the two scenarios for higher values of κ (RIII). On the contrary, when κ decreases, the case with viscosity shows clear differences, such as the independence of the model from the κ-values in RI, similar to that in RIII. This behavior is due solely to the inclusion of the viscosity, as shown in Fig. <ref>, in which lower values of ξ̂_0 tend to reproduce the NSC scenario without bulk viscosity and higher values of ξ̂_0 generate the κ-independent zone (for RI) sooner. The variation of the DM mass or ⟨σ v⟩ shifts the parameters to the left/right or up/down when the mentioned parameters are lower/higher, respectively. On the other hand, when the DM parameters are explored for specific benchmarks of the model, it can be seen from Fig. <ref> that new zones appear in which the current relic density is obtained, giving the possibility of reaching lower values of ⟨σ v⟩ for the range of mass studied. Nevertheless, for lower values of the DM mass, the NSC with and without bulk viscosity are similar. Fig. <ref> also shows that for lower values of ξ̂_0 the parameters are similar to the classical NSC case; meanwhile, for higher ξ̂_0, the slope of the parameter curves is more pronounced towards lower values of ⟨σ v⟩, i.e., the inclusion of higher values of ξ̂_0 provides lower values for the total thermal averaged annihilation cross-section. Finally, the variation of the model parameters κ/T_end shifts the current DM relic density to the left/right or up/down when the mentioned parameters are higher/lower, respectively.
Therefore, this paper is a further step in the study of WIMPs as DM candidates and a first step towards highlighting the imprints that bulk viscosity can leave on these particles and their relic density in the early universe through an NSC scenario.
§ ACKNOWLEDGMENTS
E.G. was funded by Vicerrectoría de Investigación y Desarrollo Tecnológico (VRIDT) at Universidad Católica del Norte (UCN) through Proyecto de Investigación Pro Fondecyt 2023, Resolución VRIDT N°076/2023. He also acknowledges the scientific support of Núcleo de Investigación No. 7 UCN-VRIDT 076/2020, Núcleo de Modelación y Simulación Científica (NMSC).
apsrev4-2
|
http://arxiv.org/abs/2409.02476v1 | 20240904064912 | Phase changes of the flow rate in the vertebral artery caused by debranching thoracic endovascular aortic repair: effects of flow path and local vessel stiffness on vertebral arterial pulsation | [
"Naoki Takeishia",
"Li Jialongb",
"Naoto Yokoyamac",
"Hisashi Tanakad",
"Takasumi Gotoe",
"Shigeo Wada"
] | physics.bio-ph | [
"physics.bio-ph",
"q-bio.TO"
] |
Phase changes of the flow rate in the VA caused by dTEVAR
N. Takeishi & N. Yokoyama et al.
Phase changes of the flow rate in the vertebral artery caused by debranching thoracic endovascular aortic repair: effects of flow path and local vessel stiffness on vertebral arterial pulsation
Naoki Takeishi [1] (ORCID 0000-0002-9568-8711; corresponding author, [email protected]),
Li Jialong [2],
Naoto Yokoyama [3] (ORCID 0000-0003-1460-1002; corresponding author, [email protected]),
Hisashi Tanaka [4],
Takasumi Goto [5],
Shigeo Wada [2]
[1] Department of Mechanical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
[2] Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, 560-8531, Osaka, Japan
[3] Department of Mechanical Engineering, Tokyo Denki University, 5 Senju-Asahi, Adachi, 120-8551, Tokyo, Japan
[4] Graduate School of Medicine, Division of Health Science, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
[5] Department of Cardiovascular Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, 565-0871, Japan
§ ABSTRACT
Despite numerous studies on cerebral arterial blood flow,
there has not yet been a comprehensive description of hemodynamics in patients undergoing debranching thoracic endovascular aortic repair (dTEVAR),
a promising surgical option for aortic arch aneurysms.
A phase delay of the flow rate in the left vertebral artery (LVA) in patients after dTEVAR compared to those before was experimentally observed,
while the phase in the right vertebral artery (RVA) remained almost the same before and after surgery.
Since this surgical intervention included stent graft implantation and extra-anatomical bypass,
it was expected that the intracranial hemodynamic changes due to dTEVAR were coupled with fluid flow and pulse waves in cerebral arteries.
To clarify this issue,
a one-dimensional (1D) model was used to numerically investigate the relative contributions of the two surgical factors (local vessel stiffness and flow path changes) to the phase difference of the VA flow rate.
The numerical results demonstrated a phase delay of flow rate in the LVA but not the RVA in postoperative patients undergoing dTEVAR relative to preoperative patients.
The results further showed that the primary factor affecting the phase delay of the flow rate in the LVA after surgery compared to that before was the bypass, i.e., alteration of flow path,
rather than stent grafting, i.e., the change in local vessel stiffness.
The numerical results provide insights into hemodynamics in postoperative patients undergoing dTEVAR,
as well as knowledge about therapeutic decisions.
Intracranial blood flow, vertebral artery, 1D analysis, nonlinear wave dynamics, Riemann invariants, debranching TEVAR, computational biomechanics
[
[
3 September 2024
====================
§ INTRODUCTION
Aortic aneurysm, which is morphologically defined as the focal dilation and structural degradation of the aorta,
is an asymptomatic disease,
and its rupture is a significant cause of death worldwide.
For instance, in the United States of America, aortic aneurysms and dissections cause over 10,000 deaths each year <cit.>.
To date, thoracic endovascular aortic repair (TEVAR) is the only effective treatment for aortic aneurysm.
TEVAR has been a particularly attractive surgical intervention because it is less invasive than conventional open surgical repair <cit.>.
The aorta is the largest conduit artery in the body, and due to its extraordinary ability to expand and contract, it serves as a reservoir that transforms the highly pressured and pulsatile heart output into a flow with moderate fluctuations <cit.>.
Deterioration of this buffering effect due to stiffening of the arterial wall is the main cause of hypertension <cit.>.
These dynamics of the aorta are also important for propagation of the pulse wave,
which has been recognized as an early indicator of the health status of the cardiovascular system <cit.>.
Thus, understanding the hemodynamic differences between pre- and postoperative TEVAR patients based on pulse-wave dynamics is of paramount importance, not only for surgical decision-making to achieve optimal clinical outcomes, but also for evaluating postoperative hemodynamics.
In TEVAR, the aortic arch is mapped according to the segmentation of the vertebral column or landing zone <cit.>,
where zone 0 ranges from the ascending aorta (AA) to the origin of the innominate artery (IA),
zone 1 ranges from the IA to the origin of the left common carotid artery (LCCA),
zone 2 ranges from the LCCA to the left subclavian artery (LSA),
and zones 3 and 4 follow in the longitudinal direction.
TEVAR for partial arch debranching,
known as debranching TEVAR (dTEVAR),
includes both stent graft implantation and extra-anatomical bypass,
where the stent graft is placed from zone 1 or 2 with a length of several millimeters depending on the patient.
This operation is most often performed in high-risk patients with thoracic aortic aneurysms (TAA) <cit.>.
Currently, expanded polytetrafluoroethylene (ePTFE) and polyethylene therephthalate (PET) are the two most popular graft materials in abdominal aortic aneurysm repair due to their remarkable biocompatibility and durability.
Additional supra-arch vessel reconstruction in dTEVAR is known to prevent cerebral infarction <cit.> and ischemia in the cerebral circulation <cit.>.
A previous experimental study has shown that total intracranial blood flow was preserved after dTEVAR,
with significant decrease of flow in the LVA and significant increase of flow in the RVA <cit.>.
The hemodynamic mechanism of such changes in posterior cerebral circulation after dTEVAR remains uncertain yet due the technical difficulty of measurements.
Thus, our primary concern here is to evaluate bilateral vertebral artery blood flows under pulsation in patients before and after 1dTEVAR with axillo-axillary artery (AxA-AxA) bypass,
and to clarify the hemodynamic mechanism associated with postoperative significant changes of posterior cerebral circulation.
Since dTEVAR includes extra-anatomical bypass,
it was expected that the intracranial hemodynamic changes due to dTEVAR would be coupled with fluid flow and pulse waves in the VA.
Therefore, the objective of this study was to numerically clarify whether there exist phase differences of the flow rate in the VA,
as a representative intracranial vessel,
between patients before and after dTEVAR,
especially in cases with one bypass from the right axillary artery (RAxA) to the left axillary artery (LAxA) and embolization at the branch point between a stent-grafted aortic arch and the LSA (see Figure <ref>).
If phase differences of the flow rate exist,
we clarify how the surgery contributes to them.
Hereafter, we refer to this operation as single-debranching TEVAR (1dTEVAR).
A one-dimensional (1D) model analysis was used to numerically investigate blood flow rates in the L/RVA in pre- and postoperative patients after 1dTEVAR.
In a human aorta with ∼10 mm radius and ∼0.3 mm thickness,
the wave speed, quantified by the traditional foot-to-foot wave velocity method, e.g., by <cit.>, is over 5 m/s <cit.>.
With such conventional experimental methodology, however, it is still difficult to quantify the effect of pulse waves in the VA on the intracranial blood supply.
Furthermore, assuming that one pulse has a duration of 1 second,
the ratio between the wave length and radius is 500, i.e., long wave.
Numerical analysis for blood flow with long-wave pulses still involves a heavy computational load even with a two-dimensional model,
although several attempts for this issue were reported, e.g., in <cit.>.
1D model analysis is one of the most effective and practical non-invasive solutions to investigate hemodynamics with long-wave pulses <cit.>.
Thus, for practical purposes, 1D modeling has been widely used in circulatory systems,
such as the systemic arteries <cit.>,
coronary circulation <cit.>,
the circle of Willis <cit.>,
and large vasculature, including venous systems coupled with contributions of the microcirculation modeled by lumped parameters <cit.>.
Recently, 1D modeling was applied to a problem involving both blood flow rate and blood oxygenation <cit.>.
<cit.> used patient-specific medical imaging data on the anatomy of the circle of Willis to perform a 1D model analysis of intracranial blood flow assuming a steady state,
coupled with three-dimensional tissue perfusion.
In this study,
the experimental evidence on differences in the flow rate in the L/RVA between patients before and after 1dTEVAR was reported.
Next, using a 1D numerical model, the phase difference of the flow rate in the L/RVA in pre- and postoperative patients was evaluated.
Simulations were also performed for different local vessel stiffnesses and lengths of vessels that had stiffened due to stent grafting.
§ MATERIALS AND METHODS
§.§ Subjects and measurements
This retrospective study was conducted in accordance with the guidelines of the Declaration of Helsinki.
All experimental protocols were approved by the institutional review board of Osaka University.
All subjects provided oral and written informed consent to participate in this study.
A prospective analysis of blood flow in the left and right internal carotid arteries (L/RICA) and the L/RVA in 9 patients before and after dTEVAR was performed between January 2015 and January 2020 at Osaka University Hospital,
where RAxA-LAxA bypass was performed (see also Figure <ref>B).
A ringed 8-mm ePTFE graft (FUSION, MAQUET Getinge Group, Japan) was used in all the debranching procedures (see also <cit.>).
Two-dimensional (2D) cine phase-contrast data was acquired with 3T magnetic resonance imaging (MRI) (MR750-3T, GE Healthcare, Waukesha, WI, USA).
Validations of the resolution of 3T MRI have been performed in in vivo studies, for instance, by <cit.> and <cit.>.
In all patients, MRI was performed within 6 months prior to the dTEVAR procedure and within 1 month afterward <cit.>.
The vascular tree models of both pre- and postoperative patients were reconstructed based on geometrical parameters of vessel diameter and length between branch points, as shown in Figure <ref>.
Arterial vascular geometries, including reference radius (r_0) and length (L) in a representative patient, were measured both pre- and postoperatively at Osaka University Hospital,
and the data are summarized in Table <ref>.
The diameters of the end terminals,
which are denoted by the vessel ID = χ^' in Figure <ref>(A),
were determined so that the time-average flow rate became similar to that obtained with experimental measurements by <cit.> or <cit.> (see Figure <ref>).
The lengths of the end terminals were uniformly set to 50 mm.
For simplicity, branch angles were uniformly set to 30 deg.
The effect of branch angle is mentioned in <ref>.
To represent arterial trees in patients after 1dTEVAR,
the Young's modulus was set to be almost 100 times larger in the stent-grafted area (vessel ID = 4; see Table <ref>) than in preoperative patients,
and a bypass was also created from the RAxA to the LAxA (vessel ID = 7 and 16), with embolization at the branch point from the stent-grafted thoracic aorta to the LSA (see Figure <ref>(B)).
In this study,
the bypass was considered to have a radius of 4.5 mm and a length of 262 mm,
and the stent-grafted artery was considered to have a radius of 12.5 mm and a length of 40 mm.
The diameters and lengths of end terminals were also the same as those in the preoperative vascular tree model.
§.§ Mathematical model and simulation
The 1D governing equations describe the conservation of mass and momentum:
∂ A/∂ t + ∂ Q/∂ x = 0,
∂ Q/∂ t + ∂/∂ x( αQ^2/A) + A/ρ∂ p/∂ x + K_R Q/A = 0,
where x is the axial direction,
A = A (x, t) is the area of a cross-section at x and at time t,
Q (= UA) is the mean volumetric flow rate across a section,
U = U (x, t) is the velocity of the fluid averaged across the section,
p = p (x, t) is the pressure,
ρ (= 1.05 × 10^3 kg/m^3) is the blood density,
α is the coefficient of the velocity profile,
and K_R (= 22 πV = 22 πμ/ρ) is the drag coefficient for the blood viscosity μ (= 4.5 × 10^-3 Pa·s).
A flat velocity profile was assumed, and set to be α = 1 <cit.>.
Assuming static equilibrium in the radial direction of a cylindrical tube or thin elastic shell,
one can derive a pressure relationship of the form <cit.>
p - p_ext = β( √(A) - √(A_0)),
and
β = √(π) h_0 E(x)/[ ( 1 - ν^2 ) A_0 ],
E(x) = (r_0/h_0) [ k_1 exp( k_2 r_0 ) + k_3 ],
where h_0 = h_0 (x), r_0 = r_0 (x), and A_0 = π r_0^2 are the vessel thickness, vessel radius, and sectional area, respectively, at the equilibrium state (p, Q) = (p_ext, 0),
E(x) is the Young's modulus,
p_ext (= 0) is the external pressure, assumed as a constant,
ν is the Poisson ratio, which is set to be ν = 0.5 for practical incompressibility, and
k_i (i = 1–3) are the coefficients set to k_1 = 2 × 10^6 kg/(s^2·m), k_2 = -2.253 × 10^3 1/m, and k_3 = 8.65 × 10^4 kg/(s^2·m) <cit.>.
These parameter values were also used in previous 1D blood flow analyses, e.g., by <cit.> and <cit.>.
Since the effect of h_0 does not appear in β owing to equations (<ref>) and (<ref>),
and since the vessel stiffness is simply characterized by β, which is determined mainly by the reference radius r_0 and parameters k_i,
the value of h_0 in each artery was simply assumed as one-tenth of the diameter 2 r_0, i.e., h_0 = 2 r_0/10.
Thus, the order of magnitude of the calculated Young's modulus, referring to the values of r_0 in Table <ref>, was O(E) = 10^-1 MPa,
which is consistent with that of aortic elasticity in conscious dogs <cit.>.
The Young's modulus of the bypass was set to be 100 times larger than that obtained with the reference radius (r_0 = 4.5 mm),
i.e., E_bypass = 43.3 MPa.
Given that a previous numerical analysis of ePTFE stent grafts used a Young's modulus of 55.2 MPa <cit.>,
we set the same order of magnitude of the Young's modulus in the stent grafted region, i.e., E_s = 10 MPa.
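As a quick numerical illustration of the tube law and the empirical Young's modulus introduced above, the following minimal Python sketch (ours, for illustration only; SI units and the parameter values quoted in the text are assumed) evaluates E, β and the corresponding reference wave speed √(β/(2ρ)) A_0^1/4 for a few reference radii; for r_0 = 4.5 mm it returns E ≈ 0.43 MPa, one hundredth of the quoted E_bypass = 43.3 MPa.

import numpy as np

# coefficients of the empirical wall model E(x) = (r0/h0) * (k1*exp(k2*r0) + k3)
k1, k2, k3 = 2.0e6, -2.253e3, 8.65e4    # kg/(s^2 m), 1/m, kg/(s^2 m)
rho, nu_poisson = 1.05e3, 0.5           # blood density [kg/m^3], Poisson ratio

def wall_parameters(r0):
    """Young's modulus E, stiffness beta and reference wave speed c0 for radius r0 [m],
    with wall thickness h0 = 2*r0/10 as assumed in the text."""
    h0 = 2.0 * r0 / 10.0
    A0 = np.pi * r0**2
    E = (r0 / h0) * (k1 * np.exp(k2 * r0) + k3)
    beta = np.sqrt(np.pi) * h0 * E / ((1.0 - nu_poisson**2) * A0)
    c0 = np.sqrt(beta / (2.0 * rho)) * A0**0.25   # wave speed at A = A0
    return E, beta, c0

for r0 in (1.0e-3, 4.5e-3, 12.5e-3):    # 1 mm vessel, 4.5 mm bypass, 12.5 mm stented aorta
    E, beta, c0 = wall_parameters(r0)
    print(f"r0 = {r0 * 1e3:5.1f} mm:  E = {E / 1e6:.2f} MPa,  c0 = {c0:.2f} m/s")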
The governing equations (<ref>) can be written as an advection equation:
∂_t v̌ + 𝐉(v̌) ·∂_x v̌ = b̌ →∂_t v̌ + ∂_x F̌ = b̌,
where the variable v̌,
advection term F̌,
source term b̌,
and Jacobian J̌ are written as
v̌ = [ Q; A ], b̌ = [ -K_R Q/A; 0 ], F̌ = [ Q^2/A + (β/(3ρ)) A^3/2; Q ],
𝐉(v̌) = [ 2Q/A, -(Q/A)^2 + (β/(2ρ)) A^1/2; 1, 0 ].
The flow and pulse wave are integrated over time by the Lax-Wendroff method with Δ t = 10^-5 s from t^n to t^n+1 = t^n + Δ t.
The precise descriptions are given in the Appendix.
To capture the internal pressure profile in the heart during a cardiac cycle,
wherein vascular pumping is accelerated in the systolic phase and attenuated in the diastolic phase <cit.>,
the inlet pressure p_in(t) is given as <cit.>
p_in(t) = p_0 + (p_a/2) [ 1 + (1/(1-ϵ_s)) ( sin(πT̃_s) + ϵ_s sin(3πT̃_s) ) ]   for 0 ≤ t < T/4,
p_in(t) = p_0 + (p_a/2) [ 1 - (1/(1-ϵ_d)) ( sin(πT̃_d) + ϵ_d sin(3πT̃_d) ) ]   for T/4 ≤ t < T,
and
T̃_s = (t - T/8)/(T/4), T̃_d = (t - 5T/8)/(3T/4),
where p_a (= 4333 Pa) is the amplitude from the base pressure p_0 (= 10666 Pa),
T (= 1 s) is the wave period,
and ϵ_s or ϵ_d is the waveform parameter (black line in Figure <ref>).
Hereafter, the subscripts s and d denote the systolic and diastolic phases, respectively.
In this study, ϵ_s = 0.1 and ϵ_d = 0 <cit.> were used,
so that the derivatives of p(t) up to fourth order are continuous.
The waveform reflects a fast expansion during the systolic phase 0 ≤ t < T/4 and a slow contraction during the diastolic phase T/4 ≤ t < T.
The parameters in our simulations and their values are shown in Table <ref>.
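For reference, the prescribed inlet pressure can be evaluated directly; the short Python sketch below (ours, purely illustrative) implements the two-branch waveform above with the parameter values just quoted (p_0 = 10666 Pa, p_a = 4333 Pa, T = 1 s, ϵ_s = 0.1, ϵ_d = 0); it returns p_0 at t = 0 and the peak value p_0 + p_a at t = T/4.

import numpy as np

p0, pa = 10666.0, 4333.0            # base pressure and amplitude [Pa]
T, eps_s, eps_d = 1.0, 0.1, 0.0     # period [s] and waveform parameters

def p_in(t):
    """Inlet pressure waveform; t is taken modulo the period T."""
    t = t % T
    if t < T / 4.0:                                  # systolic phase: fast expansion
        ts = (t - T / 8.0) / (T / 4.0)
        return p0 + 0.5 * pa * (1.0 + (np.sin(np.pi * ts)
                                       + eps_s * np.sin(3.0 * np.pi * ts)) / (1.0 - eps_s))
    td = (t - 5.0 * T / 8.0) / (3.0 * T / 4.0)       # diastolic phase: slow contraction
    return p0 + 0.5 * pa * (1.0 - (np.sin(np.pi * td)
                                   + eps_d * np.sin(3.0 * np.pi * td)) / (1.0 - eps_d))

print(p_in(0.0), p_in(T / 4.0), p_in(0.5 * T))       # p0, p0 + pa, intermediate value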
Figure <ref> shows pressures and calculated flow rates during a period T (= 1 s).
The model pressure given in equations (<ref>) and (<ref>) captures well the profile of human aortic pressure (Figure 3a in the work by <cit.>).
A no-reflection condition was applied for each outlet by a backward Riemann invariant W_- = 0.
A more precise description of the methodology is presented in the Appendix.
§ RESULTS
§.§ Measurements of flow rate in pre-and postoperative patients
The flow rates in both the L/RVA in 9 patients before and after 1dTEVAR during a single heartbeat were measured by 2D cine phase-contrast MRI.
The mean flow rates are shown in Figure <ref>,
where the heartbeat was divided into 12 parts and the flow rate at the i-th phase Q_i was normalized by the total flow rate Q_all (= ∑_i^12 Q_i) in both the L/RVA during a period T.
The first phase (i.e., 1/12) was defined when the pulse wave was detected by an accelerometer (GE Healthcare) with the tip of a forefinger.
Compared to the peak in the second phase of the cardiac cycle (i.e., 2/12) in the LVA of preoperative patients,
the peak in postoperative patients was delayed to the third phase (i.e., 3/12).
Thus, a 1/12-period (∼8%) phase delay in the LVA was observed between before and after surgery (Figure <ref>(A)).
On the other hand,
the peak of the flow rate ratio in the RVA (the first phase, 1/12) remained almost the same before and after surgery (Figure <ref>(B)).
Since the difference of flow rate ratio in the RVA between the first and second phases of the cardiac cycle was very small,
we concluded that the first phase of the cardiac cycle was the time point of the peak of the flow rate ratio.
§.§ Model validation
Using a preoperative vascular tree model (Figure <ref>),
the mean flow rates Q_mean in eight different arteries were investigated,
specifically AA, VA (= LVA + RVA), L/RICA, and L/RSA.
The calculated mean flow rates were normalized by the mean inlet flow rate Q̅_in = (1/T)∫_0^T Q_in dt,
and the flow rate ratio Q_RATIO = Q_mean/Q̅_in in each vessel was compared with data in a previous experimental study by <cit.> (Figure <ref>A).
The outlet diameters were determined so that the errors in the flow rate ratio in each vessel were uniformly less than 5%, i.e. | Q_RATIO^sim/Q_RATIO^exp - 1| < 0.05,
where the superscripts “sim” and “exp” denote the simulation and experiment, respectively.
Using the same preoperative vascular tree model,
the flow rate ratio between the L/RICA and L/RVA was calculated.
The flow rate ratio in patients after 1dTEVAR was also calculated with the postoperative vascular tree model (Figure <ref>B).
The diameters of the end terminals were the same as those in the preoperative vascular tree model.
The calculated flow rate ratio in each artery was compared with previous experimental measurements by <cit.> (Figures <ref>B and <ref>C).
In the experiments,
although there were no large differences before and after surgery in the flow rate ratio in the L/RICA or the LVA,
the flow rate ratio in the RVA increased after surgery.
The numerical results qualitatively agree with these experimental measurements.
§.§ 1D model analysis of the flow rate in the VA in pre- and postoperative patients
To gain insight into the mechanism of the phase difference in the LVA and the lack of such a difference in the RVA (Figure <ref>),
1D numerical simulations were performed using arterial vascular tree models before and after surgery (Figure <ref>).
Model verifications are shown in the Appendix (see Figures <ref>).
The time history of the pressure and flow rates in the L/RVA during a cardiac cycle T are shown in Figure <ref>(A) and <ref>(B), respectively,
where the results before and after surgery are superposed.
Data are shown after the pressure and flow rates have reached the stable periodic phase (t ≥ 2 s).
As described regarding the experimental measurements of flow rates (Figure <ref>(A)),
the numerical results of the flow rate waveform in the LVA were delayed, by 6.8% of a single period,
which is similar to the experimental measurements (Figure <ref>(A)).
As in experimental measurements, the phase difference was also quantified by the time point observed at the maximum flow rate.
On the other hand,
the numerical results of the flow rate waveform in the RVA were only slightly delayed, by 0.84% of a single period (Figure <ref>(B)).
We concluded that our numerical results of the phase difference in the L/RVA are consistent with the experimental measurements (Figure <ref>(B)).
§.§ Effects of local vessel stiffness and flow path
To clarify whether the flow rate waveforms in the LVA (Figure <ref>(A)) can be caused by the local vessel stiffness, flow path changes, or both,
the effect of both of these factors on the flow rate in the L/RVA was investigated with two different additional vascular tree models.
The Young's modulus in the stent-grafted region was increased by 100 times postoperatively compared to the preoperative state without the bypass or embolization, the so-called “stent model",
while the flow path change was the same as that in the postoperative vascular tree model (i.e., new bypass and embolization) and the Young's modulus was the same as that in the preoperative model, the so-called “flow path model".
The flow rate and pressure waveforms during a cardiac cycle T are shown in Figure <ref>(A) and <ref>(B).
The flow rate waveforms obtained with the stent model were different from those in the postoperative vascular tree model in both the L/RVA.
On the other hand,
the flow rate waveforms obtained with the flow path model collapsed with those obtained with the preoperative vascular tree model.
These agreements or disagreements were also commonly observed in the pressure waveforms.
The phase difference δ of the maximum flow rate in both the L/RVA between the preoperative vascular tree model and the three aforementioned models (stent, flow path, and postoperative) are summarized in Figure <ref>(C),
where the results were normalized by T.
Changing the flow path, as in the flow path model, decreased the ratio of the phase difference of the flow rate δ/T by only 7.1% in the LVA and 1.3% in the RVA relative to the ratios obtained in the postoperative model.
On the other hand, changing the local Young's modulus, as in the stent model, caused quite small phase differences, i.e., O(δ/T) = 10^-2%.
§.§ Effects of stent grafting and bypass angle
The effects of the local vessel stiffness (Young's modulus) and stent-grafted length on the phase difference in the L/RVA were further investigated using the postoperative vascular tree model.
Simulations were performed for different Young's modulus valuses E/E_s (= 0.01, 0.1, 1, 10) in the stent-grafted artery with standard length L_0 = 40 mm,
where E_s (= 10 MPa) was the original Young's modulus in the stent-grafted region.
The ratio of the phase difference δ/T remained almost the same (less than 0.2%) in both the L/RVA.
Even when the length of the stent-grafted region increased to L/L_0 = 1, 2.25, 3.5, 4.75, 6,
the results of δ/T remained the same (less than 0.1%; data not shown),
where L_0 (= 40 mm) was the original length of the stent-grafted region.
The phase delay in the LVA was also insensitive to the bypass angle θ,
defined at the branch point between the vessel and bypass,
especially for θ≥ 60 deg as shown in Figure <ref>.
Note that the relative difference of the mean flow rate in the RVA obtained with the reference angle (30 deg) and that obtained with 120 deg at the branch angle of the bypass was smaller than 1% (data not shown).
The results were obtained with the postoperative vascular tree model with a standard stent length of L_0 = 40 mm and a Young's modulus of E_s = 10 MPa.
These results, including those shown in Figure <ref>(C), suggest that the phase delay of the flow rate in the VA between before and after surgery (Figure <ref>) arises mostly from the alteration of the flow path, i.e., by the new bypass and embolization rather than by local vessel stiffness due to stent grafting.
Note that the effect of the stiffness (or Young's modulus E) of the straight tube on the fluid velocity is described in the Appendix,
and the results showed that the pulse-wave speed c increased with E with the ratio of velocity O(c/U) > 10^2 (Figure <ref>D).
§ DISCUSSION
A previous clinical study by <cit.> showed hemodynamic changes in blood flow through the L/RVA after 1dTEVAR, while postoperative total intracranial blood flow was almost the same as that measured preoperatively.
This might be due to systemic hemodynamic compensation causing cerebral blood flow to be strictly maintained through cerebral autoregulation <cit.>.
Despite the maintained flow mass in patients after 1dTEVAR,
experimental measurements demonstrated phase differences of the flow rate in the LVA but not in the RVA between pre- and postoperative patients undergoing 1dTEVAR (Figure <ref>).
It is expected that the aforementioned hemodynamic compensation and the phase differences in the VA in postoperative patients might be due to two mechanical factors: structural alterations in flow paths due to the new bypass and embolization, and an increase of local vessel stiffness due to stent grafting.
However, much is still unknown about this matter, in particular about the relative contribution of these two factors to the phase of the flow rate in the VA between pre- and postoperative patients undergoing 1dTEVAR.
To explore this issue,
a 1D model was used to numerically investigate flow rates in the VA, both before and after 1dTEVAR.
Using a postoperative arterial tree model (Figure <ref>(B)),
the numerical results demonstrated a phase delay in the LVA compared to preoperative models, as shown in Figure <ref>(A).
Stent grafting,
which is characterized by a locally increasing Young's modulus,
contributed negligibly to the phase delay in the LVA (see <ref>).
Thus, the experimentally observed phase delay (Figure <ref>(A)) was mainly caused by alteration of the flow path, i.e., by the new bypass and embolization, rather by local vessel stiffness due to stent grafting (Figure <ref>).
In this simulation,
branch angles were uniformly set to 30 deg for simplicity.
A previous experimental study showed that the energy loss at bifurcations was very small as reported in <cit.>.
This is because the branch angle only affects the total pressure continuity (equations <ref>(b, c) and <ref>(b, c), see the Appendix),
and the order of magnitude of the static pressure term p is much greater than those of the other terms.
Indeed,
the relative difference of the mean flow rate in the RVA obtained with the reference angle (30 deg) and that obtained with 120 deg at the branch angle of the bypass is smaller than 1% (data not shown).
Thus, at least in this model, the phase delay in the LVA was insensitive to the bypass angle θ.
We did not consider postoperative shape deformation and diameter changes due to device placement.
In the future we will perform systematic analyses of the effect of postoperative vascular configurations on the phase change.
In this study, the Young's modulus in model vasculatures was also approximated using equation (<ref>).
Referring to previous numerical assessments of the mechanical behavior of ePTFE,
we set the same order of magnitude of the Young's modulus for the stent-grafted area (i.e., E_s =10 MPa),
which was approximately 100 times larger than that of vessels with similar diameter.
Considering different Young's moduli of PET stent from O(10^0) MPa <cit.> to O(10^3) MPa <cit.> in previous numerical studies,
the simulations were performed for different orders of magnitude of Young's modulus in an ePTFE-stent-grafted area (E/E_s = 0.01-10).
However, the ratio of the phase difference δ/T remained almost the same (less than 0.2%) in both the L/RVA.
Since model parameters in the bifurcation model (γ_1 and γ_2 in equations <ref>(b, c) and <ref>(b, c), see the Appendix) and in Young's modulus (k_1, k_2, and k_3 in equation (<ref>)) were fixed in this study,
further precise analysis of the effects of bifurcation angles and Young's modulus of the stent should be performed experimentally to confirm whether those mechanical factors have less impact on the phase differences.
The velocity profile was implicitly expressed as flat,
derived from Newtonian and laminar fluid flow.
However, it is expected that the blood flow profile, especially in a real artery, is much more complex,
and therefore it is usually modeled as plug flow due to its turbulent nature and cellular dynamics.
Since the Reynolds number in the aorta,
estimated based on the mean flux between systolic and diastolic phases,
is over 4 × 10^3,
the flow should be turbulent.
Furthermore, the blood velocity profile is also affected by frequency-dependent inertia.
The ratio between the transient inertia force and the viscous force can be estimated by the Womersley number Wo = r_0 (ω/V)^1/2≈ 1.2-12,
where r_0 (= 1-10 mm) is the radius of the artery (see Table <ref>),
ω (= 2 π f) is the angular frequency,
f (= 1/T) is the cardiac frequency that is roughly estimated as 1 Hz,
and V (= μ/ρ≈ 4.3 × 10^-6 m^2/s) the kinematic viscosity of the blood.
Thus, it is expected that a more rigorous velocity flow profile can reproduce unsteady hemodynamics in patients before and after dTEVAR.
In this study, however, we focused on explaining the experimentally observed phase delay of the mean volumetric flow rate in the LVA seen in postoperative 1dTEVAR patients relative to that before surgery,
and thus, the flow profile predominantly affecting the unit mean volumetric flow rate should not change the calculated phase delay.
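As a back-of-envelope check of the Womersley numbers quoted above, the following short Python computation (ours) reproduces the range Wo ≈ 1.2-12 for arterial radii of 1-10 mm, a cardiac frequency of 1 Hz and a kinematic viscosity μ/ρ ≈ 4.3 × 10^-6 m^2/s.

import numpy as np
nu = 4.5e-3 / 1.05e3                        # kinematic viscosity mu/rho [m^2/s]
omega = 2.0 * np.pi * 1.0                   # angular frequency for f = 1 Hz
for r0 in (1.0e-3, 10.0e-3):                # arterial radii 1 mm and 10 mm
    print(f"r0 = {r0 * 1e3:4.1f} mm: Wo = {r0 * np.sqrt(omega / nu):.1f}")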
In this simulation,
we simply modeled arterial trees from the heart (or LV) to eight end terminals,
whose diameters and lengths were determined so that the time average flow rate became similar to that obtained in experimental measurements by <cit.> or <cit.> (Figure <ref>).
Even with the simplest boundary condition at the end terminals (no-reflection condition, i.e., W_- = 0),
our numerical model reproduced the phase delay of the flow rate in the LVA without the phase difference in the RVA in postoperative patients,
and also clarified that the phase delay arose mostly from alteration of the flow path (Figure <ref>).
More precise boundary conditions for both the inlet (heart) and the outlets will lead to patient-specific analyses preoperatively and to the evaluation of hemodynamics postoperatively, topics that we will address in the future.
Depending on the areas of TAA,
different numbers of bypasses (1d, 2d, and 3d) can be selected.
If phase delays of flow rate are present in the LVA of postoperative patients who have undergone different surgical operations,
it is expected that the phase differences between before and after surgery are caused by the new bypasses.
Quantitative analyses of hemodynamic changes caused by these surgeries should be performed in future studies.
Modification of the aforementioned model factors, e.g., boundary conditions, will potentially reproduce hemodynamics in patients who undergo these various surgeries.
The numerical results based on pulse-wave dynamics provide fundamental knowledge regarding hemodynamic changes between pre- and postoperative patients undergoing dTEVAR,
and will be helpful not only in surgical decision-making for optimal clinical outcomes but also in evaluating hemodynamics after surgery.
§ CONCLUSION
A 1D model was used to numerically investigate blood flow rates, with the goal of explaining experimental evidence on the phase delay of the flow rate in the LVA but not in the RVA after 1dTEVAR relative to before.
The numerical model can reproduce the flow distribution in the major arteries from the heart,
and can capture the flow rate ratio in the L/RVA in both pre- and postoperative patients.
The numerical results showed that the phase delay was mainly caused by the bypass, i.e., by alteration of the flow path, rather than by stent grafting, i.e., the change of local vessel stiffness.
Bypass angles and the effects of the length and Young's modulus of the stent were also investigated,
but all were insensitive to the phase delay.
We hope that our numerical results will provide fundamental knowledge about therapeutic decisions for dTEVAR.
§ ETHICAL APPROVAL
Not required.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENTS
The presented study was partially funded by Daicel Corporation.
Last but not least, N.T. and N.Y. thank Mr. Tatsuki Shimada for his assistance in the preparation of this work.
§ APPENDIX
§.§ A1. Methodology
The variables of the ith-vessel segment at specific time t^n (v̌_i^n and F̌_i^n) in equation (<ref>) are described as:
v̌_i^n = [ Q_i^n; A_i^n ], F̌_i^n = [ (Q_i^n)^2/A_i^n + (β/(3ρ)) (A_i^n)^3/2; Q_i^n ].
The discretized form of equation (<ref>) is obtained with the Lax-Wendroff method:
v̌_i^n+1 = v̌_i^n - (1/(2Δ c)) ( F̌_i+1^n - F̌_i-1^n )
+ (1/(2(Δ c)^2)) [ 𝐉( (v̌_i^n + v̌_i+1^n)/2 ) ( F̌_i+1^n - F̌_i^n ) - 𝐉( (v̌_i^n + v̌_i-1^n)/2 ) ( F̌_i^n - F̌_i-1^n ) ]
= v̌_i^n + (1/(2Δ c)) [ F̌_i-1^n + F̌_i^n + (1/Δ c) 𝐉( (v̌_i^n + v̌_i-1^n)/2 ) ( F̌_i-1^n - F̌_i^n )
- { F̌_i^n + F̌_i+1^n + (1/Δ c) 𝐉( (v̌_i^n + v̌_i+1^n)/2 ) ( F̌_i^n - F̌_i+1^n ) } ],
where Δ c = Δ x/Δ t, and Δ x is the segment length of the vessel.
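For readers wishing to reproduce the scheme, the interior update above can be coded compactly; the following is a minimal numpy sketch (ours) of one Lax-Wendroff step on a single uniform segment. Boundary points, the source term b̌ (which is also omitted in the discretised form above) and the junction coupling described next are not included.

import numpy as np

def flux(Q, A, beta, rho):
    """F(v): [Q^2/A + beta/(3 rho) * A^(3/2), Q]."""
    return np.array([Q**2 / A + beta / (3.0 * rho) * A**1.5, Q])

def jacobian(Q, A, beta, rho):
    """J(v): [[2Q/A, -(Q/A)^2 + beta/(2 rho) * sqrt(A)], [1, 0]]."""
    return np.array([[2.0 * Q / A, -(Q / A)**2 + beta / (2.0 * rho) * np.sqrt(A)],
                     [1.0, 0.0]])

def lax_wendroff_step(Q, A, beta, rho, dx, dt):
    """One interior time step; Q and A are 1D arrays over the grid points of a segment."""
    dc = dx / dt
    F = np.stack([flux(q, a, beta, rho) for q, a in zip(Q, A)])
    Qn, An = Q.copy(), A.copy()
    for i in range(1, len(Q) - 1):
        Jp = jacobian(0.5 * (Q[i] + Q[i + 1]), 0.5 * (A[i] + A[i + 1]), beta, rho)
        Jm = jacobian(0.5 * (Q[i] + Q[i - 1]), 0.5 * (A[i] + A[i - 1]), beta, rho)
        v = np.array([Q[i], A[i]])
        v_new = (v - (F[i + 1] - F[i - 1]) / (2.0 * dc)
                 + (Jp @ (F[i + 1] - F[i]) - Jm @ (F[i] - F[i - 1])) / (2.0 * dc**2))
        Qn[i], An[i] = v_new
    return Qn, An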
The boundary values of Q and A are determined by the Riemann invariants (W_+ and W_-),
which represent a forward- and backward-traveling wave at speeds λ_+ and λ_- as eigen values of the Jacobian 𝐉.
Riemann invariants are the characteristic variables of the following hyperbolic system transformed from equation (<ref>):
∂_t W̌ + λ̌·∂_x W̌ = 0,
where W̌ = (W_+, W_-)^T,
and λ̌ = (λ_+, λ_-)^T.
By choosing the reference conditions (A = A_0, U = 0),
we obtain the solutions to system (<ref>):
W_± = Q/A ± 4 √(β/(2ρ)) ( A^1/4 - A_0^1/4 )
= U ± 4 ( c - c_0),
and λ_± = U ± c,
where c = √(β/(2 ρ)) A^1/4 is the wave speed,
and c_0 is the wave speed at A = A_0.
Flow conditions along the lines (1 → 2) (see Figure <ref>(A)) are described by mass conservation and total pressure continuity <cit.>:
Q_1 - Q_2 = 0,
p_1 + ρ U_1^2/2 - ( p_2 + ρ U_2^2/2 ) = 0,
and W_1^n+1(L) and W_2^n+1(0) are derived from values at the previous time step at the distal end (denoted by L) and proximal end (denoted by 0) of an artery by extrapolating the outgoing Riemann invariants along the characteristic lines <cit.>:
W_1, +^n+1 (L) = W_1, +^n( L - λ_1, +Δ t ),
W_2, -^n+1 (0) = W_2, -^n( 0 - λ_2, -Δ t ).
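In practice, once the outgoing invariant has been extrapolated as above and the incoming one is fixed by the boundary condition (for instance W_- = 0 at a no-reflection outlet), the boundary values of Q and A follow by inverting the definition of the Riemann invariants. A minimal Python sketch of this inversion (ours, not taken from the original implementation) reads:

import numpy as np

def boundary_state(W_plus, W_minus, beta, rho, A0):
    """Invert W_+/- = U +/- 4(c - c0), with c = sqrt(beta/(2 rho)) * A**0.25."""
    c0 = np.sqrt(beta / (2.0 * rho)) * A0**0.25
    U = 0.5 * (W_plus + W_minus)
    c = c0 + (W_plus - W_minus) / 8.0
    A = (c / np.sqrt(beta / (2.0 * rho)))**4      # invert the wave-speed relation
    return U * A, A                               # boundary values of Q and A

# example: no-reflection outlet, W_+ extrapolated from the interior of the segment
# Q_out, A_out = boundary_state(W_plus_extrapolated, 0.0, beta, rho, A0)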
Flow conditions at the bifurcations (3 → 1 + 2) (Figure <ref>(B)) are described as <cit.>:
-Q_1 - Q_2 + Q_3 = 0,
p_3 + ρ U_3^2/2 - sign(Q_3) f_m (Q_3/A_3) - [ p_1 + ρ U_1^2/2 + sign(Q_1) f_t( Q_1/A_1 + A_3/2) ] = 0,
p_3 + ρ U_3^2/2 - sign(Q_3) f_m (Q_3/A_3) - [ p_2 + ρ U_2^2/2 + sign(Q_2) f_t( Q_2/A_2 + A_3/2) ] = 0,
W_3, +^n+1 (L) - W_3, +^n (L - λ_3, +Δ t) = 0,
W_1, -^n+1 (0) - W_1, -^n (0 - λ_1, -Δ t) = 0,
W_2, -^n+1 (0) - W_2, -^n (0 - λ_2, -Δ t) = 0,
where
f_m (Q/A) = γ_1 ρ (Q/A)^2,
f_t (Q/A) = γ_2 ρ (Q/A)^2 √(2 ( 1 - cosα)).
In this study, we set γ_1 = 0, γ_2 = 2, and α = π/6.
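To make explicit how the six bifurcation conditions above are used, the sketch below (ours, simplified; it assumes the tube law p = β(√A - √(A_0)) for each connected vessel and the loss terms f_m, f_t with the constants just given) assembles the residual vector whose root yields the junction values (Q_1,A_1,Q_2,A_2,Q_3,A_3); a standard root finder such as scipy.optimize.fsolve can then be applied at every time step.

import numpy as np

gamma1, gamma2, alpha = 0.0, 2.0, np.pi / 6.0
rho = 1.05e3

def p_of(A, beta, A0):                     # tube law
    return beta * (np.sqrt(A) - np.sqrt(A0))

def riemann(Q, A, sgn, beta, A0):          # W_+ (sgn=+1) or W_- (sgn=-1)
    c0 = np.sqrt(beta / (2.0 * rho)) * A0**0.25
    c = np.sqrt(beta / (2.0 * rho)) * A**0.25
    return Q / A + sgn * 4.0 * (c - c0)

def bifurcation_residual(x, W3_out, W1_in, W2_in, betas, A0s):
    """Residuals for a parent vessel 3 splitting into branches 1 and 2;
    x = (Q1, A1, Q2, A2, Q3, A3), the W arguments are the extrapolated invariants."""
    Q1, A1, Q2, A2, Q3, A3 = x
    b1, b2, b3 = betas
    A01, A02, A03 = A0s
    U1, U2, U3 = Q1 / A1, Q2 / A2, Q3 / A3
    f_m = gamma1 * rho * U3**2
    loss = np.sqrt(2.0 * (1.0 - np.cos(alpha)))
    f_t1 = gamma2 * rho * (Q1 / (0.5 * (A1 + A3)))**2 * loss
    f_t2 = gamma2 * rho * (Q2 / (0.5 * (A2 + A3)))**2 * loss
    P1, P2, P3 = p_of(A1, b1, A01), p_of(A2, b2, A02), p_of(A3, b3, A03)
    ptot3 = P3 + 0.5 * rho * U3**2 - np.sign(Q3) * f_m
    return [
        -Q1 - Q2 + Q3,                                                   # mass conservation
        ptot3 - (P1 + 0.5 * rho * U1**2 + np.sign(Q1) * f_t1),           # total pressure 3 -> 1
        ptot3 - (P2 + 0.5 * rho * U2**2 + np.sign(Q2) * f_t2),           # total pressure 3 -> 2
        riemann(Q3, A3, +1.0, b3, A03) - W3_out,                         # outgoing W_+ of the parent
        riemann(Q1, A1, -1.0, b1, A01) - W1_in,                          # incoming W_- of branch 1
        riemann(Q2, A2, -1.0, b2, A02) - W2_in,                          # incoming W_- of branch 2
    ]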
At the confluences (1 + 2 → 3) (Figure <ref>(C)), we have:
Q_1 + Q_2 - Q_3 = 0,
p_1 + ρ U_1^2/2 - sign(Q_1) f_t( Q_1/A_1 + A_3/2) - [ p_3 + ρ U_3^2/2 + sign(Q_3) f_m (Q_3/A_3) ] = 0,
p_2 + ρ U_2^2/2 - sign(Q_2) f_t( Q_2/A_2 + A_3/2) - [ p_3 + ρ U_3^2/2 + sign(Q_3) f_m (Q_3/A_3) ] = 0,
W_1, +^n+1 (L) - W_1, +^n (L - λ_1, +Δ t) = 0,
W_2, +^n+1 (L) - W_2, +^n (L - λ_2, +Δ t) = 0,
W_3, -^n+1 (0) - W_3, -^n (0 - λ_3, -Δ t) = 0.
§.§ A2. Weakly non-linear form
To verify the calculated solutions,
the numerical results are compared with those obtained with a weak non-linear equation instead of the theoretical formula.
When Q = Q_0 + dQ and A = A_0 + dA,
Equation (<ref>) can be written as:
W_± = (Q_0 + dQ)/(A_0 + dA) ± 4 c_0 ( ( 1 + dA/A_0)^1/4 - 1 )
≈ (Q_0/A_0)( 1 - dA/A_0) + dQ/A_0 ± c_0 dA/A_0.
When Q_0 = 0, we finally have a weak non-linear form of the Riemann invariants (<ref>)
W_± ≈dQ/A_0± c_0 dA/A_0.
Equation (<ref>) predicts that wave propagations, including reflective wave W_-, are negligible for the small flow rate dQ ≈ 0 and small vessel contraction A ≈ A_0.
Indeed, the maximum values of |W_±| in the middle point of the straight tube almost collapse on those obtained with p_a = 0 when the maximum inlet flow rate Q_in^max decreases (Figure <ref>(B)),
where the tube length and reference radius are set to L = 1 m and r_0 = 10^-3 m, respectively,
and the Young's modulus is E_0 = 0.1 MPa,
referring to the value of aortic elasticity in conscious dogs <cit.>.
The inlet pressure wave form (<ref>) and no-reflection condition for the outlet W_- = 0 are also considered.
The small maximum inlet inflow rate Q_in^max is controlled by the inlet pressure amplitude p_a
(see Figure <ref>(A)).
Furthermore, the aforementioned weakly non-linear formalization simultaneously allows equation (<ref>) to be approximated as λ_± ≈ ± c_0,
and the results are shown in Figure <ref>(C).
We also confirmed that the average fluid velocity increased with tube stiffness, as shown in Figure <ref>(D).
cas-model2-names
|
http://arxiv.org/abs/2409.03031v1 | 20240904185034 | Debugging with Open-Source Large Language Models: An Evaluation | [
"Yacine Majdoub",
"Eya Ben Charrada"
] | cs.SE | [
"cs.SE"
] |
Debugging with Open-Source Large Language Models: An Evaluation
University of Gabes, Tunisia
[email protected]
University of Gabes, Tunisia
[email protected]
§ ABSTRACT
Large language models have shown good potential in supporting software development tasks. This is why more and more developers turn to LLMs (e.g. ChatGPT) to support them in fixing their buggy code. While this can save time and effort, many companies prohibit it due to strict code sharing policies. To address this, companies can run open-source LLMs locally. However, until now there has not been much research evaluating the performance of open-source large language models in debugging. This work is a preliminary evaluation of the capabilities of open-source LLMs in fixing buggy code. The evaluation covers five open-source large language models and uses the benchmark DebugBench, which includes more than 4000 buggy code instances written in Python, Java and C++. Open-source LLMs achieved scores ranging from 43.9% to 66.6%, with DeepSeek-Coder achieving the best score for all three programming languages.
[
Eya Ben Charrada
====================
§ INTRODUCTION
"I'd spend an hour figuring out what exactly goes wrong, then five minutes writing the code to fix it, and then half an hour testing the whole thing. That's just over 5% coding vs. almost 95% non-coding."[text taken from an answer on stackoverflow regarding the time spent debugging https://softwareengineering.stackexchange.com/a/93323]
Debugging is known to be time consuming and frustrating. Therefore it is not surprising that developers are turning to large language models to help them solve their problems. In a study with practitioners, Khojah et al. <cit.> found that software engineers often turn to ChatGPT for assistance in various software engineering tasks.
Recent research showed promising results in using LLMs for software Engineering tasks in general and for debugging in particular. For example, LLMs were able to perform well in bug reproduction <cit.>, fault localisation <cit.> and program repair <cit.>.
Despite these advantages, using current state of the art LLMs such as
ChatGPT can be inappropriate for practitioners due to code sharing policies. In fact, most companies consider their code to be private and don't want it to be sent to LLMs run by third parties.
A solution to this problem would be to run an open source LLM locally.
So far, there have been very few assessments of the debugging capabilities of open-source large language models. In fact, earlier works mostly focus on evaluating code generation capabilities, for which many benchmarks exist, such as the famous OpenAI HumanEval <cit.> and its descendants (e.g. HumanEval+ <cit.> and Multilingual HumanEval <cit.>) or Google's MBPP <cit.>.
The goal of this work is to evaluate and compare the capabilities of open-source large language models in performing debugging tasks. We would like to answer the following two research questions:
* RQ1: How do open source LLMs perform in debugging? To answer this question, we use benchmarking to evaluate five open-source LLMs. The benchmark we used includes more than 4000 buggy code instances in Python, C++ and Java.
* RQ2: How does the performance of open-source LLMs in code generation impact their performance in debugging? We compare the scores that the LLMs obtained for debugging with the scores that they achieved for coding as evaluated by the HumanEval Benchmark.
Our evaluation suggests that, although less capable than the most advanced closed-source models (e.g. GPT-4), some open-source models were able to achieve decent results given their relatively small size. For instance, DeepSeek-Coder-Instruct, which has only 33B parameters, achieved a score above 63% in all three programming languages. We also found that, except for DeepSeek-Coder, all models that achieved higher scores on HumanEval also got better scores in debugging.
The contributions of this work are:
* We conduct an empirical study that evaluates the debugging capabilities of open source Large Language Models using a large benchmark that includes a few thousands of buggy code instances
* We compare the debugging capabilities of the open source LLMs to their coding capabilities as evaluated by the HumanEval benchmark
* We provide an extensive discussion of the strengths and limitations of current debugging and coding benchmarks
§ OPEN SOURCE LARGE LANGUAGE MODELS
There are many open-source LLMs available in the market. Although nearly[All models we found used the transformer architecture.] all models use the transformer architecture, they differ in their capabilities due to various factors such as model size, quality and volume of training data, and fine-tuning methods.
For this evaluation, we selected five reputed models. Four of them are code models, while the last one is a general-purpose model.
§.§ Code models
We selected the coding models that achieved the best results on the HumanEval benchmark <cit.>. HumanEval is a code generation benchmark released by OpenAI that includes 164 coding tasks.
We present each of the coding models in the following paragraphs.
§.§.§ Code Llama
Code Llama <cit.>
is a family of large language models that is specialised for code, based on Llama2. Code Llama models have been created by fine-tuning the general language model Llama2 using code specific datasets. The developers of Codellama found that for a given budget, fine-tuning the generic Llama2 to generate code outperforms the same architecture trained on code only.
The training was done with publicly available code (mostly near-deduplicated dataset), which includes 8% of natural language text related to code such as discussions or questions and answers including code snippets. In addition to supporting several natural languages, the Code Llama models are trained to handle long contexts of up to 100K tokens.
Meta AI released Codellama in three main variants namely (1) Code Llama, which is the foundation model (2) Code Llama - Python, which is specialized for python code generation and Code Llama - Instruct, which is fine-tuned to follow human instructions. All models are available in four sizes: 7B, 13B, 34B and 70B.
For this evaluation, we use the Code Llama - Instruct 70B variant. This variant was trained using 1 trillion tokens and achieved the best performance on HumanEval with a 67.8% pass@1.
§.§.§ Phind-Codellama
Phind-Codellama <cit.> is a fine-tuned version of Code Llama 34B. The first version of Phind-Codellama was fine-tuned on a dataset of nearly 80,000 programming problems and their corresponding solutions. The second version is Phind-CodeLlama-34-v2, which was initialised from the first version, was trained on 1.5B additional tokens. Although Phind-Codellama has smaller number of parameters compared to the larger Code Llama 70B, it was able to achive relatively high results on HumanEval. For instance Phind-CodeLlama-34B-v2 achieved 73.8% pass@1 on HumanEval.
§.§.§ WizardCoder
WizardCoder <cit.> is a family of LLMs that use the Evol-Instruct method <cit.>, an instruction fine tuning method that makes the code instructions more complex and which enhances the performance of coding models.
WizardCoder is available in five different sizes ranging from 1B to 33B parameters. The 15B version of WizardCoder <cit.>, the result of a collaboration between researchers from Microsoft and researchers from Hong Kong Baptist University, is a fine-tuned version of StarCoder <cit.>, and it achieved 57.3% pass@1 on HumanEval. The 33B version is trained from the DeepSeek-Coder-base model and achieved 79.9% pass@1 on HumanEval <cit.>. In this evaluation we use WizardCoder-33B-V1.1.
§.§.§ Deepseek-Coder
DeepSeek-Coder <cit.> is a series of code models trained on a dataset comprising 2 trillion tokens from 87 programming languages. The dataset is composed of 87% code and 13% natural language in English and Chinese. The model is available in various sizes, from 1.3B to 33B parameters.
The DeepSeek-Coder-Instruct variant is an enhancement of the base model that was fine-tuned with an additional 2 billion tokens of instruction data. This improved the model's ability to execute coding tasks specified through human instructions. DeepSeek-Coder-Base 33B achieved 50.3% pass@1 on HumanEval, while DeepSeek-Coder-Instruct-33B achieved 69.2% pass@1 on HumanEval. We used DeepSeek-Coder-Instruct-33B in our evaluation.
§.§ General-Purpose model: Llama 3
The last model we chose is Llama3, a general purpose LLM. We selected it because it is the best open source LLM available for now, and we wanted to compare its capabilities to the code-specialized large language models.
LLama3, which is developed by Meta AI, was released in two sizes: 8B and 70B each with a pre-trained and instruction finetuned version.
Data quality was a major focus for Llama 3: the model has been pre-trained on over 15 trillion high-quality tokens from publicly available sources, seven times more than Llama 2. The training data incorporates four times more coding data to boost capabilities in that domain, and over 5% of the data covers 30+ languages beyond English. The dataset was filtered using a series of filtering pipelines: heuristic filtering, NSFW detection, deduplication, and quality classifiers.
The model also utilizes a more efficient tokenizer compared to the previous models of Meta AI, and it uses grouped query attention (GQA) to improve inference efficiency and to handle sequences of up to 8,192 tokens.
Llama3 8B achieved 62.2% pass@1 on HumanEval while Llama3 70B achieved 81.7% pass@1 <cit.>. For this evaluation, we used Llama3 70B.
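All of the scores quoted in this section are pass@1 values on HumanEval. For reference, the short Python sketch below (ours, not part of any benchmark code) implements the standard unbiased pass@k estimator popularised by the HumanEval paper: for a task with n generated samples of which c pass the unit tests, pass@k = 1 - C(n-c,k)/C(n,k), averaged over all tasks; for k = 1 and a single sample per task this reduces to the fraction of tasks solved.

import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate for one task: n samples generated, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# example: 20 samples per task, per-task correct counts for three tasks
correct = [3, 0, 20]
print(np.mean([pass_at_k(20, c, 1) for c in correct]))    # mean pass@1 over the tasks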
|
http://arxiv.org/abs/2409.02582v1 | 20240904100446 | Legendrian Hopf links in L(p,1) | [
"Rima Chatterjee",
"Hansjörg Geiges",
"Sinem Onaran"
] | math.SG | [
"math.SG",
"math.GT",
"57K33, 57K10, 57K40, 57R25"
] |
Rima Chatterjee
Hansjörg Geiges
Mathematisches Institut, Universität zu Köln,
Weyertal 86–90, 50931 Köln, Germany
[email protected]
[email protected]
Sinem Onaran
Department of Mathematics, Hacettepe University,
06800 Beytepe-Ankara, Turkey
[email protected]
R. C. and H. G. are partially supported by the SFB/TRR 191
`Symplectic Structures
in Geometry, Algebra and Dynamics', funded by the DFG
(Project-ID 281071066 - TRR 191); S. O. is partially supported by TÜBİTAK Grant No. 119F411.
R. C. and S. O. would like to thank the Max Planck Institute for
Mathematics in Bonn for its hospitality.
§ ABSTRACT
We classify Legendrian realisations, up to coarse equivalence,
of the Hopf link in the lens spaces
L(p,1) with any contact structure.
[2020]57K33, 57K10, 57K40, 57R25
Legendrian Hopf links in L(p,1)
[
===============================
§ INTRODUCTION
By the (positive) Hopf link L_0⊔ L_1 in the lens space
L(p,1) we mean the (ordered, oriented) link
formed by the two rational unknots given by the spines of the genus 1
Heegaard decomposition, oriented in such a way that their
rational linking number equals 1/p. By the discussion in
<cit.>, this characterisation determines the link up to
isotopy and a simultaneous change of orientations (which can be effected
by an orientation-preserving diffeomorphism of L(p,1));
the key result in the background is the uniqueness of the
genus 1 Heegaard splitting up to isotopy.
The lens space L(p,1) with its natural orientation as a quotient
of S^3 can be realised by a single (-p)-surgery on an unknot. The Hopf link
in this surgery diagram is shown in Figure <ref>;
the link is positive when both L_0 and L_1 are oriented as meridians of
the surgery curve in the same way. Indeed, if we label meridian and longitude
(given by the Seifert framing) of the surgery curve by μ and λ,
the (-p)-surgery amounts to replacing a tubular neighbourhood of
the surgery curve by a solid torus V_1=S^1× D^2, with meridian
μ_1={*}×∂ D^2 and longitude λ_1=S^1×{1}
glued as follows:
μ_1⟼ pμ-λ, λ_1⟼μ.
Then L_1=μ=λ_1 may be thought of as the spine of V_1,
and L_0 as the spine of the complementary solid torus V_0,
with meridian μ_0=λ and longitude λ_0=μ.
A Seifert surface Σ_0
for pL_0 is made up of the radial surface in V_0 between
the p-fold covered spine pL_0 and the curve
pλ_0-μ_0=pμ-λ=μ_1,
and a (positively oriented) meridional disc in V_1. The spine L_1
intersects this disc positively in a single point.
In this paper we extend the classification of Legendrian Hopf links
in S^3, see <cit.>, to Legendrian Hopf links in L(p,1)
for any p∈, with any contact structure. For a brief survey
of the rather scant known results on the classification of Legendrian links
(with at least two components) we refer to <cit.>.
Our result is the first classification of Legendrian links in a
3-manifold other than S^3.
Legendrian links in overtwisted contact manifolds are either
loose
(the link complement is still overtwisted) or exceptional
(the link complement is tight). In the exceptional case, the
link complement may or may not contain Giroux torsion.
In contrast with <cit.>, we only consider the case of vanishing
Giroux torsion (what in <cit.> we called
strongly exceptional);
the case of Giroux torsion adds numerical complexity
but no significant insight. Beware that the individual components
of an exceptional Legendrian link may well be loose.
Our classification of the Legendrian realisations
(up to coarse equivalence, i.e. up to a contactomorphism of the
ambient manifold) of the Hopf link
in L(p,1) is in terms of the rational classical invariants as defined
in <cit.>.
The rational Thurston–Bennequin invariant of the L_i, in any contact
structure on L(p,1), is of the form tb_ℚ(L_i) = t_i+1/p
with t_i ∈ ℤ.
By symmetry it suffices to show this for L_0, where we can use
the above description of a Seifert surface Σ_0 for pL_0.
The contact framing of L_0 is given by a curve t_0μ_0+λ_0
with t_0 ∈ ℤ. With the identifications in the surgery
description of L(p,1) we have
t_0μ_0+λ_0 = t_0λ+μ = t_0(pλ_1-μ_1)+λ_1.
We can push this curve on ∂ V_1 a little into V_1,
and then the intersection number with the meridional disc bounded by
μ_1 (as part of Σ_0) equals t_0p+1. To obtain
tb_ℚ(L_0), this number has to be divided by p.
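The bookkeeping in this computation is just a change of basis on the boundary torus; the short Python sketch below (ours, purely illustrative) records a curve by its coefficients in the basis (μ_1,λ_1), using μ = λ_1 and λ = pλ_1 - μ_1, and reads off the intersection number with the meridional disc of V_1 as the λ_1-coefficient, recovering t_0p+1.

def framing_intersection(p, t0):
    """lambda_1-coefficient of the contact framing curve mu + t0*lambda of L_0,
    rewritten in the basis (mu_1, lambda_1) of the reglued solid torus V_1."""
    mu = (0, 1)            # mu = lambda_1, in (mu_1, lambda_1)-coordinates
    lam = (-1, p)          # lambda = p*lambda_1 - mu_1
    curve = (mu[0] + t0 * lam[0], mu[1] + t0 * lam[1])
    return curve[1]        # equals t0*p + 1

p, t0 = 5, -3
print(framing_intersection(p, t0), t0 * p + 1)    # both -14; tb_Q = (t0*p+1)/p = t0 + 1/p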
The rational rotation
number rot_ℚ is well defined (i.e. independent of a choice
of rational Seifert surface), since the Euler class of any
contact structure on L(p,1) is a torsion class.
We also include the d_3-invariant of
the overtwisted contact structures. This invariant does not play a direct
role in the classification, but in some cases we need it to determine
whether the link components are loose or exceptional, when we appeal to
the classification of exceptional Legendrian rational unknots in L(p,1)
achieved in <cit.>. The notation ξ_d stands
for an overtwisted contact structure with d_3=d. For a complete homotopical
classification of the overtwisted contact structures one also needs
to know their Euler class (or, in the presence of 2-torsion, a finer
d_2-invariant). The Euler class can be computed from
the surgery diagrams we present, using the recipe of
<cit.>. These computations, which are rather involved, are omitted here.
In the cases where we use contact cuts to find the Legendrian realisations,
the homotopical classification of the contact structure in question is
more straightforward.
We use the notation for any one of the tight contact
structures on L(p,1). The homotopical data of the relevant
contact structures containing Legendrian realisations of the Hopf
link can easily be read off from Figure <ref>. These tight
structures are Stein fillable and hence have zero Giroux torsion.
The essence of the following main theorem is that the classical invariants
suffice to classify the Legendrian realisations of the Hopf link in
L(p,1), so the Hopf link is what is called Legendrian simple.
Up to coarse equivalence, the Legendrian realisations of the
Hopf link in L(p,1), p≥ 2, with zero Giroux torsion in
the complement, are as follows. In all cases, the classical
invariants determine the Legendrian realisation.
(a) In L(p,1) with a tight contact structure there is a unique realisation for any
combination of classical invariants (tb_ℚ(L_i), rot_ℚ(L_i)) =
(t_i+1/p, r_i-k/p),
i=0,1, in the range t_0,t_1<0 and
k ∈{-p+2,-p+4,…,p-4,p-2},
r_i∈{t_i+1,t_i+3,…,-t_i-3,-t_i-1}.
For fixed values of t_0,t_1<0 this gives a total
of t_0t_1(p-1) realisations.
(b) For t_0=0 and t_1≤ 0 there are
|t_1-1| exceptional realisations, all living in
(L(p,1),ξ_d) with d=(3-p)/4, made up of an exceptional
component L_0 with classical invariants (tb_ℚ(L_0), rot_ℚ(L_0)) =
(1/p, 0), and a loose component L_1 with
invariants (tb_ℚ(L_1), rot_ℚ(L_1)) = (t_1+1/p, r_1), where
r_1∈{t_1,t_1+2,…,-t_1-2,-t_1}.
(c) For t_0=0 and t_1>0 the exceptional
realisations are as follows; in all cases both components are loose.
(c1) For t_1=1 there are two realisations,
with classical invariants
(tb_ℚ(L_0), rot_ℚ(L_0)) = (1/p, ±2/p)
and
(tb_ℚ(L_1), rot_ℚ(L_1)) = (1+1/p, ±(1+2/p)).
They live in (L(p,1),ξ_d) with d=(3p-p^2-4)/(4p).
(c2) For t_1=2, there are three exceptional
realisations. Two of them live in (L(p,1),ξ_d),
where d=(3p-p^2-4)/(4p), and have classical invariants
(tb_ℚ(L_0), rot_ℚ(L_0)) as in (c1)
and
(tb_ℚ(L_1), rot_ℚ(L_1)) = (2+1/p, ±(2+2/p)).
The third one lives in (L(p,1),ξ_d) with d=(7-p)/4,
and the invariants are (tb_ℚ(L_0), rot_ℚ(L_0)) = (1/p, 0)
and (tb_ℚ(L_1), rot_ℚ(L_1)) = (2+1/p, 0).
(c3) For t_1>2, there are four exceptional
realisations. The classical invariants are listed in Table
<ref> in Section <ref>.
(d) For t_0,t_1>0 the exceptional
realisations, with both components loose, are as follows:
(d1) For t_0=t_1=1, there are exactly p+3
exceptional realisations. They live in (L(p,1),ξ_d),
where d=(7p-k^2)/(4p). They have classical invariants
(tb_ℚ(L_0), rot_ℚ(L_0)) =
(1+1/p, k/p)
and
(tb_ℚ(L_1), rot_ℚ(L_1))
= (1+1/p, k/p),
where
k ∈{-p-2,-p,…, p,p+2}.
(d2) For t_0=1 and t_1>1, there are 2(p+2) exceptional
realisations, whose classical invariants are given in Table <ref>.
(d3) For t_0>1 and t_1>1, there are 4(p+1) exceptional
realisations, whose classical invariants are given in
Table <ref>.
(e) For t_0<0 and t_1>0 the exceptional
realisations are as follows. Here L_0 is loose; L_1 is exceptional.
(e1) For t_1=1, there are exactly
|t_0|(p+1) exceptional realisations. They live in
(L(p,1),ξ_d) where d=(3p-k^2)/(4p), with
k ∈{-p, -p+2,…, p-2, p}.
The classical invariants are
(tb_ℚ(L_0), rot_ℚ(L_0)) =
(t_0+1/p, r_0-k/p)
and
(tb_ℚ(L_1), rot_ℚ(L_1)) =
(1+1/p, -k/p),
where
r_0∈{t_0+1,t_0+3,…, -t_0-3, -t_0-1}.
(e2) For t_0<0 and t_1>1, there are 2|t_0|p exceptional
realisations, whose classical invariants are given in
Section <ref>.
The proof of Theorem <ref> largely follows the strategy
used in <cit.> for Legendrian Hopf links in S^3:
find an upper bound on the number of exceptional realisations by enumerating
the tight contact structures on the link complement, and then
show that this bound is attained by giving explicit realisations.
Most of these explicit realisations are in terms
of surgery diagrams, but as in <cit.> there is a
case where a surgery presentation eludes us, and we have to use
contact cuts instead. This case (c1), which is being treated in
Section <ref>, contains most of the conceptually novel aspects
in the present paper. In contrast with <cit.>, we no longer
have a global frame for the contact structure; therefore, the computation
of rotation numbers requires the explicit description of rational
Seifert surfaces, and frames over them, in the context of topological cuts.
The discussion in Section <ref> should prove
useful in a more general analysis of the contact topology of lens
spaces via contact cuts.
One has to be a little careful when comparing this result with
the classification of Legendrian Hopf links in S^3 in <cit.>.
For instance, in case (c1), the contact cut description we use
in Section <ref>
corresponds for p=1 to the interpretation of S^3 as lens space
L(1,1), whereas in <cit.> we read S^3 as L(1,0). In case (a),
the surgery diagram for S^3 is empty, and the discussion in the present
paper only makes sense for p≥ 2. In most other cases, however, one
obtains the correct results for S^3 by allowing p=1 in
Theorem <ref>.
§ UPPER BOUND FOR EXCEPTIONAL REALISATIONS
In this section we determine the number of tight contact structures on the
complement of a Legendrian Hopf link L_0⊔ L_1 in
L(p,1), in terms of the
Thurston–Bennequin invariant of the link components.
We start with S^3, decomposed into
two solid tori V_0,V_1 forming a Hopf link (in the traditional
sense), and a thickened torus T^2×[0,1], i.e.
S^3=V_0∪_∂ V_0=T^2×{0} T^2× [0,1]
∪_T^2×{1}=∂ V_1 V_1.
Write μ_i,λ_i for meridian and longitude
on ∂ V_i, and define the gluing in the decomposition above by
μ_0 = S^1×{*}×{0},
λ_0 = {*}× S^1×{0},
μ_1 = {*}× S^1×{1},
λ_1 = S^1×{*}×{1}.
As in the introduction, we think of the Hopf link in L(p,1)
as being obtained by (-p)-surgery along the spine of V_1.
Slightly changing the notation from the introduction, we write
μ_1',λ_1' for meridian and longitude of the solid torus
V_1' reglued in place of V_1, so that the gluing
prescription becomes
μ_1'⟼ pμ_1-λ_1, λ_1'⟼μ_1.
Given a Legendrian Hopf link L_0⊔ L_1 in L(p,1)
with _(L_i)=_i+1/p, we can choose V_0,V_1'
(sic!) as standard neighbourhoods of L_0,L_1, respectively.
This means that ∂ V_0 is a convex surface with two dividing
curves of slope 1/_0 with respect to (μ_0,λ_0);
the slope of ∂ V_1' is 1/_1 with respect to
(μ_1',λ_1').
Now, on T^2×[0,1] we measure slopes on the T^2-factor with
respect to (μ_0,λ_0).
So we are dealing with a contact structure
on T^2×[0,1] with convex boundary, two dividing curves
on either boundary component, of slope s_0=1/_0 on
T^2×{0}, and of slope s_1=-p-1/_1 on T^2×{1}, since
_1μ_1'+λ_1'=_1(pλ_0-μ_0)+λ_0=
(_1p+1)λ_0-_1μ_0.
Recall that a contact structure on T^2×[0,1]
with these boundary conditions is called minimally twisting
if every convex torus parallel to the boundary has slope between s_1
and s_0.
The following proposition covers all possible pairs (_0,_1),
possibly after exchanging the roles of L_0 and L_1.
Up to an isotopy fixing the boundary, the number N=N(_0,_1)
of tight, minimally twisting contact structures
on T^2×[0,1] with convex boundary, two dividing curves
on either boundary component of slope s_0=1/_0 and
s_1=-p-1/_1, respectively, is as follows.
(a) If _0,_1<0, we have N=_0_1(p-1).
(b) If _0=0 and _1≤ 0, then N=|_1-1|.
(c) If _0=0, _1≥ 1:
(c1) N(0,1)=2.
(c2) N(0,2)=3.
(c3) For all _1>2, we have N(0,_1)=4.
(d) If _0,_1>0:
(d1) N(1,1)=p+3.
(d2) For all _1>1, we have N(1,_1)=2(p+2).
(d3) For all _0,_1>1, we have N=4(p+1).
(e) If _0<0, _1>0:
(e1) For all _0<0 and _1>1, we have
N=2|_0|p.
(e2) For all _0<0, we have
N(_0,1)=|_0|(p+1).
So that we can use the classification of tight contact structures on
T^2× [0,1] due to Giroux <cit.> and Honda <cit.>,
we normalise the slopes by applying
an element of Diff^+(T^2)≅(2,) to T^2×[0,1]
such that the slope on T^2×{0} becomes s_0'=-1,
and on T^2×{1} we have s_1'≤ -1. If s_1'<-1,
the number N is found from a continued fraction expansion
s_1'=r_0-1/(r_1-1/(r_2-⋯-1/r_k))
=:[r_0,…,r_k]
with all r_i<-1 as
N=|(r_0+1)⋯(r_k-1+1)r_k|,
see <cit.>.
The vector [ x; y ] stands for
the curve xμ_0+yλ_0, with slope y/x.
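The passage above is effectively an algorithm: normalise the slopes, expand s_1' as a negative continued fraction, and multiply. The following short Python sketch (our own illustration, not part of the argument; the function names are ours) implements this recipe.

```python
from fractions import Fraction
from math import floor, prod

def neg_cfrac(x):
    """Negative continued fraction expansion x = r0 - 1/(r1 - 1/(... - 1/rk))."""
    terms = []
    while True:
        f = floor(x)
        if f == x:                 # x is an integer: last coefficient
            terms.append(int(x))
            return terms
        terms.append(f)
        x = -1 / (x - f)           # from x = f - 1/x' we get x' = -1/(x - f)

def count_tight(terms):
    """N = |(r0 + 1)(r1 + 1) ... (r_{k-1} + 1) * rk|."""
    return abs(prod(t + 1 for t in terms[:-1]) * terms[-1])

# example: -31/19 = [-2, -3, -4, -2], giving N = |(-1)(-2)(-3)(-2)| = 12
assert neg_cfrac(Fraction(-31, 19)) == [-2, -3, -4, -2]
assert count_tight(neg_cfrac(Fraction(-31, 19))) == 12
```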
§.§ Case (a)
We have
[ 0 -1; 1 -_0+1 ][ _0; 1 ]
=[ -1; 1 ]
and
[ 0 -1; 1 -_0+1 ][ -_1; p_1+1 ]
=[ -p_1-1; -_1-p_0_1+p_1-_0+1 ],
which implies
s_1'=_0-1+_1/(p_1+1)=
_0-1-1/(-p-1/_1).
For _1<-1 we read this as [_0-1,-p,_1]; for
_1=-1 and p>2, as [_0-1,-p+1]; for _1=-1
and p=2, as [_0].
In all three cases this gives N=|_0_1(p-1)|.
§.§ Case (b)
For _0=0 and _1≤ 0 we use the transformation
[ -p -1; p+1 1 ][ 0; 1 ]
=[ -1; 1 ]
and
[ -p -1; p+1 1 ][ -_1; p_1+1 ]
=[ -1; 1-_1 ].
This gives s_1'=-1+_1, whence N=|_1-1|.
§.§ Case (c)
For t_0=0 and t_1≥ 2 we have
[ -(p+1) -1; p+2 1 ][ 0; 1 ]
=[ -1; 1 ]
and
[ -(p+1) -1; p+2 1 ][ -_1; p_1+1 ]
=[ _1-1; -2_1+1 ].
A continued fraction expansion for s_1'=(-2_1+1)/(_1-1) is given by
[-3,-2,-2,…, -2__1-2].
Hence, for _1>2 we get
N=|(-2)(-1)⋯ (-1)(-2)|=4; for _1=2 we have N=3.
For _1=1 we work instead with the transformation
[ -2p -1; 1+2p 1 ][ 0; 1 ]
=[ -1; 1 ]
and
[ -2p -1; 1+2p 1 ][ -1; p+1 ]
=[ p-1; -p ],
which gives
s_1'=-p/(p-1)=
[-2,-2,…, -2_p-1],
whence N=2.
§.§ Case (d)
For _0>0 and _1>0 we compute
[ -p -1+p_0; 1+p 1-(1+p)_0 ][ _0; 1 ]
=[ -1; 1 ]
and
[ -p -1+p_0; 1+p 1-(1+p)_0 ][ -_1; p_1+1 ]
=[ -1+p_0+p^2_0_1; 1-_0-_1-p_0-p_0_1-p^2_0_1 ].
Hence,
s_1'=-1-(_0+_1+p_0_1)/(-1+p_0+p^2_0_1).
For _0=1, the continued fraction expansion is
s_1'=[-2,-2,…, -2_p-1, -(p+3),
-2,-2,…, -2__1-1].
Thus, N=p+3 for _1=1, and N=2(p+2) for _1>1.
For _0,_1>1 we have the continued fraction expansion
s_1'=[-2,-2,… ,-2_p-1, -3,
-2,-2,…, -2__0-2, -(p+2),
-2, -2,…, -2__1-1],
whence N=4(p+1).
§.§ Case (e)
For _0<0 and _1>0 we use the transformation
[ -1 _0-1; 2 1-2_0 ][ _0; 1 ]
=[ -1; 1 ]
and
[ -1 _0-1; 2 1-2_0 ][ -_1; p_1+1 ]
=[ -1+_0+_1-p_1+p_0_1; 1-2_0-2_1+p_1-2p_0_1 ].
Then
s_1'=-2-(1+p_1)/(-1+_0+_1-p_1+p_0_1)
=[-2, _0-1,-(p+1),-2,-2,…,-2__1-1].
For _1>1 this yields N=2|_0|p; for
_1=1 we get N=|_0|(p+1).
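As a sanity check on the case-by-case computations above, the helpers neg_cfrac and count_tight from the sketch in the previous section reproduce the claimed counts for sample parameter values (we write t0, t1 for the two twisting parameters below); the slopes are the ones derived in the respective cases. This is our own verification, assuming the two helpers are in scope.

```python
from fractions import Fraction as F

p, t0, t1 = 3, -2, -3                       # case (a): t0, t1 < 0
s1 = t0 - 1 + F(t1, p * t1 + 1)
assert count_tight(neg_cfrac(s1)) == abs(t0 * t1 * (p - 1))    # 12

p, t1 = 3, 5                                # case (c3): t0 = 0, t1 > 2
s1 = F(-2 * t1 + 1, t1 - 1)
assert count_tight(neg_cfrac(s1)) == 4

p, t0, t1 = 3, 2, 4                         # case (d3): t0, t1 > 1
s1 = -1 - F(t0 + t1 + p * t0 * t1, -1 + p * t0 + p * p * t0 * t1)
assert count_tight(neg_cfrac(s1)) == 4 * (p + 1)               # 16

p, t0, t1 = 2, -1, 2                        # case (e) with t1 > 1
s1 = -2 - F(1 + p * t1, -1 + t0 + t1 - p * t1 + p * t0 * t1)
assert count_tight(neg_cfrac(s1)) == 2 * abs(t0) * p           # 4
```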
§ COMPUTING THE INVARIANTS FROM SURGERY DIAGRAMS
Except for the case (c1) discussed in terms of contact cuts in
Section <ref>, we are going to describe the
Legendrian realisations of the Hopf link in L(p,1)
as front projections of a Legendrian link in a contact surgery
diagram for (L(p,1),ξ) involving only contact (± 1)-surgeries.
Here we briefly recall how to
compute the classical invariants from such a presentation;
for more details see <cit.>.
Write M for the linking matrix of the surgery diagram,
with the surgery knots K_1,…,K_n given auxiliary orientations,
and (K_j,K_j) equal to the topological surgery framing.
The extended linking matrix of a Legendrian knot L_i in this
surgery presentation is
M_i=([ 0 (L_i,K_1) ⋯ (L_i,K_n); (L_i,K_1) ; ⋮ M ; (L_i,K_n) ]).
§.§ Thurston–Bennequin invariant
Write _i for the Thurston–Bennequin invariant of L_i
as a Legendrian knot in (S^3,), that is, before performing
the contact surgeries. Then, in the surgered contact manifold, one has
_(L_i)=_i+det M_i/det M.
§.§ Rotation number
Write _i for the rotation number of L_i before the surgery.
With
:=((K_1),…,(K_n))
and
_i:=((L_i,K_1),…,(L_i,K_n))
we have
_(L_i)=_i-⟨,M^-1_i⟩.
§.§ The d_3-invariant
The surgery diagram describes a 4-dimensional handlebody X with
signature σ and Euler characteristic χ=1+n. Let
c∈ H^2(X) be the cohomology class determined by c(Σ_j)=
(K_j), where Σ_j is the oriented surface made up
of a Seifert surface for K_j and the core disc of the
corresponding handle. Write q for the number of contact (+1)-surgeries.
Then the d_3-invariant is given by the formula
d_3(ξ)=1/4(c^2-3σ-2χ)+q,
where c^2 is computed as follows: find the solution vector
of the equation M=; then c^2=^M=⟨,⟩.
The signature σ can be computed from the linking matrix
corresponding to the surgery diagram; more efficiently, one can usually
compute it using Kirby moves as described in <cit.>.
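These three formulas are straightforward to automate. The following numpy sketch (our own illustration; the function and variable names are ours) takes the linking data of a contact surgery diagram and returns the rational classical invariants of a Legendrian knot L together with the d_3-invariant.

```python
import numpy as np

def surgery_invariants(M, rot_K, tb_L, rot_L, lk, q):
    """M: linking matrix of K_1..K_n; rot_K: their rotation numbers;
    tb_L, rot_L: invariants of L before the surgeries;
    lk: linking numbers lk(L, K_j); q: number of contact (+1)-surgeries."""
    M = np.asarray(M, float)
    rot_K = np.asarray(rot_K, float)
    lk = np.asarray(lk, float)
    n = len(M)

    # extended linking matrix: border M by the linking numbers of L
    M_L = np.zeros((n + 1, n + 1))
    M_L[1:, 1:], M_L[0, 1:], M_L[1:, 0] = M, lk, lk

    tb_q = tb_L + np.linalg.det(M_L) / np.linalg.det(M)
    rot_q = rot_L - rot_K @ np.linalg.solve(M, lk)

    x = np.linalg.solve(M, rot_K)                           # solution of M x = rot_K
    c2 = x @ rot_K                                          # c^2 = <x, rot_K>
    sigma = int(np.sum(np.sign(np.linalg.eigvalsh(M))))     # signature of the symmetric M
    d3 = (c2 - 3 * sigma - 2 * (1 + n)) / 4 + q             # chi = 1 + n
    return tb_q, rot_q, d3
```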
§ HOPF LINKS IN TIGHT L(P,1)
For given values of _0,_1<0, we have
_0_1(p-1) explicit realisations in (L(p,1),) as
shown in Figure <ref>. Here the numbers k,k_i and
ℓ,ℓ_i refer to the exterior cusps, so that
the surgery curve K has =-k-ℓ+1 (and _i=-k_i-ℓ_i+1),
and to obtain L(p,1)
by a contact (-1)-surgery on K we need k+ℓ=p. The
_i can take any negative value. With L_0, L_1 both oriented
clockwise, the Hopf link is positive,
and in this case we have _i=ℓ_i-k_i.
The values of the classical invariants as claimed in
Theorem <ref> (a) now follow easily from the formulas in
Section <ref>, with _i=_i, _i=_i,
and =(K). For the d_3-invariant we observe
that σ=-1, χ=2, and c^2=-^2/p, whence
d_3=-1/4(1+^2/p).
There are no realisations in (L(p,1),) with one of the
_i being non-negative, since Legendrian rational unknots
in (L(p,1),) satisfy _i<0 by
<cit.> or <cit.>.
This proves part (a) of Theorem <ref>.
The values of the d_3-invariant found above exhausts
all possibilities for the d_3-invariant of the tight contact structures
on L(p,1); see <cit.>.
§ LEGENDRIAN HOPF LINKS VIA CONTACT CUTS
In this section we use contact cuts to find Legendrian realisations
of Hopf links in L(p,1) for the case (c1).
§.§ L(p,1) as a cut manifold
We first want to give a topological description of L(p,1) as a
cut manifold in the sense of Lerman <cit.>. We start
from T^2×[0,1]=S^1× S^1×[0,1] with coordinates
x,y∈ S^1=/ and z∈[0,1]. Collapsing the first S^1-factor
in S^1× S^1×{0} is equivalent to attaching a solid torus
to T^2×[0,1] along T^2×{0} by sending the
meridian of the solid torus to S^1×{*}×{0}. This, of course,
simply amounts to attaching a collar to a solid torus, and the meridian
of this `fattened' torus is μ_0:=S^1×{*}×{1}. As longitude
we take the curve λ_0:={*}× S^1×{1}.
Now, as described in Section <ref>,
L(p,1) is obtained from this
solid torus by attaching another solid torus, whose meridian is glued
to the curve pλ_0-μ_0. Equivalently, we may collapse
the foliation of T^2×{1} by circles in the class pλ_0-μ_0.
Consider the p-fold cover (/)×(/p)× [0,1]→
(/)×(/)×[0,1]. Set λ̃_0:={*}×
(/p)×{1}. The foliation of (/)×(/p)× [0,1]
by circles in the class λ̃_0-μ_0 descends to the
foliation defined by pλ_0-μ_0. This exhibits L(p,1)
as a _p-quotient of L(1,1)=S^3. Beware that in <cit.>
we used the description of S^3 as L(1,0) in the cut construction,
so for p=1 one has to take this into account when comparing
the discussion here with the results in <cit.>.
§.§ Contact structures from contact cuts
A contact structure on T^2×[0,1] will descend to the cut manifold
L(p,1) if, at least near the boundary, it is invariant under the
S^1-action whose orbits on the boundary are collapsed to a point,
and if the S^1-action is tangent to the contact structure along the
boundary.
We define a∈ (0,π/2) by the condition tan a=p. For
ℓ∈_0, consider the contact form
α_ℓ:=sin((a+ℓπ)z) dx+
cos((a+ℓπ)z) dy
on T^2×[0,1]. This 1-form is invariant under the flows of both
∂_x and ∂_y. Along T^2×{0}, we have
∂_x∈α_ℓ; along T^2×{1},
the vector field -∂_x+p∂_y is in α_ℓ.
Thus, α_ℓ descends to a contact form on L(p,1).
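As a quick symbolic check (ours, not part of the argument): for a 1-form f(z) dx + g(z) dy on T^2×[0,1] one has α∧dα = (g f' − f g') dx∧dy∧dz, so the contact condition amounts to g f' − f g' ≠ 0; for α_ℓ this function is the constant a+ℓπ > 0. A small sympy sketch confirming this:

```python
import sympy as sp

z, a, ell = sp.symbols('z a ell', real=True)
phi = (a + ell * sp.pi) * z
f, g = sp.sin(phi), sp.cos(phi)          # alpha_ell = f dx + g dy

twist = sp.simplify(g * sp.diff(f, z) - f * sp.diff(g, z))
print(twist)                             # a + pi*ell, positive for a in (0, pi/2), ell >= 0
```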
By <cit.>, adapted to the description of S^3
as L(1,1), the lift of α_0 to S^3 is the standard tight contact
structure . Indeed, the map (x,y)↦ (x+y,y) sends this lifted
contact structure on T^2× [0,1], up to isotopy rel boundary and
compatibly with the cuts on the boundary, to the one shown in <cit.>
to define on S^3=L(1,0).
This means that
α_0 is the unique (up to diffeomorphism)
universally tight contact structure on L(p,1). The contact structure
α_ℓ+1 is obtained from α_ℓ by
a π-Lutz twist. In particular, α_2 is obtained from
α_0 by a full Lutz twist, and hence is homotopically
equivalent to the universally tight contact structure;
see <cit.>. The surgery description for
the universally tight contact structure on L(p,1) is a contact
(-1)-surgery on a (p-2)-fold stabilised standard Legendrian
unknot (with =-1 and =0) in (S^3,), all stabilisations
having the same sign. This contact structure has d_3-invariant equal
to (3p-p^2-4)/4p and Euler class ±(p-2) (in terms of the
natural cohomology generator); see <cit.>
and <cit.>.
§.§ A Legendrian Hopf link in (L(p,1),α_2)
The contact planes in α_2 on T^2×[0,1] have
slope 0 with respect to (∂_x,∂_y) at z=0.
As z increases to z=1, the contact planes twist (with decreasing slope)
for a little over 2π, until they reach slope -p for the third time
(and after passing through slope ±∞ twice), similar to
Figure 18 in <cit.>.
Define z_0<z_1 in the interval [0,1] by the conditions
(a+2π)z_0=π/2 and
-sin((a+2π)z_1)+(p+1)cos((a+2π)z_1)=0.
This means that at z=z_0 the contact planes are vertical for the
first time, and T^2×{z_1} is the second torus (as z increases
from 0 to 1) where the characteristic foliation has slope -(p+1).
Consider the link L_0⊔ L_1 made up of a (0,1)-curve
on the torus {z=z_0} and a (-1,p+1)-curve on {z=z_1}.
This link is topologically isotopic to the one made up of
a (0,1)-curve on {z=0} and a (-1,p+1)-curve on {z=1}.
Since these respective curves have intersection number ± 1 with
the curves we collapse at either end,
(0,1)∙ (1,0)=-1, (-1,p+1)∙ (-1,p)=1,
they are isotopic to the spine of the solid tori attached at either end,
and so they constitute a Hopf link.
A Seifert surface Σ_0 for pL_0, regarded as p times the spine
of the solid torus attached at z=0, is given by the following pieces:
- a helicoidal annulus between pL_0 and a (-1,p)-curve on {z=0},
- an annulus between this (-1,p)-curve on {z=0} and the same
curve on {z=1},
- the meridional disc attached to the latter curve at z=1.
Then
L_1∙Σ_0=(-1,p+1)∙(-1,p)=-p + (p+1)=1.
Notice that ∂_y is positively transverse to Σ_0
on T^2×[0,1], so our calculation gives the correct sign
of the intersection number. Thus, L_0⊔ L_1 is a positive
Hopf link.
Both components L_0 and L_1 of this Legendrian Hopf link are
loose, since the contact planes twist by more than π in either interval
(z_0,1) and (0,z_1), so the cut produces an overtwisted disc.
The Legendrian Hopf link L_0⊔ L_1 in (L(p,1),α_2)
is exceptional.
Arguing by contradiction, suppose there were an overtwisted disc in the
complement of the link. This disc would persist in the complement of the
transverse link L_0'⊔ L_1' obtained by pushing L_0 a little
towards z=0, and L_1 towards z=1.
In T^2×[0,1], this link is transversely isotopic
to the link made up of a (0,1)-curve on {z=0} and a (-1,p+1)-curve
on {z=1}. In the cut manifold L(p,1), this gives a transverse isotopy
(and hence an ambient contact isotopy)
from L_0'⊔ L_1' to the transverse link made up of the collapsed
boundaries (T^2×{0})/S^1 and (T^2×{1})/S^1.
The complement of the latter, however, is
contactomorphic to (T^2× (0,1),α_2), which is tight, as it
embeds into the standard tight contact structure on ^3.
§.§ Computing _
A Legendrian push-off of L_0 is simply a parallel (0,1)-curve
L_0' on { z=z_0}. The topological isotopy from pL_0 to p times
the spine of the solid torus attached at z=0 can be performed in the
complement of L_0'. So the rational Thurston–Bennequin invariant
(as defined in <cit.>) of L_0 is given by
_(L_0)=1/pL_0'∙Σ_0=
1/p(0,1)∙(-1,p)=1/p.
For L_1 we argue similarly. A Legendrian push-off L_1' is given by a
parallel (-1,p+1)-curve on {z=z_1}. The isotopy of L_1 to the spine
of the solid torus attached at z=1 can be performed in the complement
of L_1', so _(L_1) can be computed as L_1'∙Σ_1,
where Σ_1 is a Seifert surface for p times that spine,
made up of the following pieces:
- a helicoidal annulus between pL_1 and a curve on
{z=1} in the class
p(-1,p+1)-(p+1)(-1,p)=(1,0)
(see below for an explanation of this choice),
- an annulus between this (1,0)-curve on {z=1} and the same
curve on {z=0},
- the meridional disc attached to the latter curve at z=0.
For the first constituent of this Seifert surface, notice that the p-fold
covered spine of a solid torus can be joined by an annulus to any
simple curve on the boundary in a class pλ+kμ, k∈,
where λ is any longitude on the boundary, and μ the meridian.
Or, more directly, simply observe that in our case
μ=(-1,p), and (1,0)∙ (-1,p)=p.
The vector field ∂_y is positively transverse to Σ_1
on T^2× [0,1], so we obtain
_(L_1)=1/pL_1'∙Σ_1=1/p(p+1)
=1+1/p.
§.§ Frames for α_ℓ
A frame for α_ℓ on T^2×[0,1], compatible with the
orientation defined by α_ℓ, is given by
∂_z and
X_ℓ:=cos((a+ℓπ)z)∂_x-
sin((a+ℓπ)z)∂_y.
This frame does not descend to a frame of the contact structure
on L(p,1).
At z=0 we have X_ℓ=∂_x. If we think of the cut at
z=0 as being defined by attaching a solid torus, with meridian
being sent to the x-curves, the vector field ∂_z is
outward radial along the boundary of the solid torus, and X_ℓ=∂_x
is positively tangent to the meridional curves. It follows that a frame
of α_ℓ that extends over the cut at z=0 is given by
cos(2π x)∂_z-sin(2π x)X_ℓ and sin(2π x)∂_z+cos(2π x)X_ℓ;
cf. <cit.>.
At z=1, where we collapse the flow lines of -∂_x+p∂_y, we
have
X_ℓ=±(cos a ∂_x-sin a ∂_y)=
∓cos a (-∂_x+p ∂_y),
depending on ℓ being even or odd.
If we think of the cut again as attaching a solid torus, now ∂_z
is inward radial along the boundary of the solid torus, and X_ℓ
is tangent to the meridional curve (positively for ℓ odd,
negatively for ℓ even). So the frame that extends over the cut
at z=1 is
cos(2π x)∂_z+sin(2π x)X_ℓ and
-sin(2π x)∂_z+cos(2π x) X_ℓ for ℓ even,
and
cos(2π x)∂_z-sin(2π x)X_ℓ and sin(2π x)∂_z+cos(2π x) X_ℓ for ℓ odd.
Thus, only for ℓ odd do we have a global frame.
§.§ Computing _
We now look again at the Legendrian Hopf link L_0⊔ L_1
in (L(p,1),α_2).
For the computation of the intersection number L_1∙Σ_0
between L_1 and a Seifert surface for pL_0 we had the freedom
to isotope L_0 (topologically) in the complement of
L_1 to the spine of the solid torus
attached at z=0. When we want to compute _(L_0), we need to
work with a Seifert surface for the original Legendrian L_0.
This means that we need to work with the Seifert surface Σ_0
made up of the following pieces:
- a p-fold covered straight annulus between pL_0, i.e. the (0,p)-curve on the torus {z=z_0}, and the p-fold covered spine
of the solid torus attached at z=0,
- a helicoidal annulus between p times the spine
and a (-1,p)-curve on the torus {z=0},
- an annulus between this (-1,p)-curve on {z=0} and the same
curve on {z=1},
- the meridional disc attached to the latter curve at z=1.
This surface is not embedded, but the computation of rotation numbers
is homological, so this is not a problem.
Our aim is to find a frame of α_2 defined over Σ_0.
We begin with the frame defined over T^2×[0,1]
by cos(2π x)∂_z+sin(2π x)X_2
(and its companion defining the correct orientation), which is the
one that extends over the cut at z=1. In particular, this frame
is defined over the third and fourth constituent of Σ_0, and we need
to extend it over the first two pieces of Σ_0.
Write (/)× D^2 with coordinates (y;r,θ) for the solid torus
attached at z=0. We pass to the p-fold covers
(/)^2× [0,1]⟶ (/)^2×[0,1],
(x,y,z)⟼ (x,py,z)
and
(/)× D^2⟶ (/)× D^2,
(y;r,θ)⟼ (py;r,θ),
where the lifted pieces of Σ_0 are embedded:
a straight annulus between the (0,1)-curve on {z=z_0}
and the spine of the solid torus, plus a helicoidal annulus between
the spine and the (-1,1)-curve on the boundary of the solid torus.
For the following homotopical considerations, we may think
of α_2 as being extended over that solid torus
as the constant horizontal plane field.
Along the (-1,1) curve on {z=0}, parametrised as /∋ t↦
(x(t),y(t),0)=(-t,t,0), the frame we are considering is
cos(2π t)∂_z-sin(2π t)∂_x.
In the cylindrical coordinates (y;r,θ) on the solid torus
this translates into the frame
cos(2π t)∂_r-sin(2π t)∂_θ
along the curve t↦ (y(t),θ(t))=(t,-2π t) on {r=1}.
Next, we translate this into Cartesian coordinates (u,v) on
the D^2-factor. With r∂_r=u∂_u+v∂_v
and ∂_θ=u∂_v-v∂_u, and the curve in question
being (u(t),v(t))=(cos 2π t,-sin 2π t), this gives the frame
cos(2π t)(cos(2π t)∂_u-sin(2π t)∂_v)-sin(2π t)(cos(2π t)∂_v+sin(2π t)∂_u) = cos(4π t)∂_u-sin(4π t)∂_v.
This formula defines the extension of the frame over the helicoidal annulus
and the part of the straight annulus inside the solid torus.
The intersection of the straight annulus with the torus {z=0}
is the (0,1)-curve, parametrised as t↦ (x(t),y(t))=(0,t),
and if we take this curve to be given by { u=1,v=0} (so that
∂_u=∂_z and ∂_v=∂_x), the frame is now
written as
cos(4π t)∂_z-sin(4π t)∂_x,
which extends over the annulus between the (0,1)-curve on {z=0}
and that on {z=z_0} (i.e. the p-fold cover of L_0) as
cos(4π t)∂_z-sin(4π t)X_2.
Notice that at z=z_0 we have X_2=-∂_y. The
orientation of α_2 is defined by (∂_z,X_2), so
the frame makes two negative rotations with respect to the
tangent vector ∂_y=-X_2 of the p-fold
covered L_0. We conclude that _(L_0)=2/p.
Next we compute _(L_1) in an analogous fashion. We now use the frame
cos(2π x)∂_z-sin(2π x)X_2 on T^2× [0,1], which is the
one that extends over the cut at z=0. The cut we perform at z=1
corresponds to the attaching of a solid torus (/)× D^2
using the gluing map
μ={0}×∂ D^2⟼ (-1,p) and λ=(/)×{1}⟼ (-1,p+1).
Note that with respect to the orientation defined by (∂_x,∂_y),
the intersection number of meridian and longitude is
μ∙λ=(-1,p)∙(-1,p+1)=-1,
which is what we want, since ∂_z is the inward normal
of the solid torus along its boundary, so this boundary is oriented
by (∂_y,∂_x). Notice also that the tangent direction
of μ coincides with -X_2.
The Legendrian knot L_1 on {z=z_1} is a (-1,p+1)-curve, so the
parallel curve on {z=1} is in the class of λ. The relevant parts
of the Seifert surface Σ_1 for pL_1 in the p-fold cover, i.e. the lift with respect to the map
(/)× D^2⟶ (/)× D^2,
(s;r,θ)⟼ (ps;r,θ),
are the following:
- a straight annulus between the lifted longitude and the spine,
- a helicoidal annulus between the spine and the (1,0)-curve on
{z=1}; recall that pλ-(p+1)μ=(1,0).
At z=1, the frame cos(2π x)∂_z-sin(2π x)X_2
we have chosen equals
cos(2π x)∂_z+sin(2π x)cos a(-∂_x+p∂_y).
Along the (1,0)-curve, parametrised as t↦ (x(t),y(t))=(t,0),
this frame is
cos(2π t)∂_z+sin(2π t)cos a(-∂_x+p∂_y)=
-cos(2π t)∂_r+sin(2π t)∂_θ.
Translated into Cartesian coordinates (u,v) on the D^2-factor of
the solid torus, the curve (or rather its projection to
the D^2-factor) becomes
(u(t),v(t))=(cos 2π(p+1)t,-sin 2π(p+1)t),
and translating the frame into Cartesian coordinates as above, we obtain
-cos(2π t)(cos(2π(p+1)t)∂_u-sin(2π(p+1)t)∂_v)+sin(2π t)(cos(2π(p+1)t)∂_v+sin(2π(p+1)t)∂_u) = -cos(2π(p+2)t)∂_u+sin(2π(p+2)t)∂_v.
This formula defines the extension of the frame over the helicoidal
annulus and the straight annulus inside the solid torus.
Along the intersection of the straight annulus with
the boundary of the solid torus, given again by {u=1,v=0},
and further on the annulus between pλ=p(-1,p+1) and pL_1,
this frame extends as
cos(2π(p+2)t)∂_z-sin(2π(p+2)t)X_2,
since at (u,v)=(1,0) the vector ∂_u is the
outward normal -∂_z, and ∂_v points in
meridional direction, which is identified with -X_2.
This frame makes p+2 negative rotations with respect
to the tangent direction X_2 of L_1, which yields
_(L_1)=(p+2)/p=1+2/p.
Since we may flip the orientations of L_0 and L_1 simultaneously,
this gives us in total two realisations, with invariants
(_(L_0),_(L_0))=(1/p,±2/p),
(_(L_1),_(L_1))=(1+1/p,
±(1+2/p)).
For p=1, this accords with the case (1,± 2) and (2,± 3)
discussed in <cit.>.
§ DETECTING EXCEPTIONAL LINKS
Before we turn to the Legendrian realisations of Legendrian
Hopf links in L(p,1) in terms of contact surgery diagrams, we discuss
how to establish that a given Hopf link is exceptional, and how to decide
whether the individual components are loose or exceptional.
First one needs to verify that the contact structure given
by the surgery diagram is overtwisted.
If the d_3-invariant differs from that of any of the tight
structures (see Remark <ref>), this
is obvious. If the d_3-invariant does match that of a tight
structure, overtwistedness can be shown by exhibiting
a Legendrian knot in the surgered manifold that violates the
Bennequin inequality <cit.> for Legendrian knots
in tight contact 3-manifolds. In all our examples, one of L_0 or L_1
will have this property. Alternatively, one can appeal to
the classification of Legendrian rational unknots in L(p,1) with
a tight contact structure <cit.>. If the invariants
of L_0 or L_1 do not match those listed there (in particular,
_(L)=+1/p with a negative integer),
then the contact structure
must be overtwisted. Again, this covers all cases (b)–(e).
Secondly, we need to establish that the contact structure on the
link complement L(p,1)∖(L_0⊔ L_1) is tight. The method we use is
to perform contact surgeries on L_0 and L_1, perhaps also on
Legendrian push-offs of these knots, such that the
resulting contact manifold is tight. If there had been an overtwisted
disc in the complement of L_0⊔ L_1, this would persist
after the surgery.
Here we rely on the cancellation lemma from <cit.>,
cf. <cit.>, which says that a contact
(-1)-surgery and a contact (+1)-surgery along a Legendrian knot
and its Legendrian push-off, respectively, cancel each other.
For instance, if by contact (-1)-surgeries on L_0 and L_1
we can cancel all contact (+1)-surgeries in the surgery diagram,
and thus obtain a Stein fillable and hence tight contact 3-manifold,
the Legendrian Hopf link will have been exceptional.
To determine whether one of the link components is loose,
we sometimes rely on the classification of exceptional rational unknots
in L(p,1) given in <cit.>:
Up to coarse equivalence, the exceptional rational unknots in
L(p,1) are classified by their classical invariants
_ and _. The possible values of
_ℚ are +1/p with ∈_0. For =0, there is a
single exceptional knot, with _=0. For =1, there are p+1
exceptional knots, with
_∈{-1, -1+2/p, -1+4/p, …,
-1+2p/p}.
For ≥ 2, there are 2p exceptional knots, with
_∈{±(-2+2/p),
±(-2+4/p),…,
±(-2+2p/p)}.
In a few cases, the invariants of one link component
equal those realised by an exceptional rational unknot,
but we detect looseness by computing the d_3-invariant
and observing, again comparing with <cit.>,
that it does not match that of an exceptional realisation.
§ EXCEPTIONAL HOPF LINKS
In this section we find exceptional Legendrian realisations of the
Hopf link, except case (c1)—which is covered by
Section <ref>—, in contact surgery diagrams for L(p,1).
This completes the proof of Theorem <ref>.
§.§ Kirby diagrams
We begin with some examples of Kirby diagrams of the Hopf link
that will be relevant in several cases of this classification.
The proof of the following lemma is given by the Kirby moves in the
corresponding diagrams.
(i) The oriented link L_0⊔ L_1 in the surgery diagram
shown in the first line of Figure <ref>
is a positive or negative Hopf link in L(p,1), depending
on k being even or odd.
The same is true for the links shown
in the first line of Figure <ref> and <ref>,
respectively.
(ii) The oriented link L_0⊔ L_1 shown in Figure <ref>
is a positive Hopf link in L(p,1).
(iii) The oriented link L_0⊔ L_1 in the first line
of Figure <ref>
is a positive or negative Hopf link depending on k_0 and k_1
having the same parity or not.
§.§ Legendrian realisations
§.§.§ Case (b)
The |_1-1| realisations with _0=0 and _1≤ 0
are shown in Figure <ref>. Here k and ℓ denote the
number of stabilisations, with k+ℓ=-_1, so that
_1=_1-1.
The linking matrix M of the surgery diagram is the
((p+1)× (p+1))-matrix
[ 0 -1 -1 ⋯ -1; -1 0 -1 ⋯ -1; -1 -1 0 ⋯ -1; ⋮ ⋮ ⋮ ⋱ ⋮; -1 -1 -1 ⋯ 0 ].
The determinant of this matrix is det M=-p.
The extended linking matrix for L_i, i=0,1, is the
((p+2)× (p+2))-matrix built in the same fashion as M, so that
det(M_i)=-p-1.
It follows that
_(L_0)=-1+(p+1)/p=1/p
and
_(L_1)=_1-1+(p+1)/p=_1+1/p.
Since =0, we get
_(L_0)=0 and
_(L_1)∈{_1, _1+2,…,-_1-2,-_1}.
For the calculation of d_3, we observe that c^2=0, χ=p+2,
and σ=p-1. Thus,
d_3=1/4(0-2(p+2)-3(p-1))+p+1=1/4(-5p-1)+p+1=(3-p)/4.
By Theorem <ref>, there are no exceptional realisations
of a rational unknot in L(p,1) with _ equal to that of L_1,
so L_1 is loose. The component L_0 is exceptional, as can
be seen by performing surgery on it. A contact (-1/(p+2))-surgery
on L_0 has the same effect as taking p+2 Legendrian push-offs of L_0
and doing a (-1)-surgery on each of them. This cancels the
(+1)-surgeries in the diagram and hence produces a tight contact
3-manifold.
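As a concrete numerical check of the determinant and d_3 computations just given (ours, for illustration), the following numpy snippet reproduces tb_Q(L_0)=1/p, σ=p-1 and d_3=(3-p)/4 for a sample value of p, assuming all p+1 surgery curves carry contact (+1)-surgeries and have rotation number zero, as in the computation above.

```python
import numpy as np

p = 5
n = p + 1                                   # surgery curves K_1..K_{p+1}
M = -(np.ones((n, n)) - np.eye(n))          # 0 on the diagonal, -1 off the diagonal

lk0 = -np.ones(n)                           # lk(L_0, K_j) = -1
M0 = np.zeros((n + 1, n + 1))               # extended linking matrix of L_0
M0[1:, 1:], M0[0, 1:], M0[1:, 0] = M, lk0, lk0

tb0 = -1 + np.linalg.det(M0) / np.linalg.det(M)
sigma = int(np.sum(np.sign(np.linalg.eigvalsh(M))))
d3 = (0 - 3 * sigma - 2 * (n + 1)) / 4 + n  # c^2 = 0, chi = n + 1, q = n = p + 1

assert abs(tb0 - 1 / p) < 1e-9              # tb_Q(L_0) = 1/p
assert sigma == p - 1
assert abs(d3 - (3 - p) / 4) < 1e-9         # d_3 = (3 - p)/4
```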
§.§.§ Case (c2)
In this case we have exactly three Legendrian realisations.
The left-hand side of Figure <ref>, where the
rational rotation numbers of L_0 and L_1 will be shown to be
non-zero, gives two realisations (one with L_0,L_1 oriented as shown,
the second with orientations flipped simultaneously).
The right-hand side, where the rotation numbers are
zero, gives the third one.
We begin with the left-hand side.
The linking matrix for the surgery diagram, ordering the surgery knots
from top to bottom, and with all surgery knots oriented clockwise,
is the ((p+3)× (p+3))-matrix
M=[ -2 -1 -1 ⋯ -1; -1 0 -1 ⋯ -1; -1 -1 0 ⋯ -1; ⋮ ⋮ ⋮ ⋱ ⋮; -1 -1 -1 ⋯ 0 ],
with det M=p.
The extended linking matrices for L_0 and L_1 are
M_0= ([ 0 -1 -1 ⋯ -1; -1 ; -1 M ; ⋮ ; -1 ]),
with det M_0=1+p, and
M_1= ([ 0 3 1 ⋯ 1; 3 ; 1 M ; ⋮ ; 1 ]),
with det M_1=1+5p. This yields
_(L_0)=-1+(1+p)/p=1/p and _(L_1)=-3+(1+5p)/p=2+1/p,
so we are indeed in the case (_0,_1)=(0,2).
Further,
=(2,0,0,…, 0_p+2), _0=(-1,-1,…, -1_p+2), _1= (3,1,…,1_p+2).
A simple calculation gives
M^-1_0=(-1/p,1/p,…,1/p) and
M^-1_1=(-(2p+1)/p,1/p,…,1/p).
Hence
_(L_0)=0-2·(-1/p)=2/p and _(L_1)=-2+2·(2p+1)/p=2+2/p.
The second realisation is given by reversing the orientations of
both L_0 and L_1; this changes the sign of the _(L_i)
(as is best seen by also changing the orientations of the surgery curves,
although the choice of orientation here does not matter).
For the diagram on the right, the calculation of _(L_i)
does not change, but now we have =0 and _1=0,
whence _(L_0)=_(L_1)=0. This yields the third realisation.
For the calculation of the d_3-invariant we observe that both diagrams
in Figure <ref> give σ=p-1 and χ=p+4.
The diagram on the right has =0, which yields
d_3=(7-p)/4. For the diagram on the left,
the solution of M= is =(-2-2/p,2/p,…,2/p)^.
Then c^2=⟨,⟩=-2(2+2/p) and d_3=
(3p-p^2-4)/4p.
In either case, the contact structure is overtwisted by the observations in
Section <ref>.
A contact (-1)-surgery along L_1 and a contact (-1/(p+2))-surgery
along L_0 cancel all (+1)-surgeries, so the link is exceptional.
Both components are loose, since the classical invariants do not
match those of Theorem <ref>.
§.§.§ Case (c3)
Two of the four Legendrian realisations in this case are shown in
Figure <ref>, with L_0 and L_1 both oriented
clockwise or counter-clockwise for k even, L_1 having the opposite
orientation of L_0 for k odd; cf. Figure <ref>.
The other two are obtained by
flipping the shark with surgery coefficient -1.
The invariants are shown in
Table <ref>. We omit the calculations of the invariants
in this and the remaining cases. The arithmetic is lengthy but
not inspiring; the detailed calculations are available from
the authors upon request.
The contact structure is overtwisted, because we have rational unknots
with _i≥ 0. A contact (-1)-surgery on L_1 and a contact
(-1/(p+2))-surgery on L_0 will cancel all contact (+1)-surgeries,
so the link is exceptional.
Except for (_0,_(L_0))=(0,0),
the invariants of L_0 and L_1 are not realised by
exceptional rational unknots, see Theorem <ref> or,
for a better overview, <cit.>; hence these knots are loose.
So is L_0 in the first line of Table <ref>, since
the value of the d_3-invariant does not match that of an exceptional
realisation; again see <cit.>. This is the first
instance where we need the d_3-invariant to detect looseness.
§.§.§ Case (d1)
The p+3 Legendrian realisations with invariants as listed
in Theorem <ref> are shown in Figure <ref>.
The surgery knot of which L_1 is a push-off is a (p+2)-fold
stabilisation of the standard Legendrian unknot (compare with
the topological surgery framing in Figure <ref>);
placing the stabilisations left or right gives the p+3 choices.
Flipping the orientations of both L_0 and L_1 has the same
effect as exchanging the number of stabilisations on the left-
and right-hand side, so this does not give any new choices.
The argument for the link being exceptional is as in the preceding case.
According to <cit.>,
the classical invariants are realised by exceptional rational unknots,
but only in the overtwisted contact structure with d_3=(3p-^2)/4p,
so the link components here are loose.
§.§.§ Case (d2)
The 2(p+2) Legendrian realisations are shown in Figure <ref>,
with L_0,L_1 given the same orientation for k even, the opposite, for
k odd. The factor 2 comes from simultaneously exchanging
the orientations of L_0 and L_1.
The factor p+2 comes from the placement of p+1
stabilisations on the surgery knot near the bottom of the diagram
(compare with Figure <ref>). We write
∈{-p-1,-p+1,…, p-1,p+1}
for the p+2 possible rotation numbers of this surgery knot.
The invariants are listed in Table <ref>.
The remaining arguments are as before.
§.§.§ Case (d3)
The 4(p+1) Legendrian realisations are shown in Figure <ref>.
For k_0,k_1 of equal parity, we give L_0 and L_1 the same
orientation; for k_0,k_1 of opposite parity, the opposite
orientation. A factor 2 comes from exchanging the orientations
simultaneously, a factor 2 from the two diagrams, and a factor p+1
from placing p stabilisations left or right on the surgery knot
at the centre of each diagram, cf. Figure <ref>.
The rotation number of this surgery knot is denoted by
∈{ -p, -p+2,…, p-2,p}.
The invariants are shown in Table <ref>.
As before, suitable contact surgeries along L_0,L_1
and comparison with <cit.> show that the link
is exceptional and the components loose.
§.§.§ Case (e1)
Figure <ref> shows |_0|(p+1) Legendrian realisations.
Placing the |_0|-1 stabilisations left or right gives |_0| choices.
The surgery knot is the standard Legendrian unknot with p
stabilisations, which gives p+1 choices. Flipping the orientations
of L_0,L_1 simultaneously is the same as flipping the whole
diagram (and the left/right stabilisations), so this does not
give any additional realisations. Write
∈{-p,-p+2,…, p-2,p}
for the rotation number of the surgery knot. A straightforward calculation
then gives _0=_0 (and hence the right number
of realisations) and the other classical invariants as listed
in Theorem <ref>, with _0:=_0.
The d_3-invariant takes the value (3p-^2)/4p.
Since _1>0, the contact structure is overtwisted.
A contact (-1)-surgery on L_1 gives (S^3,), so L_1 is
exceptional. On the other hand, L_0 must be loose, since _0<0.
§.§.§ Case (e2)
The 2|_0|p realisations for this case are shown in
Figure <ref>, with L_0 and L_1 both oriented
clockwise or counter-clockwise for k odd, L_1 having the opposite
orientation of L_0 for k even; cf. Figure <ref>.
Here L_0 is an unknot with
_0<0 (which will again turn out to equal _0);
this gives us |_0| choices distinguished by
_0:=_0∈{_0+1,_0+3,…,-_0-3,-_0-1}.
The topmost surgery knot has Thurston–Bennequin invariant -p;
this gives us p choices distinguished by the rotation number
∈{-p+1,-p+3,…,p-3,p-1}.
The factor 2 in the number of choices comes from simultaneously
flipping the orientations of L_0 and L_1.
One then computes that
_(L_0)=±(_0++1/p), _(L_1)=±(_1-1++1/p),
and
d_3=1/4(3+2-^2-1/p).
The argument for showing that L_0 is loose, and L_1 exceptional,
is as in case (e1).
baet12
K. Baker and J. Etnyre,
Rational linking and contact geometry,
Perspectives in Analysis, Geometry, and Topology,
Progr. Math. 296
(Birkhäuser Verlag, Basel, 2012), 19–37.
chke
R. Chatterjee and M. Kegel,
Contact surgery numbers of Σ(2,3,11) and L(4m+3,4),
.
dige04
F. Ding and H. Geiges,
A Legendrian surgery presentation of contact 3-manifolds,
Math. Proc. Cambridge Philos. Soc.
136 (2004), 583–598.
dgs04
F. Ding, H. Geiges and A. I. Stipsicz,
Surgery diagrams for contact 3-manifolds,
Turkish J. Math.
28 (2004), 41–74.
geig08
H. Geiges,
An Introduction to Contact Topology,
Cambridge Stud. Adv. Math. 109
(Cambridge University Press, Cambridge, 2008).
geon15
H. Geiges and S. Onaran,
Legendrian rational unknots in lens spaces,
J. Symplectic Geom.
13 (2015), 17–50.
geon20
H. Geiges and S. Onaran,
Legendrian Hopf links,
Quart. J. Math. 71 (2020), 1419–1459.
giro00
E. Giroux,
Structures de contact en dimension trois et bifurcations des
feuilletages de surfaces,
Invent. Math.
141 (2000), 615–689.
hond00I
K. Honda,
On the classification of tight contact structures I,
Geom. Topol.
4 (2000), 309–368;
erratum: Geom. Topol.
5 (2001), 925–938.
lerm01
E. Lerman,
Contact cuts,
Israel J. Math.
124 (2001), 77–92.
|
http://arxiv.org/abs/2409.02423v1 | 20240904040530 | Accelerating Large Language Model Training with Hybrid GPU-based Compression | [
"Lang Xu",
"Quentin Anthony",
"Qinghua Zhou",
"Nawras Alnaasan",
"Radha R. Gulhane",
"Aamir Shafi",
"Hari Subramoni",
"Dhabaleswar K. Panda"
] | cs.DC | [
"cs.DC",
"cs.AI"
] |
Accelerating Large Language Model Training with Hybrid GPU-based Compression
Lang Xu
The Ohio State University
Columbus, Ohio
[email protected]
Quentin Anthony
The Ohio State University
Columbus, Ohio
[email protected]
Qinghua Zhou
The Ohio State University
Columbus, Ohio
[email protected]
Nawras Alnaasan
The Ohio State University
Columbus, Ohio
[email protected]
Radha Gulhane
The Ohio State University
Columbus, Ohio
[email protected]
Aamir Shafi
The Ohio State University
Columbus, Ohio
[email protected]
Hari Subramoni
The Ohio State University
Columbus, Ohio
[email protected]
Dhabaleswar K. (DK) Panda
The Ohio State University
Columbus, Ohio
[email protected]
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
http://arxiv.org/abs/2409.02465v1 | 20240904062822 | DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels | [
"Zhe Xu",
"Jiasheng Ye",
"Xiangyang Liu",
"Tianxiang Sun",
"Xiaoran Liu",
"Qipeng Guo",
"Linlin Li",
"Qun Liu",
"Xuanjing Huang",
"Xipeng Qiu"
] | cs.CL | [
"cs.CL"
] |
UAV-Mounted Movable Antenna: Joint Optimization of UAV Placement and Antenna Configuration
Xiao-Wei Tang, Yunmei Shi, Yi Huang, and Qingqing Wu
Xiao-Wei Tang, Yunmei Shi, Yi Huang ({xwtang, ymshi, and huangyi718b}@tongji.edu.cn) are with the Department of Information and Communication Engineering, Tongji University, Shanghai, China.
Qingqing Wu ([email protected]) is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
3 September 2024
======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
With the rapid advancement of Large Language Models (LLMs), long-context information understanding and processing have become a hot topic in academia and industry. However, benchmarks for evaluating the ability of LLMs to handle long-context information do not seem to have kept pace with the development of LLMs. Despite the emergence of various long-context evaluation benchmarks, the types of capability assessed are still limited, without new capability dimensions. In this paper, we introduce DetectiveQA, a narrative reasoning benchmark featuring an average context length of over 100K tokens. DetectiveQA focuses on evaluating the long-context reasoning ability of LLMs, which not only requires a full understanding of the context but also requires extracting important evidences from the context and reasoning over the extracted evidences to answer the given questions. This is a new dimension of capability evaluation, which is more in line with the current intelligence level of LLMs. We use detective novels as data sources, which naturally contain various reasoning elements. Finally, we manually annotated 600 questions in Chinese and then also provided an English edition of the context information and questions. We evaluate many long-context LLMs on DetectiveQA, including commercial and open-sourced models, and the results indicate that existing long-context LLMs still require significant advancements to effectively process questions with genuine long-context dependencies.
§ INTRODUCTION
The development of Large Language Models (LLMs) <cit.> has surged remarkably in recent years. The ability to understand and process long contexts is important in the development of LLMs <cit.>. This capability is essential for basic tasks that require a deep understanding of lengthy documents, such as information extraction, summarization, or translation of long documents. With this capability, LLMs can be applied to a wider range of scenarios (e.g., legal, academic, or financial fields). In addition to the rapid application of LLMs to these basic tasks, the development of LLMs has also allowed researchers to see the dawn of AGI (artificial general intelligence). Both academia and industry are exploring the application of LLMs in broader fields, such as advanced intelligent agents, robotics, and so on <cit.>. This kind of application requires LLMs to have a stronger understanding and reasoning ability over long documents. Such reasoning ability is the key to inferring the next action from the context information, rather than the basic capability of extracting the action directly from the context information. Just like humans, models should be able to deduce from all their previous experiences what actions they are going to perform next. This is a more advanced capability, a new dimension that needs to be evaluated for rapidly evolving LLMs.
Many long-context evaluation benchmarks have emerged alongside the development of long-context LLMs. To quickly fill the gaps in long-context evaluation, some new benchmarks choose to integrate and transform large sets of existing datasets <cit.>. However, the context length of these datasets is mostly below 20K, which makes it difficult to meet the evaluation requirements of current long-context LLMs. To extend the context length, other benchmarks re-select data sources and annotate new questions <cit.>. However, these newly annotated datasets tend to focus on a few basic evaluation dimensions of the older benchmarks, such as information retrieval, summarization, code completion, and multi-hop reasoning. Although these benchmarks cover many application scenarios of long-context LLMs, it is difficult to measure the performance of LLMs in more advanced intelligent applications, such as advanced intelligent agents or robotics.
To fill this gap, we introduce DetectiveQA, a novel benchmark that evaluates the performance of long-context LLMs on long-context questions. Unlike existing benchmarks, DetectiveQA addresses both longer contexts and longer-range context dependencies. Moreover, it introduces narrative reasoning capability as a new dimension of assessment. We choose orthodox-school detective novels as our data source, which not only have long contexts but also feature complex plots and character relationships. To construct DetectiveQA, we deeply analysed and annotated these novels, extracted a large number of long-context questions, and provided detailed answers. These questions cover a variety of types, including character relationships, plot development, and motivation analysis, aiming to comprehensively assess the narrative reasoning capability of long-context language models.
The proposal of DetectiveQA aims to provide a new evaluation tool for the research and application of long context language models, to help researchers gain a deeper understanding of the performance and limitations of long context language models, as well as to support the advancement of long context language models in the field of more advanced intelligent applications.
In summary, our contributions are threefold.
* We present the first long context narrative reasoning dataset, which helps to better evaluate the model's reasoning ability for complex problems in narrative contexts.
* We have designed rich evaluation metrics that take into account data contamination issues and decoupling of long text capabilities, which helps to better analyse the performance of the model.
* We have extensively evaluated the long-context reasoning capabilities of current large-scale language models, and have clarified the challenges that the key capability of narrative reasoning faces in the development of current large-scale language models.
§ RELATED WORK
Long-Context Evaluation Benchmark
Regarding datasets, NarrativeQA <cit.>, QuALITY <cit.>, and TriviaQA <cit.> provide document-based question-answering datasets, constructing longer inputs by supplying supporting documents. These datasets are useful for evaluating the information retrieval capabilities of large language models. QuALITY <cit.> provides a unique evaluation perspective by using multiple-choice accuracy instead of BLEU or ROUGE-L. Meanwhile, HotpotQA <cit.> evaluates the model's multi-hop inference ability by using multiple related facts that can be chained progressively, while ELI5 <cit.> constructs datasets with longer responses by collecting questions and answers that are understandable to children as young as 5 years old, requiring the model to make inferences in order to respond appropriately.
Many benchmarks also focus on measuring long-text ability: Long Range Arena <cit.> designed six classification tasks on longer inputs, and CAB <cit.> designed seven tasks for a more comprehensive measurement of skills. SCROLLS <cit.> and its extension ZEROSCROLLS <cit.> include documents from a variety of domains and also propose a variety of tasks, including query-based summarization, multi-hop problem analysis, sentiment aggregation, and sorting of book chapter summaries. LongBench <cit.> covers multilingual measures and focuses mainly on long-text comprehension in large language models. L-EVAL <cit.> contains both open-ended and closed-ended tasks, with inputs of around 30K in length.
The datasets and benchmarks used in these evaluations were relatively long at the time and placed some demands on the models' inference capabilities. However, the input lengths were still relatively short, no more than 50K.
To probe evaluation at greater lengths, datasets such as Needle in a Haystack <cit.> and bABILONG <cit.> have reached context lengths on the order of 100K. Needle in a Haystack <cit.> evaluates model performance at various lengths and insertion depths, while bABILONG <cit.> creates multi-hop inference problems by inserting additional facts into very long contextual inputs.
Meanwhile, InfiniteBench <cit.> focuses on the need for evaluation at greater lengths, building a suite of tasks with contexts exceeding 100K, aimed at testing the model's five key capabilities for longer text: retrieval, maths, code, QA, and summaries.
Nowadays, with the development of large language models, the need for supporting very long inputs and for stronger model reasoning capabilities has become more prominent. In the past, datasets often fell short in length, and their reasoning was closer to an information extraction task, where the answer would appear explicitly in the relevant document.
Therefore, we have constructed the only dataset with an input length of 100K that requires complex narrative reasoning to obtain answers that are not explicitly stated in the text. Additionally, we propose a more comprehensive evaluation criterion to assess the model's narrative reasoning ability.
§ DETECTIVEQA
In this section, we introduce DetectiveQA, a benchmark dataset to test the long-context reasoning ability of language models.
§.§ Data Sources
A representative type of data for studying language models' ability to handle long contexts is books, among which detective novels are a category that contains intensive reasoning-related content.
Therefore, we consider detective novels as promising candidates to be data sources of our benchmark.
Nevertheless, we find that a large portion of detective novels put the attractiveness of storytelling first, at the expense of the strictness of their reasoning processes.
Fortunately, we find a group of detective novels categorized as orthodox school <cit.>.
These novels are dedicated to entertaining readers keen on solving puzzles by ensuring that the reader has access to the same evidences as the detective in the novel, making them ideal data sources that satisfy our need for rigorous reasoning.
Therefore, we collect orthodox detective novels as sources of long context and use questions related to the puzzles in the novels to test the language models.
Other considerations on data sources are their lengths and languages.
A smooth gradient of difficulty helps to differentiate models at varying levels of proficiency.
Therefore, we collect orthodox detective novels with lengths ranging from 100K to 250K words.
Additionally, we only collect the Chinese and English versions given the language background of the researchers and data annotators.
§.§ Desiderata of Data Annotation
Our data are largely question-answer pairs, as shown in Figure <ref>.
To ensure the efficacy of our data in evaluating the long-context reasoning ability of language models, we make several design decisions on the annotations of both questions and answers.
Questions.
Each question is composed of a long context from the detective novels we collect and a multiple-choice question about the context.
For the context, we truncate the novel at the paragraph that reveals the answer to the question in order to avoid answer leakage.
And we allow multiple questions for a novel to improve the utilization of the books.
For the questions, we design them as multiple-choice questions to ease extracting answers from model outputs, similar to the practice in many prominent benchmarks for large language models <cit.>.
We also require the question to center around the detective's reasoning about the cases.
In this way, we exclude questions that are too trivial to require understanding or extracting evidences from a long context[Here is an example of overly trivial questions: a novel mentions that a character is six years old when her sister is born, and then the question is how many years she is older than her sister.].
Answers.
We require the answers to contain reference solutions with decomposed steps.
This helps fine-grained evaluation of the stepwise correctness of the model outputs.
Typically, the reasoning steps are either evidences drawn from the context or inferences from these evidences.
The two kinds of steps correspond to the ability of language models to understand long contexts and perform reasoning, respectively.
Therefore, we further differentiate them in our data annotations to facilitate disentangled evaluations of the two aspects.
An exemplary data sample is in Figure <ref>. In summary, a data entry contains the following items (a schematic sketch of one entry is given after the list):
* A multiple-choice question with four candidate options, attaching a long novel as its context.
* The answer option and the reasoning process in the form of a list containing evidences and inferences.
* Evidence positions corresponding to the reasoning steps. Each indicates the passage in which the corresponding evidence appears in the text; if the step is an inference rather than an evidence, the position is labeled -1.
* Answer position representing where the answer appears in the novel. It is used to truncate the text to ask the question.
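To make this structure concrete, a single entry can be pictured as the following Python sketch; the field names and toy values are our own illustration rather than the literal release format.

```python
entry = {
    "question": "Who had the opportunity to poison the wine?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "B",
    "reasoning": [
        "the study door was locked from the inside",      # evidence
        "only the butler kept a spare key",               # evidence
        "hence only the butler could enter unnoticed",    # inference
    ],
    "evidence_position": [132, 587, -1],   # paragraph indices; -1 marks an inference
    "answer_position": 1204,               # paragraph where the answer appears (truncation point)
}
```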
§.§ Annotation Procedure
Human Annotations. A naive approach to annotating the data is to employ human labor.
We can hire workers to read the novels, write down the questions, and provide reference answers.
However, such a process is intolerably cumbersome due to the requirement of reading long novels <cit.>.
As recorded, it takes a median of around 3.5 hours for our annotators to finish reading a 100K-long novel.
This is a relatively large drain on annotators' time and mental energy, making it difficult to scale up the size of the dataset.
As a consequence, we seek an alternative approach to enable efficient data annotations.
Annotation with AI Assistance.
Our solution is to leverage existing language models with strong long-context capabilities to assist the human annotators.
Our key insight is that, given the full detective novels, finding the reasoning questions as well as their answers can be treated as an information extraction problem, a much simpler task on which large language models have achieved plausible performance <cit.>.
Hereby, we design a workflow to decompose the data annotation procedure into steps of information extraction tasks and employ Claude 2, a leading long-context large language model, for assistance.
* First, we input the full novel and use the model to extract the inferences drawn by the detectives in the novel. These form the reasoning chains in our data. To help human annotators check the extraction later, we prepend indices to the paragraphs of the novel and require the model to output where the drawn inferences are located.
* Then, with the extracted inferences, we require the model to seek the positions where the evidences mentioned in the inferences lie.
* Finally, we use models to synthesize questions for each of the extracted reasoning chains.
Given the 100K context limit of Claude 2, we decompose novels exceeding this length into chunks.
We draw reasoning chains (the first step) and ask questions (the third step) in each chunk, respectively.
For gathering the mentioned evidences (the second step), we query all the chunks for each reasoning chain.
After these model calls, human annotators only need to make certain revisions by checking the model outputs against the related paragraphs[To help annotators gain a coherent understanding of the novel content, which we find helpful for their efficiency in checking the AI annotations, we also use the models to provide summarization of the novels.], without reading the full novels or reconstructing the reasoning process themselves. This human calibration ensures the precision and soundness of the annotations. Although human annotators are still required to proofread the content, this somewhat mitigates the large overhead of purely manual annotation. A schematic sketch of this workflow is given below.
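The sketch below outlines the chunked pipeline; call_llm is a placeholder for the long-context model (Claude 2 in our case), and the prompts are paraphrases rather than the exact ones we used.

```python
CHUNK_BUDGET = 100_000                      # rough context limit of the assisting model

def chunk_novel(paragraphs, budget=CHUNK_BUDGET):
    """Greedily pack indexed paragraphs into chunks that stay below the budget."""
    chunks, current, size = [], [], 0
    for idx, para in enumerate(paragraphs):
        if current and size + len(para) > budget:
            chunks.append(current)
            current, size = [], 0
        current.append((idx, para))
        size += len(para)
    if current:
        chunks.append(current)
    return chunks

def draft_annotations(paragraphs, call_llm):
    drafts = []
    for chunk in chunk_novel(paragraphs):
        text = "\n".join(f"[{i}] {p}" for i, p in chunk)
        # step 1: extract the detective's inferences together with paragraph indices
        chains = call_llm("List the inferences drawn by the detective and the index "
                          "of the paragraph where each appears:\n" + text)
        # step 3: turn each reasoning chain into a multiple-choice question
        question = call_llm("Write a multiple-choice question that is answered by this "
                            "reasoning chain:\n" + chains)
        drafts.append({"reasoning": chains, "question": question})
    # step 2 (queries over all chunks for the evidences of each chain) and the final
    # human revision pass are omitted here for brevity
    return drafts
```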
§.§ Statistics
In total, DetectiveQA contains 1200 question-answer pairs, among which 308 are from human annotations and 892 are annotated with the assistance of AI.
We detail the statistics of the data samples as follows.
Table <ref> presents data statistics for the subset of novels annotated through both human annotation and AI-assisted annotation. While the AI-assisted annotations produce longer questions, the manual annotations produce more detailed evidence and reasoning sections and slightly lower coverage than the AI-assisted annotations. Overall, the difference in quality between manual and AI-assisted annotations is minimal, suggesting that the use of AI-assisted annotation is feasible.
Context lengths.
We show the distribution of context lengths in Figure <ref>.
The length of our queries ranges from 4K to over 250K words, with an average of 96K words.
The average length approaches the supported context length of most competent large language models: 100K words.
Such a scale of context length makes our approach sufficient to cover the length of the context window provided by most models, thus providing a sufficiently long measurement scheme for long text capabilities.
Context coverage.
We introduce a coverage factor to quantify the “global” nature of a question. Formally, the coverage factor is defined as the length of the context from the earliest evidence to the answer location as a percentage of the total contextual input length.
Depicted in Figure <ref>, our question coverage can be broadly categorized into three bands: 10 percent, 50 percent, and 100 percent.
The substantial contextual coverage poses a significant challenge to the model's comprehension and information-gathering abilities from lengthy articles, rendering our dataset an effective assessment of the model's proficiency in reading and comprehending extensive texts.
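Concretely, the coverage factor of a question can be computed from the annotated positions as in the short sketch below (word or character offsets are an assumed representation; any consistent position unit works).

    def coverage_factor(evidence_positions, answer_position, context_length):
        """Fraction of the input context spanned from the earliest evidence to the answer."""
        earliest = min(evidence_positions)
        return (answer_position - earliest) / context_length

    # e.g. the first evidence at position 12_000 and the answer at 95_000 in a
    # 100_000-word input give a coverage factor of 0.83, i.e. the question
    # requires reading about 83% of the context.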
Statistics of the reasoning.
Table <ref> reveals that the answers in our dataset carry substantial narrative-reasoning information. Additionally, the labeled questions demand a broad span of evidences, so a relatively long contextual span must be explored to derive the answer.
§ EXPERIMENTS
In this section, we evaluate prevailing large language models supporting long context on DetectiveQA to benchmark their capabilities in long-context narrative reasoning.
§.§ Experimental Setups
§.§.§ Experimental Settings
To explore the key elements for long-context reasoning, namely understanding long documents, extracting cue information, and reasoning about actions or responses based on cue information, our experiments employ three distinct settings for both human manual annotation and AI-assisted annotation.
Question+Context.
This fundamental setup provides the model with the novel content up to the point where the answer appears, together with a multiple-choice question, and requires the model to output both its answer and its reasoning process. It tests the model's abilities in long-text comprehension, clue extraction, and reasoning simultaneously.
Question-Only.
In this setup, we investigate whether the model has been exposed to the corresponding novel content during pre-training. We present the model with a question-only query, providing only the name and author of the novel along with the multiple-choice question. The model is then expected to output both the answer and the corresponding reasoning process.
Question+evidence.
In this configuration, we input the evidence part of the annotation into the model along with the multiple-choice question. This evidence section is akin to the result of a gold retrieval over the article for the given question. The setting removes the need to comprehend the long article and extract information, essentially testing the model's reasoning ability alone.
§.§.§ Metrics
Our evaluation methodology comprises two types of metrics.
Answer accuracy.
Similar to previous evaluations based on multiple-choice questions <cit.>, we provide the model with a question and four annotated options and require the model to output the letter corresponding to the selected option.
At this point, we calculate the percentage of correctly answered questions as the score.[Since we are using data annotated on detective novels whose content the model may have seen during pre-training, resulting in high model scores, we discuss the influence of potential data contamination in Appendix <ref>.]
Reasoning metric.
To support the reliability of the model's answer decisively, it is imperative that the output not only provides an answer but also includes an explanation supporting that answer.
To this end, we examine how many of the annotated evidences are included in the model's output reasoning process, and score the question as the fraction of annotated evidences that are covered. The average score across all questions represents the model's reasoning evaluation score on the dataset.
To judge this containment relationship, we use GPT-4 as a reviewer: it is asked to list the annotated evidences contained in the model's output, and we count them. The specific prompt we used can be found in Appendix <ref>.
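Given the containment judgments, the reasoning score of a question is simply the fraction of annotated evidences recovered in the model's reasoning. A minimal sketch, where judge_contains stands in for the GPT-4 call:

    def reasoning_score(annotated_evidences, model_reasoning, judge_contains):
        """Fraction of annotated evidences that the judge finds in the model's reasoning."""
        hits = sum(judge_contains(ev, model_reasoning) for ev in annotated_evidences)
        return hits / len(annotated_evidences)

    # The dataset-level reasoning metric is the mean of reasoning_score over all questions.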
§.§.§ Models
We conducted experiments using both open-source and closed-source models, focusing on LLMs that support long text inputs of 100K or more.
Our choice of models prioritises dialogue-enabled LLMs that can process texts of at least 100K in length[We also conducted extensive experiments on models with inputs limited to 32K or less; more results can be found in Appendix <ref>.] in order to extract meaningful information from the text. Our selection covers two broad categories: closed-source models, including GPT-4, Claude 3, and Kimi, which are known for their robustness and long-text support; and open-source models, such as chatGLM3-6B <cit.> and InternLM2-7B <cit.>. Model-specific information is listed in Table <ref>.
§.§ Main Results
We present the final experimental results in Table <ref>. The current closed-source models generally score higher than the open-source models, and all current models still have clear room for improvement on long-context narrative reasoning.
Secondly, by comparing a model's scores in the Question-Only setting with those in the Question+Context setting, we can gauge the degree of data contamination through the model's win rate for responses. Based on the data in Table <ref>, most models have win rates of 60% or higher, suggesting that data contamination is not a significant problem for these models.
§.§ Analysis
We then conduct several analyses of the results.
GPT-4-based reasoning evaluation is valid.
We manually evaluate 100 reviews output by GPT-4, treating the task as a binary judgment of whether each evidence is contained in the model's reasoning, and compute the kappa coefficient between GPT-4 and human judgments over all results. Both the kappa coefficient and the judgment accuracy reach 92% or more, so the GPT-4-based evaluation can be considered valid.
The dataset's questions are challenging.
Models such as InternLM2 and chatGLM3, which perform well on needle-in-a-haystack and InfiniteBench retrieval tasks, still fall short of the leading long-context models (GPT-4, Kimi, etc.) on our dataset.
Decoupling long text capabilities
By comparing model performance with the full context and with the gold evidences, we obtain Table <ref>. There, both correct refers to the percentage of questions answered correctly in both the Question+Context and Question+evidence settings; only with context refers to the percentage answered correctly in the Question+Context setting but incorrectly in the Question+evidence setting; and only with evidence refers to the percentage answered incorrectly in the Question+Context setting but correctly in the Question+evidence setting.
Models with high both correct scores combine strong narrative reasoning with strong long-text comprehension. If both correct is low while only with evidence is high, the model is still deficient in long-text comprehension but strong in narrative reasoning.
We can see that for most models the accuracy in the Question+evidence setting is much higher; a case study of questions answered correctly with the full context but incorrectly with only the evidences is given in Appendix <ref>.
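For reference, the three quantities can be computed from the per-question correctness flags of the two settings, e.g. as in the following sketch (variable names are illustrative):

    def decoupling_stats(correct_with_context, correct_with_evidence):
        """Both arguments are lists of per-question booleans from the two settings."""
        n = len(correct_with_context)
        pairs = list(zip(correct_with_context, correct_with_evidence))
        both   = sum(c and e for c, e in pairs) / n
        only_c = sum(c and not e for c, e in pairs) / n
        only_e = sum(e and not c for c, e in pairs) / n
        return both, only_c, only_e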
§ CONCLUSION
We introduced DetectiveQA to test models' ability to reason narratively over long contexts: the first narrative reasoning benchmark with an average context length of 100K. Using detective novels, which are real-world texts, we challenge both long-text comprehension and narrative reasoning. For each model, our test yields two scores (answer accuracy and reasoning score) in three settings. With this rich experimental setup, we can analyze model performance in depth and find that current models still face challenges in long-text comprehension, information extraction, and narrative reasoning.
We hope that our dataset will facilitate future improvements in model reasoning ability, leading to more robust AI applications and stronger machine intelligence.
§ LIMITATIONS
Our dataset only serves as an evaluation benchmark on long-context reasoning ability, while how to improve the model capability remains an open question. Meanwhile, our benchmark contains only data from detective novels and mainly serves narrative reasoning. More diverse scenarios can be included in the future.
§ ETHICS STATEMENT
We are committed to ensuring that DetectiveQA is used only for academic and scientific purposes, and we have therefore rigorously copyright-checked all of the detective novels used in DetectiveQA's annotations to ensure that the individual novels do not raise copyright problems in non-commercial use. Through this screening, we aim to respect the principle of ‘fair use’ under copyright protection and to keep our project within legal and ethical boundaries in a responsible manner.
§ PROMPT TEMPLATE
In our reasoning metric test, we mentioned using GPT-4 to count the contained evidences; the prompt template we used is shown in <ref>.
This template elicits usable responses on its own, without adding few-shot examples to help the model answer.
§ IMPLEMENTATION DETAILS
The actual experiments also require the processing of the inputs and outputs, such as text truncation, answer alignment, or a special two-step answering method to obtain the final answer. Below we describe the processing details in the following three experiments.
Text Truncation
We adopt a tail truncation approach for handling over-long inputs in each model. Differing from the approach in InfiniteBench <cit.>, the nature of our dataset leads us to believe that evidences pertinent to the reasoning questions are more likely to be found near the end of the text rather than in the initial chapters of the novel. Thus, we truncate from the beginning and retain the longest possible text at the end of the string as input.
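A minimal sketch of this truncation, assuming a generic tokenizer with encode/decode methods (token budgets and prompt formatting differ per model):

    def tail_truncate(context, question, tokenizer, max_len):
        """Keep the end of the novel, where evidences are most likely to appear."""
        q_tokens = tokenizer.encode(question)
        budget = max_len - len(q_tokens)
        c_tokens = tokenizer.encode(context)
        if len(c_tokens) > budget:
            c_tokens = c_tokens[-budget:]   # drop the beginning, keep the tail
        return tokenizer.decode(c_tokens) + "\n" + question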
Answer Alignment
As our output consists of two components – the answer and the reasoning process, we require the model's output to be in the format "answer": "x", "reasoning": "xxx" for easy storage as a dictionary using JSON. However, not all models consistently follow these instructions, leading to difficulties in loading as a dictionary. In response, we identified a specific pattern and developed a standardized response alignment script. This script enables converting responses in a particular format to dictionary form, ensuring the validity of the responses. The impact of this approach will be discussed in Appendix <ref>.
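The alignment script essentially falls back to pattern matching when JSON parsing fails; the sketch below illustrates the idea, with simplified patterns that differ from the exact per-model rules we used:

    import json, re

    def align_answer(raw_output):
        """Parse a model response into {'answer': ..., 'reasoning': ...} if possible."""
        try:
            return json.loads(raw_output)
        except json.JSONDecodeError:
            pass
        ans = re.search(r'"?answer"?\s*[::]\s*"?([ABCD])', raw_output, re.IGNORECASE)
        rea = re.search(r'"?reasoning"?\s*[::]\s*"?(.*)', raw_output, re.S | re.IGNORECASE)
        if ans:
            return {"answer": ans.group(1).upper(),
                    "reasoning": rea.group(1).strip() if rea else ""}
        return None   # counted as an invalid answer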
Special Two-stage Answer
In the case of the Qwen-7B model, we observed significant challenges in adhering to instructions for answer output. However, as our testing focuses on assessing reasoning abilities for a dataset where instruction adherence is not a primary criterion, we implemented a unique two-phase answering method. Initially, the model is tasked with providing an answer to the question, followed by a separate prompt for generating the corresponding reasoning process.
For LWM the situation is different: perhaps because the model is not tuned for multi-turn dialogue, the two-round approach still fails to produce an answer. We therefore first let the model generate its reasoning process, then append ‘So the answer is’ directly after the model's output and inspect the logits of the next token, taking the highest-probability option among A, B, C and D as the multiple-choice answer. This setup allows models that cannot follow the output instructions to complete the evaluation, increasing the usability of the dataset and broadening the range of models that can be evaluated.
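A sketch of this logit-based extraction with a Hugging Face-style causal language model interface (the model, tokenizer, and truncation details are placeholders):

    import torch

    def answer_from_logits(model, tokenizer, prompt_with_reasoning):
        """Append 'So the answer is' and pick the option with the highest next-token logit."""
        text = prompt_with_reasoning + " So the answer is"
        input_ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(input_ids).logits[0, -1]   # logits for the next token
        options = ["A", "B", "C", "D"]
        option_ids = [tokenizer.encode(" " + o, add_special_tokens=False)[0] for o in options]
        return options[int(torch.argmax(logits[option_ids]))]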
§ ANSWER ALIGNMENT
§.§ Examples of non-compliance with Output Rules
Figure <ref> shows some examples of erroneous output formats from a variety of models.
§.§ Changing Invalid Rate
After our alignment work, the invalid-answer rate is much reduced; the exact change is shown in Figure <ref>.
§ EXPERIMENT ON 32K MODELS
We have conducted extensive experiments on models with a 32K input length and find that their results are all relatively unsatisfactory.
The experimental results are shown in Table <ref>.
A larger number of parameters, however, allows a model to make fuller use of the information in the 32K text and of its own reasoning power, mitigating the problem.
§ CASE STUDY
Such cases usually result from inconsistent model output: the model identifies the correct answer in its analysis but gives the wrong conclusion or option at the end. Figure <ref> shows such a sample. We can see that instability in the model's reasoning can lead to errors in the final result.
§ INDIVIDUAL MODEL PERFORMANCE
For each model and each of the two annotation types, we derive nine metrics: the answer accuracy (Acc), reasoning metric (RM) and answer invalid rate (IR) for each of the three questioning settings, Question-Only (simp), Question+Context (deta) and Question+evidence (clue). The results are shown in Figure <ref>.
From this figure, the scores of each model in each setting can be read directly, which helps analyze data contamination <ref>, capability decoupling, and so on. The invalid-answer rate is also informative: one can modify the answering method <ref> or use answer alignment <ref> to reduce invalid answers and make the test more effective.
§ NO OPTIONS SETTING
Since our questions are presented as multiple-choice questions and the model is only required to output one of the letters A–D, an answer can be correct simply by chance, which affects the reliability of the assessment.
We therefore add a setting in which the query contains the author of the novel, the name of the novel and the question, but not the corresponding options; the model is asked to give a free-text answer instead of a letter, together with its reasoning process.
Inspecting the responses obtained in this setting shows that, without options, the model produces some correct answers, but most are still incorrect, and the model even hallucinates when the novel content is not given.
|
http://arxiv.org/abs/2409.03555v1 | 20240905141554 | Unified Framework for Neural Network Compression via Decomposition and Optimal Rank Selection | [
"Ali Aghababaei-Harandi",
"Massih-Reza Amini"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
Communication-Assisted Sensing Systems: Fundamental Limits and ISAC Waveform Design
Fuwang Dong, Member, IEEE, Fan Liu, Senior Member, IEEE, Yifeng Xiong, Member, IEEE,
Yuanhao Cui, Member, IEEE, Wei Wang, Senior Member, IEEE, Shi Jin, Fellow, IEEE
(Corresponding author: Fan Liu.)
Part of this paper was presented at IEEE International Symposium on Information Theory (ISIT), 2024 <cit.>.
Fuwang Dong, and Fan Liu are with the School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen 518055, China. (email: {dongfw, liuf6}@sustech.edu.cn).
Yifeng Xiong, and Yuanhao Cui are with the School of Information and Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China. (email: {yifengxiong, cuiyuanhao}@bupt.edu.cn).
Wei Wang is with the College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin 150001, China. (email: [email protected]).
Shi Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China. (e-mail: [email protected]).
September 9, 2024
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Despite their high accuracy, complex neural networks demand significant computational resources, posing challenges for deployment on resource-constrained devices such as mobile phones and embedded systems. Compression algorithms have been developed to address these challenges by reducing model size and computational demands while maintaining accuracy. Among these approaches, factorization methods based on tensor decomposition are theoretically sound and effective. However, they face difficulties in selecting the appropriate rank for decomposition. This paper tackles this issue by presenting a unified framework that simultaneously applies decomposition and optimal rank selection, employing a composite compression loss within defined rank constraints. Our approach includes an automatic rank search in a continuous space, efficiently identifying optimal rank configurations without the use of training data, making it computationally efficient. Combined with a subsequent fine-tuning step, our approach maintains the performance of highly compressed models on par with their original counterparts. Using various benchmark datasets, we demonstrate the efficacy of our method through a comprehensive analysis.
§ INTRODUCTION
In recent years, deep learning has revolutionized various scientific fields, including computer vision and natural language processing <cit.>. Complex neural networks, with often millions or even billions of parameters, have achieved unprecedented levels of accuracy across a diverse array of tasks. However, this exceptional performance comes at a cost in terms of computational resources. The immense size of these state-of-the-art neural networks presents a challenge for their deployment on devices and platforms with limited resources, such as mobile phones, edge devices, and embedded systems <cit.>. The storage, memory, and processing requirements of these models often prove to be unfeasible or excessively costly, thus limiting their practicality and accessibility.
Recent research have introduced a range of compression algorithms aimed at addressing challenges related to cost-effectiveness, scalability, and real-time responsiveness <cit.>. These studies can be classified into four primary categories of approaches, all of which effectively reduce a model's size and computational demands while preserving its accuracy. One of the most straightforward methods is pruning, which involves identifying and removing insignificant weights from the model <cit.>. Quantization, on the other hand, focuses on reducing the precision of numerical values in the model, typically transitioning from 32-bit floating-point numbers to lower bit-width fixed-point numbers <cit.>. Knowledge distillation is another technique where a smaller model, often termed the “student”, is trained to mimic the behavior of a larger model, known as the “teacher” <cit.>. This knowledge transfer results in the creation of a smaller model capable of approximating the performance of the larger one. Lastly, factorization methods partition the weight matrices or tensors of a neural network into smaller matrices or tensors, effectively reducing the number of parameters in the model <cit.>. While factorization techniques have proven to be effective and efficient in reducing model size, a significant challenge lies in the selection of the appropriate rank for the decomposition process.
Non-uniqueness in tensor rank is a critical challenge in tensor decomposition research. Most tensor decomposition problems, particularly CP decomposition, are NP-hard, as noted by <cit.>. This non-uniqueness arises from ambiguities in the factorization and scaling process, allowing different decompositions to produce the same original tensor. Finding the ideal rank remains an ongoing research topic, and determining multiple tensor ranks for the weights of different layers in deep neural networks is not well suited for conventional hyperparameter selection techniques like cross-validation. As a result, it is standard practice to choose a single rank for all decompositions within a task based on a compression rate. However, this simplification can lead to significant performance degradation specially in complex models.
Recent studies propose automated methods for determining the ranks of decomposition <cit.>. However, many of these approaches, including reinforcement methods, greedy search algorithms, and SuperNet search, can be computationally expensive and time-consuming, particularly for large models and datasets. Moreover, the effectiveness of automated rank selection methods often depends on the choice of hyperparameters, such as learning rates or regularization parameters, which can be challenging to tune. Additionally, none of these existing approaches cover a wide range of search space, preventing the achievement of ideal compression rates.
In this paper, we address these challenges by introducing a unified framework, Optimal Rank Tensor decOmpoSition, which simultaneously tackles both the decomposition and the optimal rank selection tasks. This is achieved through a composite compression loss under specified rank constraints. Moreover, when we combine this rank search with a subsequent fine-tuning step, our experiments demonstrate that the performance of the resulting highly compressed model remains on par with the original model. Overall, the contributions of this paper are summarized as follows:
* Our framework achieves maximum compression rates by covering all ranks in the search space through a simple and efficient multi-step search process that explores ranks from low to high resolution.
* The proposed search method involves an automatic rank search in a continuous space, which efficiently identifies the optimal rank configurations for layer decomposition without requiring data, making it computationally efficient.
* We perform a comprehensive analysis of the various components of our approach, highlighting its efficacy across several benchmark datasets. We achieve improvements in some experiments, notably improvements in all metrics in the case of ResNet-18, and competitive results in the others. Moreover, our method speeds up the search phase compared to related work.
To our knowledge, this is the first effort to use bilevel optimization to find the optimal ranks of a pre-trained model.
§ RELATED WORKS
Decomposition techniques, despite their apparent simplicity, have garnered considerable attention in the field of deep learning, particularly in natural language processing (NLP) <cit.>. They provide an efficient means to fine-tune large language models, offering notable advantages over alternative methods such as integer-based compression <cit.>, knowledge distillation <cit.>, and gradient-based pruning <cit.>. Despite their straightforward nature, decomposition approaches serve as a robust compression tool with a high compression rate and relatively lower computational cost. Their application extends beyond NLP, finding utility in computer vision models <cit.>. However, selecting the appropriate rank for compressing deep neural models using decomposition techniques is an NP-hard challenge <cit.>. Research in this domain can be categorized into two main approaches.
The first approach is a rank-fixed setting, where the ranks of layers are determined based on a predefined compression rate target. Applying this approach to pre-trained models was initially explored by <cit.>, who employed a low-rank loss to substitute the weights of convolution layers with their low-rank approximations. Specifically, two main low-rank approximation methods, namely CP and Tucker decomposition, are utilized to break down layers of pre-trained models <cit.>. Moreover, <cit.> discovered the instability of fine-tuning after CP decomposition and addressed this issue by introducing a stability term into the decomposition process.
In addition, some researchers design decomposed models that are trained from scratch. <cit.> demonstrated the compression of deep models by redesigning convolution and fully connected layers with tensor trains. Furthermore, the computational challenges faced by LSTM models in vision tasks have been mitigated by employing tensor train and tensor ring decompositions <cit.>. <cit.> exploit the potential of tensor rings to compress models by replacing layers with their tensor ring decompositions. From another perspective, <cit.> proposed a novel rank-constrained training scheme that yields inherently low-rank layers. Given that transformers contain a huge number of parameters, <cit.>
tensorized pre-trained language models with matrix factorization and tensor decomposition.
However, a technical analysis of the rank-fixed setting approach uncovers certain challenges. Firstly, selecting the appropriate rank for layers relies on human expertise, introducing a potential bottleneck. Secondly, there is a lack of interpretable patterns between layer ranks, resulting in a disconnect among chosen ranks across layers. These limitations occasionally lead to accuracy drops or insufficient compression rates.
The second approach involves determining the optimal ranks by framing the optimization problem based on the ranks of layers. <cit.> introduce an iterative method to gradually decrease the ranks of the layers in each step of the search. In particular, the discrete nature of the rank search prompts researchers such as <cit.> to employ discrete search algorithms such as reinforcement learning and progressive search to identify optimal ranks. In addition, <cit.> compress the model by imposing constraints on ranks and budget, employing an iterative optimization strategy. Furthermore, in exploring the ranks of transformer layers, <cit.> propose a two-step search process through evolutionary search techniques to determine the optimal rank.
Moreover, recent studies <cit.> aim to utilize a continuous search space to determine optimal ranks, as does our proposed approach. There are three main differences between our method and theirs. First, these approaches search for ranks by training a large decomposed SuperNet from scratch, incurring substantial computational expense, whereas we utilize a pre-trained network to ascertain the optimal layer ranks. Secondly, we propose a straightforward and effective loss function that operates independently of the data, thus speeding up the search process for large models. Finally, we leverage engineering insights to partition our search space into smaller search steps encompassing a wider range of ranks, while they limit their exploration to a constrained rank search space, which hinders achieving high compression rates.
§ BACKGROUND AND PRELIMINARIES
In this section, we present notation and background relevant to our approach.
Notation. We represent indices using italicized letters, sets with italic calligraphic letters and tensors as multidimensional arrays with bold calligraphic script letters. For two-dimensional arrays (matrices) and one-dimensional arrays (vectors), we use bold capital letters and bold lowercase letters, respectively.
Decomposition. Tensor decomposition is a method that transforms a multi-dimensional array of data into a series of lower-dimensional tensors, thereby reducing both the data size and computational complexity. Singular value decomposition (SVD) stands as the most basic form of decomposition, typically applied to two-dimensional matrices. the generalized concept of SVD is used for general tensors, known as tensor decomposition. The prevalent tensor decomposition techniques encompass canonical polyadic (CP), Tucker, tensor train (TT) and tensor ring (TR) decomposition <cit.>.
Without loss of generality, this paper explores CP and TT for tensor decomposition. CP decomposition entails breaking down a given weight tensor W∈ℝ^I_1× I_2×⋯× I_N into a series of rank-one tensors which can mathematically be represented as follows:
Ŵ^(R)(i_1,i_2,...,i_N) = ∑_r=1^R v_1^(r)(i_1) v_2^(r)(i_2) ⋯ v_N^(r)(i_N)
= ⟦ V_1, V_2, ..., V_N ⟧
where Ŵ^(R) represents the rank-R decomposition of W, the v_n^(r) are vectors of the same size as the dimension I_n of W, and v_n^(r)(i_n) denotes the i_n-th element of the r-th factor vector of the n-th mode. ⟦ V_1, V_2, ..., V_N ⟧ is the Kruskal operator, which represents the tensor constructed from the factor matrices V_1, V_2, ..., V_N, where each matrix V_n contains the vectors v_n^(r) as its columns.
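As a concrete illustration, the Kruskal reconstruction above can be written in a few lines of NumPy; this is an illustrative check of the notation rather than part of the compression pipeline, and the factor matrices V_n of shape I_n × R are assumed given:

    import numpy as np

    def cp_reconstruct(factors):
        """Rebuild a tensor from CP factor matrices [V_1, ..., V_N], each of shape (I_n, R)."""
        rank = factors[0].shape[1]
        shape = tuple(V.shape[0] for V in factors)
        out = np.zeros(shape)
        for r in range(rank):
            rank_one = factors[0][:, r]
            for V in factors[1:]:
                rank_one = np.multiply.outer(rank_one, V[:, r])   # chained outer products
            out += rank_one
        return out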
TT decomposition also decomposes a tensor into smaller tensors with dimensions connected like a chain to each other. This decomposition mathematically can be represented as follows:
Ŵ^(R_1,…,R_N-1)(i_1,i_2,...,i_N) = ∑_j_1=1^R_1⋯∑_j_N-1=1^R_N-1G_1(i_1,j_1)
G_2(j_1,i_2,j_2) ⋯G_N(j_N-1,i_N)
where the tuple (R_1,R_2,…,R_N-1) represents the rank of the TT decomposition, and 𝒢_k are the TT cores with sizes R_k-1× I_k× R_k, and R_0=R_N=1.
A visualization of these methods for a three-dimensional tensor is provided in Figure <ref>.
Layer decomposition representation. In our study, we focus on the decomposition of convolutional layers, which, when stacked, form a tensor. Given a convolutional layer with a weight tensor W∈R^b× h× w× c, the forward process for an input tensor X∈R^k_1× k_2× k_3 can be expressed as:
Y =∑_i_1=0^k_1-1∑_i_2=0^k_2-1∑_i_3=0^k_3-1W(t,x+i_1,y+i_2,z+i_3) X (i_1,i_2,i_3)
Specifically, we investigate how the weight tensor of a convolutional layer can be decomposed into multiple smaller convolution operations. We utilize CP and TT decompositions, as detailed in the following formulations.
* CP decomposition:
Y = ∑_r=1^R V_t(t,r) ( ∑_i_1=0^k_1-1 V_x(x+i_1,r) ( ∑_i_2=0^k_2-1 V_y(y+i_2,r) ( ∑_i_3=0^k_3-1 V_s(z+i_3,r) X(i_1,i_2,i_3) ) ) )
* TT decomposition
Y =∑_r_1=1^R_1∑_r_2=1^R_2G_t(t,r_1) (∑_i_1=0^k_1-1∑_i_2=0^k_2-1G_y(r_1,x+i_1,y+i_2,r_2)
(∑_i_3=0^k_3-1G_s(i_3,r_2) X (i_1,i_2,i_3)) )
§ OPTIMAL RANK TENSOR DECOMPOSITION
Given a pre-trained neural network with n hidden layers and convolutional weights (W_i)_i=1^n, our objective is to achieve a low-rank decomposition of these weights with the smallest possible ranks and which can be formulated as the following optimization problem:
min_{W^ℛ} ℒ_d(W^ℛ)   s.t.   min_{R_1,…,R_n} ∑_i=1^n R_i
where ℒ_d(.) is a decomposition loss function, W^ℛ = {Ŵ^(R_1)_1,…, Ŵ^(R_n)_n } is the set of decompositions to be found with ℛ = {R_1,…,R_n} the set of ranks, and Ŵ^(R_i)_i the decomposition of rank R_i corresponding to the convolutional weight W_i of layer i. Note that in state-of-the-art compression models using decomposition techniques, the ranks of the decompositions for each convolutional weight are typically treated as hyperparameters and are fixed. In contrast, our study proposes to automatically determine these ranks along with the decompositions.
§.§ Problem Formulation
In Eq. <ref>, the constraint on the ranks is to select the lowest possible rank for each tensor decomposition while maintaining minimal decomposition error. To achieve this, we replace each layer i∈{1,…,n} of the network with a super layer, which contains decomposed weights of various ranks from a given set ℛ_i, denoted as (Ŵ_i^(r))_{r∈ℛ_i}, corresponding to the convolutional weight of that layer, W_i. The structure of this super layer is illustrated in Figure <ref> (left). To make equation (<ref>) continuous, we associate a probability p_i^(r) with each decomposition of rank r in layer i based on a learnable parameter α_i^(r). This probability is computed as p_i^(r)=softmax(α_i^(r)) and plays a role in the optimization process by serving as a coefficient for the rank. Inspired by <cit.>, the rank constraint in equation (<ref>) is formalized by the following loss:
ℒ_R = ∑_i=1^n ( ∑_{r∈ℛ_i} p_i^(r) r )^β
where β∈[0,1] is a hyperparameter. Traditional methods often rely on using data for model compression via tensor decomposition techniques <cit.>. In contrast, our approach optimizes without directly using data. Instead, we minimize the mean-square decomposition error (MSDE), measured by the Frobenius norm, between the convolutional tensor weights of the pre-trained model and their corresponding tensor approximations, as defined by equation (<ref>) for CP decomposition or (<ref>) for TT decomposition. This aims to accelerate the convergence of the optimization process by simplifying the objective to a weight-based measure rather than data-driven metrics. After determining the decompositions, we fine-tune the model using data to mitigate approximation errors and preserve performance. Our total loss for a model with n layers is as follows:
ℒ_T = ∑_i=1^n [ ‖ 𝒲_i - ∑_{r∈ℛ_i} p_i^(r) 𝒲̂^(r)_i ‖^2_F + γ ( ∑_{r∈ℛ_i} p_i^(r) r )^β ]
For all layers, the decomposition loss in the left term of equation (<ref>) is designed to minimize the approximation error within their corresponding super-layers, while the rank loss in the right term aims to maximize the probability of selecting the optimal super-layer for each layer. This combination introduces a trade-off, where the objective of finding the best approximations competes with the goal of selecting the lowest possible ranks. The hyperparameter γ expresses this trade-off. In the next section, we will present an approach that examines all possible ranks within a broad search space for an optimal compression rate.
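Because the total loss involves only the stored pre-trained weights and the candidate decompositions, it can be evaluated without any training data. A hedged PyTorch-style sketch of this objective follows; the container structures (per-layer lists of candidate reconstructions and rank values) are assumptions about the implementation:

    import torch
    import torch.nn.functional as F

    def total_loss(pretrained_weights, candidates, alphas, ranks, gamma, beta):
        """candidates[i][k]: reconstructed tensor of the k-th candidate rank for layer i;
        alphas[i]: learnable logits over the candidate ranks of layer i;
        ranks[i]: 1-D float tensor holding the candidate rank values of layer i."""
        loss = 0.0
        for W, cands, a, rs in zip(pretrained_weights, candidates, alphas, ranks):
            p = F.softmax(a, dim=0)                          # probability per candidate rank
            approx = sum(p[k] * cands[k] for k in range(len(cands)))
            decomp_err = torch.sum((W - approx) ** 2)        # squared Frobenius norm
            expected_rank = torch.sum(p * rs)
            loss = loss + decomp_err + gamma * expected_rank ** beta
        return loss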
§.§ Rank Search Space
We define the rank search space as a collection of rank sets {ℛ_1,…,ℛ_n}, where each set ℛ_i for i∈{1,…,n} contains the rank values to be explored for decomposing the weight tensor of layer i.
To efficiently explore the entire search space and find the optimal rank, we propose a multi-step search strategy that divides the space of possible ranks into smaller regions and progressively refines the search resolution. The process begins with an initial pruning phase, where specific ranks are selected at regular intervals for each layer, reducing the initial number of ranks under consideration. At this stage, promising candidate ranks are identified, narrowing down those most likely to be optimal.
Following this stage, a new search space is generated, focused around the selected ranks (SR) from the previous step, but with smaller intervals between them. This allows for a more precise examination of ranks in a concentrated region. The process is repeated, with the intervals between ranks gradually decreasing in each iteration. This progressive refinement continues until the interval between ranks is reduced to one, ensuring that all possible ranks within the search space have been explored.
Figure <ref> depicts this process for one layer. The initial search space spans a broad interval with a larger step size. After the first iteration, the process narrows down to a more focused interval around the selected rank, reducing the step size by dividing it by 10. Following the second iteration, the search space is further refined to a smaller, more precise interval with a step size of one for the final selection. To facilitate this process, a function named the rank search space function (RSS-function) is proposed. This function generates a search rank set for each layer. It takes the lower and upper bounds for the ranks, as well as a step value and the number of layers as inputs, and produces a list of ranks that define the search rank set for each layer at each iteration of the algorithm. The pseudocode of this function is provided in the supplementary material.
§.§ Tensor Decomposition and Rank Exploration
The minimization of the total loss (<ref>) is achieved by updating the tensor weights Ŵ_i^(r) of each layer i∈{1,…,n} and the probability parameters α_i^(r) for the different ranks r∈ℛ_i in the search rank set corresponding to layer i.
Most of the computational cost of optimizing this loss function is concentrated on the decomposition component, which minimizes the squared error between the decomposed layer weights and the original layer weights. By using stochastic gradient descent (SGD) instead of the classical alternating least squares (ALS) method, we can expect fast convergence, enabling to repeat this process multiple times. Each iteration of our optimization includes the following two steps:
Weight update: Ŵ^(r)_i ← Ŵ^(r)_i - η_w ∇_{Ŵ_i^(r)} (ℒ_T)
Probability update: α^(r)_i ← α^(r)_i - η_α ∇_{α_i^(r)} (ℒ_T)
Following parameter updates, the step size is reduced by a factor of 10 to define a more refined rank search space within the previous one, enabling fast convergence to the final search rank sets with a step size of one. Note that another way to reduce the computational cost of searching is by focusing on a smaller and sampled search space. This approach minimizes memory usage by considering only a subset of possible ranks during the search phase
<cit.>. However, our approach explores a broader search space, examining all possible ranks and allows to achieve an optimal compression rate. For each layer i, the tenor decomposition Ŵ_i^(r) of rank r corresponding to the highest probability p_i^(r) = softmax(α_i^(r)) enables reaching a smaller approximation error according to the total loss function (<ref>). For the next iteration, the selected rank SR_i for each layer i is the one that achieves the highest probability. The lower and upper bounds, Lb_i and Ub_i for defining the new rank search set are then set at a distance of half the step size around the selected rank SR_i.
Weight and probability updates, along with the search of the rank spaces, continue until the step size between the ranks of the final rank spaces is equal to one.
§.§ Final Decomposition and Fine-Tuning
The optimal selected ranks for the decomposition of the tensor weights of each layer, denoted as ℛ^⋆={SR_1,…,SR_n}, are then determined from these final sets. With these optimal ranks, the tensor weights are found by minimizing the decomposition loss:
ℒ_d(W^{ℛ^⋆}) = ∑_i=1^n ‖ W_i - Ŵ_i^(SR_i) ‖^2_F
To reduce the approximation error and match closely the accuracy of the original network, we perform several epochs of feature fine-tuning through distillation using the training data D. In this approach, the original network serves as the teacher model and the decomposed model acts as the student network. The pseudocode for the overall procedure is presented in Algorithm <ref>.
§ EXPERIMENTS
We evaluated our method on the CIFAR-10 <cit.> and ImageNet <cit.> datasets, testing it across various state-of-the-art deep neural networks. To demonstrate the generality of our approach, we applied both CP and TT decompositions. For TT decomposition, given the small filter dimensions in the convolutional layers, we performed the decomposition in two dimensions and, due to computational resource constraints, assumed the two TT ranks to be equal. In the search phase, the initial lower and upper bounds for all layers were set to {10,100}, {10,300}, {10,50} and {100,500} for ResNet-20, VGG-16, MobileNetV2 and ResNet-18, respectively, and the initial step sizes were set to {10, 10, 10, 100} for these models, respectively. We used the standard SGD optimizer with Nesterov momentum set to 0.9, and the hyperparameters γ and β set to 0.2 and 0.6, respectively. The initial learning rates were 0.1 for CIFAR-10 and 0.01 for ImageNet, both scaled down using CosineAnnealingLR. For fine-tuning, we recovered accuracy over 10 epochs, using the same optimizer and learning rate schedule as in the search phase.
§.§ Main Results
For the initial evaluation, we tested our method on the CIFAR-10 dataset using ResNet-20 and VGG-16 models, with the results presented in Table <ref>. Our method yields competitive results compared to state-of-the-art (SOTA) methods. For ResNet-20, our approach with CP decomposition achieves 1.24% and 1.52% better performance in the reduction of FLOPs and parameters, respectively, compared to the HALOC <cit.> method, while in accuracy our approach with TT decomposition shows a 0.08% improvement. Additionally, with the VGG-16 model, we achieved significant compression rates and accuracy. Our approach with CP decomposition resulted in an 85.23% reduction in FLOPs and a 98.6% reduction in parameters, while our approach with TT decomposition outperformed the others in accuracy by 0.04%.
The results on the ImageNet dataset are presented in Table <ref>, where we evaluated our method using ResNet-18 and MobileNetV2 models. For ResNet-18, our approach with CP decomposition yielded competitive results, while with TT decomposition it outperformed the others, achieving state-of-the-art performance across all metrics, including Top-1, Top-5, and the reductions in FLOPs and parameters. In the case of MobileNetV2, we did not achieve high performance with the CP method. This is likely due to MobileNetV2's reliance on depthwise convolution, a low-rank convolution that does not benefit significantly from decomposition in certain dimensions. However, our approach with TT decomposition demonstrated superior compression results, achieving 1.86% and 2.31% greater reductions in FLOPs and parameters, respectively, along with competitive Top-1 and Top-5 accuracy. Our results underscore the importance of selecting the appropriate decomposition method based on the model's complexity: TT decomposition is more effective for compressing higher-complexity models, such as those trained on the ImageNet dataset, while CP decomposition excels at compressing lower-complexity models, like those classically used on CIFAR-10.
§.§ Analysis and Discussion
To gain deeper insights, we obtained additional results from the ResNet-18 experiment on the ImageNet dataset. Figure <ref> illustrates the ranks for both CP and TT decompositions, highlighting that CP ranks are generally larger than those of TT due to the inherent differences in their decomposition methods. Additionally, Figure <ref> presents the behaviour of ℒ_T and ℒ_R during the two-step search process. The graph of ℒ_R shows greater fluctuations during the first search phase compared to the second, as the variance between ranks decreases as the search progresses. Meanwhile, the graph of ℒ_T remains relatively stable, reflecting its composition of two distinct loss components. We also conducted an additional analysis on the effects of γ and β in the Eq <ref>, with the results provided in the supplementary materials.
§.§ Searching Time
To compare the speed of our method with HALOC, we measured the search time for ResNet-20 and VGG-16 on the CIFAR-10 dataset. In this experiment, we applied CP decomposition for both methods, using the same rank search space and training each model for 100 epochs. For our method, the reported time includes the time required to obtain the pre-trained model as well as the search time, while HALOC involves training a SuperNet to find the optimal ranks. Table <ref> presents the search times for both methods. Our method demonstrated a faster rank search for both models, resulting in more efficient overall performance compared to HALOC.
§ CONCLUSION
In this paper we presented an approach for deep neural network compression via decomposition and optimal rank selection. Our solution has two key features. It takes all layers into account during the optimization process, aiming for high compression without sacrificing accuracy by identifying the optimal rank pattern across all layers. This approach leverages the observation that different layers contribute variably to the model's inference, allowing smaller ranks in less critical layers and determining the most effective rank pattern for each. Furthermore, to achieve a high compression rate, we explore a broad range of ranks, addressing the significant memory challenges of this extensive exploration with a multistage rank search strategy. This strategy facilitates comprehensive exploration while ensuring efficient memory usage.
§ APPENDIX
§.§ Rank Search Space function
The pseudocode of the rank search space function is given in Algorithm <ref>. The function takes the lower and upper bounds for the ranks of each layer, denoted as Lb={Lb_1,…,Lb_n} and Ub={Ub_1,…,Ub_n}, along with the step size s and the number of layers n. For each layer, it then generates a list of ranks that define the search rank set. Each rank set ℛ_i contains m real values that are evenly spaced by the step s within the range from Lb_i to Ub_i.
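A possible Python rendering of this function, kept deliberately simple (the actual algorithm may clip bounds or handle edge cases differently):

    def rank_search_space(lower_bounds, upper_bounds, step, n_layers):
        """Return one evenly spaced candidate-rank list per layer."""
        return [list(range(lower_bounds[i], upper_bounds[i] + 1, step))
                for i in range(n_layers)]

    # Example: bounds (10, 100) with step 10 give [10, 20, ..., 100] for a layer;
    # after a rank is selected, the next call uses tighter bounds and step // 10.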
§.§ Effect of γ and β on model performance
According to the definition of the total loss ℒ_T in Eq. (8) in the paper, higher values of γ and β shift the balance between the decomposition loss and the rank constraint more towards the latter. This results in the optimization process selecting lower ranks, and vice versa.
This trend is illustrated in Figure <ref>, which displays the accuracy of the compressed model (a), the compression in terms of FLOPs (b) and the compression rate in terms of the percentage of fewer parameters for different values of γ and β for ResNet-20 using CP decomposition. When γ and β are set to very high values, corresponding to lower selected ranks, the accuracy of the compressed model is low, while the compression in terms of FLOPs and the number of parameters is high. Conversely, when γ and β are set to very low values, corresponding to larger selected ranks, these trends are reversed.
|
http://arxiv.org/abs/2409.02815v1 | 20240904153349 | Development of the Multichannel Pulsed Ultrasonic Doppler Velocimeter for the measurement of liquid metal flow | [
"Ding-Yi Pan",
"Yi-Fei Huang",
"Ze Lyu",
"Juan-Cheng Yang",
"Ming-Jiu Ni"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Development of the Multichannel Pulsed Ultrasonic Doppler Velocimeter for the measurement of liquid metal flow
1]Ding-Yi Pan
1]Yi-Fei Huang
1]Ze Lyu
[2]Juan-Cheng [email protected]
[1]Ming-Jiu [email protected]
[1]School of Engineering Science, University of Chinese Academy of Sciences,
Beijing, 101408, PR China
[2]State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, PR China
In the present study, taking advantage of ultrasonic techniques, we developed a Multichannel Pulsed Ultrasonic Doppler Velocimetry (MPUDV) system to measure the two-dimension-two-component (2D–2C) velocity fields of liquid metal flow. Owing to the specially designed ultrasonic host and post-processing scheme, the MPUDV system reaches a high spatiotemporal resolution of 50 Hz and 3 mm in a measurement zone of 192 × 192 mm^2. A flow loop containing a cavity test section, which produces a classical recirculating flow, was built to validate the accuracy of MPUDV in velocity field measurement. In the initial phase of the study, water with tracer particles was selected as the working liquid so that the velocity fields could also be measured by the well-developed Particle Image Velocimetry (PIV). A comparison of the data obtained from the PIV and MPUDV methods revealed less than 3% difference in the 2D-2C velocity field between the two techniques during simultaneous measurements of the same flow field. This finding strongly demonstrates the reliability of the MPUDV method developed in this paper. Moreover, the ternary alloy GaInSn, which has a melting point below room temperature, was selected as the working liquid in the flow loop to validate the efficacy of the MPUDV in measuring 2D-2C velocity fields. A series of tests were conducted in the cavity test section at varying Reynolds numbers, ranging from 9103 to 24123. The measurements demonstrated that the MPUDV could accurately capture the flow structures in the opaque liquid metal, which were characterized by a central primary circulation eddy and two secondary eddies. Furthermore, comprehensive analyses of the velocity data obtained by the MPUDV were conducted. It was found that the vortex center of the primary circulating eddy and the size of the secondary eddies undergo significant alterations with varying Reynolds numbers, indicating the influence of inertial force on the flow characteristics in the recirculating flow. It is therefore demonstrated that the current MPUDV methodology is applicable to the measurement of 2D-2C velocity fields in opaque liquid metal flows.
[
*
September 9, 2024
=====================
§ INTRODUCTION
As a good choice for the next generation of power plants, nuclear fusion has received plenty of attention from many countries and international communities. The fusion experimental reactors (e.g. ITER, CFETR) have been established for some fundamental studies to pave the way for commercial fusion reactors<cit.>. However, numerous challenges must be overcome before achieving continuous power generation from fusion reactors. The blanket, which is an important component to realizing the tritium breeding, converting energy, effective heat transfer, et al., is crucial for the nuclear fusion system<cit.>. Specifically, the concept of a liquid metal blanket is one of the best selections due to the advantages of flowing liquid metal such as low-pressure operation and high thermal conductivity. Currently, the European Union, the United States and China have carried out designs for liquid metal blankets, such as HCLL (European Union)<cit.>, DCLL (United States)<cit.>, and COOL (China)<cit.>. However, in the Tokamak system, the strong magnetic field that must be adopted to confine the burning plasma introduces the magnetohydrodynamic (MHD) effects in the liquid metal via the induced Lorentz force when flowing across the magnetic lines. Moreover, the large temperature difference in the liquid metal blanket due to the release of nuclear heat may also influence the liquid metal flow by the introduced buoyancy force. Therefore, the design of a liquid metal blanket requires considering the flow characteristics of liquid metal, particularly under the influence of the MHD effects and large temperature gradients<cit.>. The corresponding velocity measurement method for the liquid metal flow is necessary. Moreover, measuring the velocity of liquid metal flow is also crucial in various industrial applications, such as in metallurgy and materials processing<cit.>. The high temperature and opacity of liquid metals present unique measurement challenges, making using conventional techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV), unsuitable. Consequently, measuring velocities within liquid metals has become one of the most formidable challenges in both fundamental studies and industrial applications.
Several specialized methods have been developed for measuring the liquid metal flow, which can be summarized as invasive and non-invasive methods. As invasive techniques, the force reaction probes and potential difference probes can obtain the local velocity at a measured point while having certain inherent limitations, as outlined by Molokov<cit.>. The hot-wire anemometry employs a heated wire to quantify the relationships between the flow velocity and the cooling effect of flowing liquid metal on resistance. However, it is hindered by notable drawbacks, including contamination and corrosion, as well as intricate calibration procedures<cit.>. Among non-invasive techniques, Contactless Inductive Flow Tomography (CIFT) can provide the reconstruction of three-dimensional (3D) flow fields<cit.>, but it is not suitable for turbulent flows<cit.>. In contrast, dynamic neutron radiography requires manual tracer particles and has yet to be demonstrated in obtaining the three-dimensional velocities of liquid metal flow<cit.>. In light of the advantages of ultrasonic waves such as the ability to penetrate the opaque liquid metal and the Doppler effects when meeting moving particles, Takeda<cit.> developed the Ultrasonic Doppler velocimetry (UDV) during the 1980s-1990s to measure the velocity distribution along the ultrasonic transmission line in liquid metal flow. Afterward, plenty of developments in the UDV method have been achieved, therefore the UDV method has successfully applied to experimental measurements of various liquid metals, including liquid mercury<cit.> and liquid sodium<cit.>, liquid gallium<cit.> and gallium-indium-tin alloy<cit.>. Moreover, the UDV method has been demonstrated to obtain the velocity of magnetic fluids under external magnetic field conditions<cit.>. Additionally, the UDV method has also been applied to two-phase flow systems, studying bubble motion behavior and mechanisms<cit.>, as well as liquid-metal jet driven by bubbles<cit.>.
However, the UDV has mostly been used to obtain the 1D velocity in simple flows such as pipe flow<cit.>. For complex flows, the obtained 1D velocities are not enough to describe the main flow characteristics; therefore, the two-dimensional velocity distribution is necessary. Franke et al.<cit.> presented a novel pulsed-wave ultrasound Doppler system using an array of 25 transducer elements to investigate liquid metal flow, achieving two-dimensional flow mapping of liquid metals, and further tested the capabilities of this measurement system in the study of unsteady liquid metal flows<cit.>. Müller et al.<cit.> adopted seventeen ultrasonic transducers for the reconstruction of a 2D jet flow. Nauber et al.<cit.> introduced a multidimensional Ultrasound Array Doppler Velocimeter (UADV) to obtain 2D-2C velocity fields. Maader et al.<cit.> combined the pulsed Doppler method with the phased array technique, presented the Phased Array Ultrasound Doppler Velocimeter (PAUDV), and confirmed the feasibility of phased array sensors for measuring the flow field of GaInSn at room temperature. Furthermore, the PAUDV method was applied to measure the two-dimensional velocity field at the outlet of a double-bend pipe where the flow was under turbulent conditions, and its accuracy was validated by comparing the measured velocity profiles with those obtained using the well-established Particle Image Velocimetry (PIV) method<cit.>. Recently, Tiwari<cit.> proposed a data-driven line selection method for optimizing the position of ultrasonic transducers in 2D flows. However, the currently developed ultrasonic velocity measurement systems suffer from limited temporal resolution in two-dimensional velocity measurements due to the adoption of a time-division multiplexing (TDM) scheme<cit.>; addressing this limitation is the main purpose of the present study.
A classical flow model is utilized to examine the flow transitions with varying velocities. The selected flow configuration is that of a cavity flow with recirculation. This is employed to demonstrate the effectiveness of the present-developed methodology for the extraction of velocity data. Moreover, cavity flow can reflect changes between various complex flow phenomena at different Reynolds (Re) numbers, such as multi-scale vortices, secondary flow, complex three-dimensional flow, unstable laminar flow, transition flow and turbulent flow. Accordingly, this configuration represents an optimal validation platform for the precision of measurement systems and a traditional physical model for investigating intricate fluid flows. Experimental studies typically employ lid-driven methods to induce fluid flow within the square cavity, aiming to investigate the variations of internal flow with Re and Spanwise Aspect Ratio (SAR)<cit.>, as well as the transition from laminar steady-state to unsteady flow<cit.>. Similarly, a tube-driven approach is employed in this study. The experimental measurements of the cavity flow, in conjunction with a comparison between ultrasonic measurements and PIV, serve to validate the precision of the two-dimensional ultrasonic measurement system. Furthermore, experimental observations were conducted to investigate the variations in the GaInSn flow pattern within a square cavity (with an aspect ratio of 1:1 and a spanwise aspect ratio of 0.91:1) at different Re numbers.
The structure of the article is as follows. Sect.<ref> introduces the measurement system. In Sect.<ref>, the experimental setup and measurement technique are described. In Sect.<ref>, the measurement accuracy of the MPUDV system is verified and the velocity measurement results of liquid metal are analyzed. In Sect.<ref>, the results of the experiment are presented, along with a discussion of their applications.
§ MEASERMENT SYSTEM
The ultrasonic technique is considered an effective method for material testing, including non-destructive testing. It can also be used to measure flow velocity based on the Doppler effect. In the present study, we develop the Multichannel Pulsed Ultrasonic Doppler Velocimetry (MPUDV) system, which consists of an ultrasonic host, linear array ultrasonic sensors (LAUSs) and a PC. The well-established principles of pulsed ultrasonic Doppler, as outlined by Baker<cit.>, were employed to measure the fluid flow. Using the MPUDV, two-dimensional two-component (2D-2C) velocity fields of liquid metal can be obtained. The basic architecture of the MPUDV system is shown in figure <ref> and is described in more detail in the following sections.
§.§ Ultrasonic host
The ultrasound host has 64 discrete operational ultrasound channels with a fast switching time of 5 ns. As shown in figure <ref>, the ultrasonic host is responsible for the overall control of the system, as well as for the processing of echoes, the computation of velocity fields and their subsequent visualization. The excitation units, amplification units, sensor control units, acquisition units and field-programmable gate array (FPGA) modules collectively constitute the hardware components of the ultrasound host. After configuring the channel operation schemes and sending the parameter settings to the core control unit of the system, the host PC organizes the measurement process. The control unit manages pulse excitation and communication with the PC, while the other units simultaneously carry out data acquisition and related preparation tasks, thus achieving coordination among the different functional units. The arbitrary function generators (AFG) within the excitation unit generate burst signals for the pulse excitation used to stimulate the ultrasonic sensor; they also control key ultrasonic parameters, including the excitation frequency, the pulse repetition frequency (PRF) and other related factors. These signals are subsequently sent to the amplification unit, where a radio frequency (RF) amplifier increases the amplitude of the burst signals to a higher voltage level, meeting the requirements for driving the sensor's piezoelectric elements. The amplified signals are then transmitted to the sensor control unit, which can simultaneously control all ultrasound channels according to the configured channel operation scheme. It routes the amplified pulse excitation to the respective channels, ensuring the flexibility of the ultrasound measurements and allowing customized working schemes. By driving the transmitting/receiving (TR) switch, the burst signals applied to the corresponding piezoelectric elements of the ultrasonic sensor are separated from the echo signals. The echo signals are then amplified with an exponential gain to compensate for the increasing attenuation of the echoes as they travel through the fluid over time (time gain compensation, TGC). Subsequently, the amplified signals are conveyed through the receive control module to the acquisition unit. The echo signals received by the sensor control unit, after gain and filtering, are digitized and recorded using the equipped analog-to-digital converter (ADC) card (12-bit, 25 MSamples/s). Once converted into digital form, they are transmitted to the FPGA, where real-time signal processing is conducted using custom algorithms. The processed signals are transmitted to the control unit through optical cables. The control unit then acquires the digitally IQ-demodulated Doppler signals<cit.> and conducts offline digital signal processing in MATLAB to obtain the velocity component along a single direction within the flow field. The considerable number of ultrasound channels generates a substantial quantity of echo data during long-term experimental measurements, which in turn places considerable demands on the signal processing module.
§.§ Linear array ultrasonic sensor
The linear array ultrasonic sensor (LAUS) is an important part of the MPUDV system. Figure <ref> illustrates the configuration of the LAUS and the corresponding photograph. In contrast to conventional ultrasound Doppler measurement systems, the MPUDV system employs two LAUSs arranged perpendicularly, as shown in Fig.<ref>, wherein each piezoelectric element of a single array sensor functions both as a transmitter of the ultrasound pulses and as a receiver of the echo signals. This arrangement allows one velocity component along a single direction to be measured across the two-dimensional flow (2D-1C) at the same time, with the help of the ultrasonic host described in <ref>. Moreover, to suppress mutual disturbances between the two orthogonal sensors, the MPUDV system uses two LAUSs with emission frequencies of 8 MHz and 6 MHz, respectively. In figure <ref>, each LAUS comprises 64 piezoelectric elements with a size of 2.95 × 10 mm^2, which can be operated independently. The spacing between adjacent piezoelectric elements is 0.05 mm; as a result, the center spacing of the 64 measurement lines is 3.0 mm. The two LAUSs are orthogonally arranged to span a measurement plane of 192 × 192 mm^2. The flow field can be reconstructed by measuring the velocity components at each intersection of the transducer arrays (2D-2C). For piezoelectric elements with an emission frequency f_e and N_b cycles per pulse, the axial resolution in a fluid with a sound velocity c is expressed by<cit.>
Δ x = N_b c/(2 f_e)
Furthermore, the length L_n of the near-field region of the sensor can be estimated by
L_n = k w^2/(8λ)
where k is a factor, w is the piezoelectric element width (w = 2.95 mm), and λ is the wavelength of the ultrasonic pulse.
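For orientation, the two expressions above can be evaluated for a set of assumed parameter values; the pulse length N_b, sound speed c and near-field factor k below are illustrative choices, not the exact system settings. A minimal Python sketch:

```python
# Illustrative only: axial resolution and near-field length for assumed parameters.
f_e = 8e6        # emission frequency [Hz]
N_b = 4          # assumed number of cycles per pulse
c   = 2730.0     # assumed speed of sound in GaInSn [m/s]
w   = 2.95e-3    # piezoelectric element width [m]
k   = 1.0        # assumed near-field factor

wavelength = c / f_e
axial_resolution = N_b * c / (2 * f_e)           # Delta x
near_field_length = k * w**2 / (8 * wavelength)  # L_n

print(f"axial resolution  : {axial_resolution*1e3:.2f} mm")
print(f"near-field length : {near_field_length*1e3:.2f} mm")
```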
§.§ Measurement method
The ultrasonic host receives echoes from tracer particles in the liquid metal. The echoes contain all the information necessary to reconstruct the velocity profile along the ultrasound beam direction. We assume that the tracer particles follow the fluid flow, so that the velocity of the particles reflects the local velocity of the fluid elements. The velocity v along the ultrasound beam can be obtained by estimating the Doppler frequency shift f_d:
v = c f_d/(2 f_e)
The measuring depth d is calculated based on the time offset t between the transmission and reception of ultrasound within the fluid:
d = tc/2
The echo signal is divided equally into several sections, called gates, to obtain the velocity distribution along the ultrasonic line. Each gate corresponds to an axial distance along the ultrasound beam from the sensor, and the spacing between gates is usually matched with the axial spatial resolution. The corresponding velocity at each gate position can be obtained through the phase shift of the subsequent echo signals, which corresponds to the displacement of scattering particles in the fluid between two consecutive pulses relative to the sensor's position<cit.>.
The digital signal processing of MPUDV is implemented on both the FPGA module and the PC, thereby enabling the implementation of bespoke pre- and post-processing algorithms, as well as the generation of flow field visualizations following the specific requirements of the measurement task. Figure <ref> shows the procedure of ultrasonic signal processing. The sampled echo signals are first subjected to bandpass filtering to remove noise, retaining only the signal components near the emission frequency f_e.
The filtered echo signals are processed using the IQ-demodulation and then the Delay and Sum (DAS) algorithm<cit.> for beamforming. They are multiplied by cos(2π f_e t) to obtain the in-phase signal (I-component, I(t)). Simultaneously, the bandpass-filtered ultrasound echo signals are multiplied by sin(2π f_e t) to obtain the quadrature signal (Q-component, Q(t)). Afterward, the in-phase signal I(t) and quadrature signal Q(t) are subjected to low-pass filtering separately to eliminate high-frequency components resulting from IQ demodulation. Finally, the velocity estimation is performed using autocorrelation algorithms<cit.>, providing the velocity and direction information along the ultrasound beam.
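The following Python sketch illustrates the gating and lag-1 autocorrelation (Kasai-type) velocity estimation described above for a single measurement line. It is a simplified offline illustration under assumed conventions (uniform gates, a fourth-order low-pass filter) and does not reproduce the exact FPGA/MATLAB implementation of the MPUDV system:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_velocity_profile(rf, fs, f_e, f_prf, c, n_gates):
    """Autocorrelation velocity estimate per depth gate along one beam.

    rf : (n_pulses, n_samples) band-pass-filtered echo signals, one row per pulse.
    Returns one velocity value for each of the n_gates depth gates.
    """
    n_pulses, n_samples = rf.shape
    t = np.arange(n_samples) / fs

    # IQ demodulation: mix with the carrier, then low-pass filter
    i_sig = rf * np.cos(2 * np.pi * f_e * t)
    q_sig = rf * np.sin(2 * np.pi * f_e * t)
    b, a = butter(4, f_e / 2, btype="low", fs=fs)
    iq = filtfilt(b, a, i_sig, axis=1) + 1j * filtfilt(b, a, q_sig, axis=1)

    # Split the depth axis into gates and average the IQ samples within each gate
    gates = np.array_split(iq, n_gates, axis=1)
    v = np.empty(n_gates)
    for g, gate in enumerate(gates):
        s = gate.mean(axis=1)                      # one complex sample per pulse
        acf = np.sum(s[1:] * np.conj(s[:-1]))      # lag-1 autocorrelation over pulses
        f_d = np.angle(acf) * f_prf / (2 * np.pi)  # Doppler frequency shift estimate
        v[g] = c * f_d / (2 * f_e)                 # velocity along the beam
    return v
```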
According to the theory of UDV, the allowable maximum measured velocity can be estimated by system parameters using the equation below:
v_max = c f_prf/(4 f_e)
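As an illustration, this aliasing limit can be evaluated for assumed parameters; the PRF value below is an assumption, not the setting used in the experiments:

```python
# Illustrative estimate of the maximum measurable velocity (aliasing limit).
c, f_e, f_prf = 2730.0, 8e6, 2000.0    # [m/s], [Hz], [Hz] (f_prf assumed)
v_max = c * f_prf / (4 * f_e)
d_max = c / (2 * f_prf)                # standard pulsed-Doppler range limit
print(f"v_max = {v_max*1e3:.1f} mm/s, unambiguous depth = {d_max*1e3:.0f} mm")
```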
§.§ Sensor scheme and post-processing
Traditional two-dimensional ultrasonic velocity measurement typically employs a sequential time-division multiplexing (TDM) scheme<cit.>. By controlling the minimum spacing and time difference between the piezoelectric elements, and having each piezoelectric element alternately transmit and receive according to the measurement scheme, sound field overlap is avoided to achieve higher spatial resolution. TDM scheme helps reduce ultrasonic crosstalk between piezoelectric elements to a certain extent. However, for the complete two-dimensional flow field, the measurement lines corresponding to velocity components in the same direction are generated in an alternating manner, lacking simultaneity. Furthermore, increasing the number of piezoelectric elements extends the measurement time, causing a significant delay between the first and last measurement lines. This considerable decrease in temporal resolution makes the technique unsuitable for turbulent flows requiring high temporal resolution.
In the present study, as shown in figure <ref>, the measurement data from all piezoelectric elements of a single ultrasonic sensor are combined and processed so that, by optimizing the algorithm, the velocity components along all measurement lines in a single direction are obtained at once. Therefore, adjacent piezoelectric elements in the same direction can work simultaneously without influencing one another. On the other hand, by arranging the ultrasonic array sensors with different emission frequencies in mutually orthogonal directions, completely distinct ultrasonic signals are produced. This guarantees that, upon reception of an echo signal, the frequency matching is performed by the characteristics of the sensor itself. In this way, the alternating operation of the perpendicular array sensors, as used in the sequential time-division multiplexing scheme, is no longer necessary. As a result, all piezoelectric elements in the mutually perpendicular ultrasonic arrays work simultaneously. This significantly reduces the time wasted on alternating measurements and greatly improves the temporal resolution, while maintaining the spatial resolution of the two-dimensional velocity measurements.
Subsequently, the collected echo data is processed offline using digital signal processing techniques, resulting in the acquisition of the velocity component along a single direction within the flow field. In figure <ref>, it can be observed that two LAUSs are orthogonally arranged to obtain the two-dimensional flow field in the plane. However, ultrasound parameters impact the axial resolution, leading to discrepancies between measurement points along each measurement line and the grid point positions. To address this, a post-processing algorithm was developed to make the necessary adjustments. This algorithm allows for the recombination of a 64 × 64 two-dimensional velocity vector field (2D-2C). The resulting vector field combines all the measurement data and is used for flow field visualization. For a specific experimental setup, the positions of all array sensors are fixed and aligned. Subsequently, the measurement volume is overlapped with the sensor positions to form an equidistant grid (3.0 × 3.0 mm^2). All measurement lines are combined based on their respective geometric positions and orientations. Data interpolation is performed along the axial direction and algorithm parameters are adjusted for accurate velocity synthesis.
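A minimal sketch of such a recombination step is given below. The indexing conventions (which array provides which component and how the lines are stacked) are assumptions made for illustration; the actual post-processing of the MPUDV system may differ in its interpolation scheme:

```python
import numpy as np

def combine_to_2d2c(vx_lines, vy_lines, depths_x, depths_y, grid):
    """Resample two orthogonal sets of 1-C profiles onto a common grid.

    vx_lines : (64, n_gates_x) horizontal component, one row per measurement
               line of the X-facing array (lines assumed stacked in Y).
    vy_lines : (64, n_gates_y) vertical component from the Y-facing array
               (lines assumed stacked in X).
    depths_x, depths_y : increasing gate positions along the respective beams [mm].
    grid : (64,) positions of the equidistant 3.0 mm grid [mm].
    Returns Vx, Vy with shape (64, 64), i.e. a 2D-2C vector field.
    """
    n = len(grid)
    Vx = np.empty((n, n))
    Vy = np.empty((n, n))
    for j in range(n):
        # interpolate each line's gate data onto the common grid positions
        Vx[j, :] = np.interp(grid, depths_x, vx_lines[j])
        Vy[:, j] = np.interp(grid, depths_y, vy_lines[j])
    return Vx, Vy
```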
§ EXPERIMENTAL SYSTEM
To validate the applicability of the MPUDV system, we built an experimental system to measure the velocity distribution in a cavity flow. As shown in figure <ref>, the experimental system consists of a mechanical pump, a flowmeter and a rectangular cavity made of 8 mm thick acrylic glass as the test section. A circulating flow is maintained in the experimental system by the mechanical pump. The rectangular cavity adopted as the test section has a lateral width L = 220 mm along the bottom tube direction, a vertical height D = 220 mm, and a spanwise width B = 20 mm, resulting in an aspect ratio of 1:1 and a spanwise aspect ratio of 0.91:1. The cross-section of the bottom tube is a 20 × 20 mm^2 rectangle. The test section is connected to two square tubes with a size of 20 × 20 mm^2. At the inlet, a square tube with a length of 300 mm is used to generate a uniform jet flow driving the flow inside the test section. Here, a honeycomb-like orifice-plate structure of approximately 60 mm in length is inserted to ensure that the flow inside the square tube is fully developed. The outlet square tube only serves to connect the test section back to the flow loop. The mechanical pump drives the flow circulation of the entire experimental system, with a flow rate ranging from 0.035 to 2.700 L/min. Water and GaInSn are used separately as working fluids in this system for different purposes.
The LAUSs arranged in the X and Y directions are mechanically fixed in close contact with the top and sides of the square cavity to measure the internal flow field, see Fig.<ref>(b). Table <ref> shows the ultrasonic measurement parameters.
As an initial step, the measurement accuracy of the MPUDV system is validated against Particle Image Velocimetry (PIV), which enables the simultaneous acquisition of flow field data. The PIV system consists of a high-speed camera, a continuous laser source, and an image acquisition system. The high-speed camera used in this system is the Revealer 5KF10-M series, capable of capturing images up to a maximum resolution of 1280 × 860 pixels (with a pixel size of 13.7 μm) and achieving a maximum frame rate of 3600 fps. It was positioned directly in front of the measurement area within the cavity. In the experiment, images were captured at a size of 860 × 860 pixels, covering an area of 220 × 220 mm^2. The capture rate was set to 50 fps to match the highest flow rate in the experiment. Additionally, velocity profiles were obtained and processed from 2000 images for each experimental condition, corresponding to a total duration of 40 s. A 532 nm continuous laser (model SM-SEMI-10M) with a power of 10 W was employed. The laser was emitted from the left side of the cavity, directed towards its center, to illuminate the measurement plane of the fluid. The tracer particle model is MV-H0520, primarily composed of glass (a sintered mixture of sodium silicate, sodium carbonate, and silicon dioxide). Its refractive index is 1.33, with a particle size of 5-20 μm. The density ρ_p=1.05 g/cm^3, which is close to that of water, ensures good tracking capability. The image analysis and processing were performed using PIVlab in Matlab. By applying correlation analysis to successive frames of PIV images, the mean particle velocity within each interrogation region is calculated, which can be regarded as the velocity field.
§ RESULTS AND DISCUSSION
§.§ Validation of MPUDV system
We carried out the validation experiments by filling the experimental system shown in figure <ref> with water containing tracer particles. According to the previous publication, the flow inside the cavity test section should be in a quasi-two-dimensional (Q2D) state with an approximately zero velocity component along the z direction. Changing the flow rate results in a corresponding variation of the Re numbers of the experiments, which range from 9103 to 24123. The Re number in the present square cavity is defined as (ρ U_b L)/μ, where U_b is the average fluid velocity within the square tube, L is the lateral width of the cavity, ρ is the fluid density, and μ is the dynamic viscosity of the fluid.
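For a rough consistency check, the Re number can be related to the pump flow rate as sketched below; the fluid properties are nominal literature values and are assumptions, not measured quantities:

```python
# Rough check of the Reynolds-number range from the pump flow rate.
def reynolds(flow_lpm, rho, mu, duct=0.020, L=0.220):
    Q = flow_lpm / 1000.0 / 60.0      # volumetric flow rate [m^3/s]
    U_b = Q / (duct * duct)           # bulk velocity in the 20 x 20 mm tube [m/s]
    return rho * U_b * L / mu

print(reynolds(1.0,  rho=998.0,  mu=1.0e-3))   # water,  ~1 L/min   -> Re ~ 9e3
print(reynolds(0.35, rho=6360.0, mu=2.1e-3))   # GaInSn, ~0.35 L/min -> Re ~ 1e4
```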
§.§.§ Visualization of the water flow
The flow field measurement area is restricted within the range of -96 mm ≤ X ≤ 96 mm and -96 mm ≤ Y ≤ 96 mm, allowing for a clear observation of the rotating flow within the square cavity.
Figure <ref> illustrates the time-averaged velocity vector field in 2D-2C as measured by two distinct methods: the well-developed PIV method and the MPUDV method developed in the present paper. It can be seen clearly that the velocity vectors obtained by PIV and MPUDV are nearly the same. Furthermore, the velocity vectors distributed along a measured line, Y = -37.5 mm, are plotted in figure <ref> to clarify the differences in velocity measurement by PIV and MPUDV.
Therefore, the experimental results indicate that the two measurement techniques exhibit excellent consistency in velocity measurements of the two-dimensional time-averaged flow field within the square cavity, with good agreement in both magnitude and direction. Furthermore, it can maintain a comparable level of measurement accuracy to PIV. Subsequently, by calculating the relative errors (|V_MPUDV-V_PIV |/V_MPUDV) of velocity measurements at all grid points and then averaging these results, it is demonstrated that the time-averaged velocity measurement error is maintained at less than 3%.
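A possible implementation of this error metric is sketched below; whether the comparison is performed on velocity magnitudes or on individual components is an assumption made here for illustration:

```python
import numpy as np

def mean_relative_error(V_udv, V_piv, eps=1e-9):
    """Average |V_MPUDV - V_PIV| / |V_MPUDV| over all grid points,
    using the velocity magnitudes of (..., 2) arrays holding (Vx, Vy)."""
    mag_udv = np.hypot(V_udv[..., 0], V_udv[..., 1])
    mag_piv = np.hypot(V_piv[..., 0], V_piv[..., 1])
    return np.mean(np.abs(mag_udv - mag_piv) / (mag_udv + eps))
```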
§.§.§ Quantitative comparison of velocities
Since the PIV system and the MPUDV system can work simultaneously via the time synchronizer, a comparison of instantaneous velocities is possible in the present study. Thus, to further validate the MPUDV method, we compare the velocities obtained from MPUDV and PIV as functions of Re and time at a typical measurement point, X = 73.5 mm, Y = 73.5 mm, displayed in figure <ref>. The MPUDV system is capable of accurately capturing the subtle variations of the time-averaged velocity of the internal flow field of the square cavity at different Re numbers, exhibiting millimeter-level precision. The velocity results obtained from both measurement techniques also exhibit high consistency in their temporal variations, validating the temporal accuracy of the MPUDV system. Similarly, by calculating and averaging the relative error at each instant (200 s in total), the relative measurement error of the MPUDV system compared with the well-established PIV system remains within 3% at high temporal resolution, demonstrating precise measurement of the instantaneous flow field.
§.§ Liquid Metal Measurement
As illustrated in the previous section, the present MPUDV system can obtain the 2D-2C velocity field of a water flow with high accuracy compared to the data from the PIV system. Therefore, we are confident that the MPUDV system can be effectively applied to measuring two-dimensional velocity fields in liquid metals. We then changed the working liquid in the flow system from water to the liquid metal GaInSn and systematically measured the liquid metal cavity flow in the same range of Re = 9103-24123. The two-dimensional velocity profile inside the square cavity, the size of the secondary eddies and the vortex core position with changing Re were obtained and are presented in detail in the following subsections. Regarding the tracer particles in the liquid metal, naturally suspended oxide particles serve as scatterers for the MPUDV measurements.
§.§.§ Visualization of liquid metal flow
The 2D-2C flow fields of the liquid metal flow measured by MPUDV at two typical Re numbers are plotted in figure <ref>. It can be seen that the flow strength increases with the Re number, while a large vortex is always present in the cavity. However, it is challenging to discern the secondary eddies in the four corners based only on the information presented in figure <ref>.
We then analyze the time-averaged velocity along the horizontal center line and the upper part of the vertical center line of the cavity flow field at Re = 12147, 17296, 22757, plotted in figure <ref>. The X and Y components of the velocity were non-dimensionalized using the average flow velocity of the bottom tube, U_b. It can be seen from the velocity distribution that the vortex core remains on the right side of the cavity, with higher velocities in the right and top regions of the cavity. With increasing Re, the formation of secondary eddies leads to noticeable changes in the velocity distribution within the cavity. This is reflected in a lower velocity at the center of the horizontal center line and higher velocities near the cavity boundaries. The location of the peak velocity on the upper part of the vertical center line shifts noticeably towards the upper boundary of the cavity. Moreover, the overall velocity distribution changes as a consequence of the displacement of the vortex core. The vortex positions are discussed in detail in the following sections.
§.§.§ Characteristics of secondary eddy
Regarding the cavity flow, the most striking feature is the variety of secondary eddies when the Re number increases to a certain value. Therefore, we utilized the MPUDV system for corresponding studies. Experimental observations were conducted to study the variations in the size of the Downstream Secondary Eddy (DSE) situated in the upper right region of the cavity and the Upstream Secondary Eddy (USE) in the upper left region. The lateral and longitudinal sizes of secondary eddies were nondimensionalized using the cavity lateral width L and vertical height D. The objective of this study was to demonstrate the full range of high-resolution measurement capabilities of the MPUDV system.
In figure <ref>, with increasing Re, the longitudinal size Y_d of the DSE gradually decreases, while the lateral size X_d first increases and then decreases. The lateral size X_u and the longitudinal size Y_u of the USE evolve in a generally consistent manner. The difference is that X_d of the DSE is smaller than Y_d, whereas the opposite holds for the USE. Additionally, the USE tends to form at higher Re than the DSE, and the overall variation trends are consistent with the PIV measurement results in water; however, due to differences in physical properties, the vortex sizes in GaInSn are larger, see Fig <ref>(a), (b). Furthermore, the ratios of the lateral to longitudinal sizes of the DSE and USE are analyzed in Fig. <ref>(c). In both GaInSn and water, the evolution of the secondary eddies can be roughly divided into two stages. During the initial stage of gradual formation of the secondary eddies, the DSE tends to change faster in its lateral size X_d, while the USE tends to change in its longitudinal size Y_u. Subsequently, X_d/Y_d and X_u/Y_u remain almost constant, reflecting synchronized changes of the lateral and longitudinal sizes.
§.§.§ Displacement of vortex core
The evolution of the secondary eddies inside the square cavity is closely related to the continuous displacement of the primary circulation eddy. The trend of the non-dimensionalized vortex core position (VCP) of the primary circulation eddy with Re was measured repeatedly, see Fig.<ref>(a). An increase in Re leads to a shift of the vortex core position from the lower-right to the upper-right of the cavity, see Fig.<ref>(b). Starting from Re=14565, due to the gradual formation of the USE, the movement of the vortex core accelerates significantly over a certain range, resulting in changes in the velocity distribution within the square cavity. While the specific vortex core positions in GaInSn and water do not coincide exactly, the overall movement patterns remain generally consistent.
§ CONCLUSION AND OUTLOOK
In the present study, we developed the Multichannel Pulsed Ultrasonic Doppler Velocimetry (MPUDV) system to obtain the 2D-2C velocity field for opaque liquid metal with high temporal and spatial resolution. The main conclusion can be summarized as follows:
By combining the principle of pulsed ultrasonic Doppler with array sensors and using two orthogonally arranged linear ultrasonic arrays, up to 128 piezoelectric elements can be driven. This allows the simultaneous measurement of two-dimensional velocity components at 64 × 64 grid points across a measurement plane of 192 × 192 mm^2, enabling the capture of flow phenomena and the reconstruction of two-dimensional velocity profiles. In comparison to existing ultrasonic measurement systems, the algorithms and the methodology of operating the sensors at different frequencies simultaneously achieve a higher temporal resolution. The present MPUDV was validated against the well-developed Particle Image Velocimetry (PIV) in a flow system filled with water. The results demonstrated that the velocity data obtained from the two measurement systems are highly consistent, with the MPUDV system showing an accuracy within 3% for both time-averaged velocities and transient changes.
Furthermore, using the same experimental system with GaInSn as a substitute for water, we applied the MPUDV to measure the 2D-2C velocity field in an opaque liquid metal. The experimental validation of the cavity flow using GaInSn as the working fluid demonstrated the excellent applicability of MPUDV for measuring liquid metal flows. The experimental results indicate that the flow within the cavity exhibits quasi-two-dimensional behavior. With increasing Re, two secondary eddies (DSE and USE) gradually form within the cavity. The shifting of the vortex core position of the primary circulation leads to different evolution trends of the DSE and USE, causing the velocity distribution to become more concentrated towards the boundaries of the cavity. The variation is consistent with the PIV measurement results in water, which further emphasizes the reliability of the MPUDV system.
The MPUDV has considerable flexibility in measurement schemes and experimental setups, allowing easy adaptation to various measurement experiments. In addition, the present measurement system has a high temporal resolution, which makes it possible to visualize liquid metal flow under turbulent conditions. To further develop the MPUDV system, it is necessary to extend the operating temperature range of the ultrasonic sensors to allow measurements in very high-temperature environments. This will be of significant importance for research related to liquid metal blankets for nuclear fusion.
Acknowledgements
The authors gratefully acknowledge support from the National Key Research and Development Program of China (no. 2022YFE03130000), NSFC (nos 51927812, 52176089, 52222607), and the Young Talent Support Plan of Xi’an Jiaotong University.
|
http://arxiv.org/abs/2409.03600v1 | 20240905145941 | TCDiff: Triple Condition Diffusion Model with 3D Constraints for Stylizing Synthetic Faces | [
"Bernardo Biesseck",
"Pedro Vidal",
"Luiz Coelho",
"Roger Granada",
"David Menotti|"
] | cs.CV | [
"cs.CV"
] |
TCDiff: Triple Condition Diffusion Model with
3D Constraints for Stylizing Synthetic Faces
SIBGRAPI Paper ID:
===========================================================================================
§ ABSTRACT
A robust face recognition model must be trained using datasets that include a large number of subjects and numerous samples per subject under varying conditions (such as pose, expression, age, noise, and occlusion).
Due to ethical and privacy concerns, large-scale real face datasets have been discontinued, such as MS1MV3, and synthetic face generators have been proposed, utilizing GANs and Diffusion Models, such as SYNFace, SFace, DigiFace-1M, IDiff-Face, DCFace, and GANDiffFace, aiming to supply this demand.
Some of these methods can produce high-fidelity realistic faces, but with low intra-class variance, while others generate high-variance faces with low identity consistency.
In this paper, we propose a Triple Condition Diffusion Model (TCDiff) to improve face style transfer from real to synthetic faces through 2D and 3D facial constraints, enhancing face identity consistency while keeping the necessary high intra-class variance.
Face recognition experiments using 1k, 2k, and 5k classes of our new dataset for training outperform state-of-the-art synthetic datasets in real face benchmarks such as LFW, CFP-FP, AgeDB, and BUPT. Our source code is available at: .
§ INTRODUCTION
In recent years, the availability of large face recognition datasets containing thousands of real faces, such as CASIA-WebFace <cit.>, VGGFace2 <cit.>, MS1MV3 <cit.>, WebFace260M <cit.>, and Glint360K <cit.>, has contributed to remarkable advancements in face recognition across various challenging domains, including pose, age, occlusions, and noise.
With such data, deep neural networks trained with sophisticated angular margin loss functions, such as SphereFace <cit.>, CosFace <cit.>, ArcFace <cit.>, CurricularFace <cit.>, MagFace <cit.> and AdaFace <cit.>, have achieved impressive performances on different benchmarks.
However, datasets of this nature present critical ethical, annotation, and bias problems <cit.>.
Furthermore, the long-tailed distribution of samples in many datasets poses additional challenges, necessitating careful network architecture and loss function design to ensure the robustness of model generalization.
These challenges also make it difficult to explore facial attribute influences like expression, pose, and illumination. In contrast, learning-based face recognition models encode facial images into fixed-dimensional embedding vectors, enabling various tasks like identification and verification.
While publicly available datasets have driven recent progress, they come with associated problems.
Synthetic datasets offer a potential solution, providing privacy benefits, virtually unlimited data generation, and control over demographic characteristics.
This contrasts with real-world datasets, which are constrained by privacy regulations and representational biases.
Due to such advantages, synthetic faces in face recognition have attracted attention for their potential to mitigate privacy concerns and long-standing dataset biases, such as long-tail distributions and demographic imbalances. Recently, face recognition competitions using synthetic faces have been held, such as FRCSyn <cit.> <cit.> and SDFR <cit.>, showing the increasing interest of the research community on this topic.
Generating synthetic faces and manipulating attributes such as pose, expression, age, noise, and occlusion with high visual fidelity and identity consistency is a challenging task due to the ill-posed nature of representing 3D objects in 2D planes. Therefore, 3D facial constraints might improve model learning and reduce facial inconsistencies.
In this paper, we propose a Triple Condition Diffusion Model (TCDiff) to stylize a synthetic identity face with real style attributes from real faces, such as pose, expression, age, noise, occlusion, shadow, hair, etc., with 2D and 3D consistency constraints, aiming to enhance intra-class identity consistency.
Fig. <ref> presents an overview of our method and the constraints computed with identity image (X_id), style image (X_sty), and stylized image (X̂_0).
Our experimental results show that enhancing intra-class identity consistency improves synthetic dataset quality when training face recognition models with few classes.
§ RELATED WORK
Face synthesis has emerged as a prominent area of research, driven by advancements in deep generative models like GANs <cit.> and Diffusion Models <cit.>. Such methods excel at generating high-quality facial images with unique identities. However, they often lack the intra-class variance necessary to train powerful face recognition models. In this regard, recent approaches explore such variance in real-face datasets, mixing synthetic and real images to generate multiple samples from the same synthetic subject.
SynFace <cit.> proposes a Mixup Face Generator designed to create synthetic face images with different identities. To mitigate the domain gap between the synthetic and real face data, the method incorporates a Domain Mixup module regularized by an angular margin loss. In contrast, SFace <cit.> trains a StyleGAN2-ADA <cit.> using identity labels as conditional constraints.
These constraints are similarly regularized by an angular margin loss.
DigiFace-1M <cit.> employs the 3DMM-based model FaceSynthetics <cit.> to generate multiple synthetic faces, varying their expression and pose parameters. The rendering pipeline from FaceSynthetics further enhances flexibility by allowing modifications in the background and illumination settings in the images. After generating original images, data augmentation techniques including flipping, cropping, adding noise, blurring, and warping are applied to improve the face recognition performance. Despite the 3D consistency in images, this dataset has limitations in appearance due to the intrinsic synthetic texture of faces.
GANDiffFace <cit.> combines the strengths of GANs and Diffusion Models to produce realistic faces while incorporating intra-class variance. Initially, synthetic faces are generated using StyleGAN3 <cit.> trained on the FFHQ dataset <cit.> and grouped based on extracted facial attributes such as pose, expression, illumination, gender, and race. Support Vector Machine (SVM) classifiers are trained to distinguish each group. The normal vectors of the resulting separating hyperplanes are used as directions to edit the facial attributes of faces in latent space. Despite the realistic appearance of the generated synthetic faces, a performance gap remains when face recognition models trained on their synthetic dataset are evaluated on real-face benchmarks.
IDiffFace <cit.> uses a Diffusion Model trained on the FFHQ <cit.> real face dataset to create new synthetic faces. These faces are generated using a U-Net-based architecture enriched with residual and attention blocks to encourage the model to improve the intra-class variance generation ability. To prevent overfitting and ensure diverse outputs, they also propose a Contextual Partial Dropout (CPD) technique.
DCFace <cit.> introduces an approach to minimize the distribution gap between synthetic and real face datasets by employing a diffusion model. This model integrates visual constraints to transfer the stylistic characteristics of real faces onto synthetic faces, thereby enhancing the intra-class variance. Initially trained on the CASIA-WebFace <cit.> real face dataset, the model utilizes statistics derived from intermediate features of images, assuming these contain style information to be transferred to any other identity. Despite these efforts, some artifacts and identity inconsistencies persist in the synthetic output.
Fig. <ref> illustrates some samples generated by the aforementioned methods. Each row corresponds to samples of distinct synthetic subjects, while columns contain different samples of the same subject. While the limited number of images may not be sufficient to provide an accurate visual analysis, they offer some initial intuitions about their characteristics, such as intra-class variance and identity consistency.
§ FACE STYLE TRANSFER
Image style transfer is a technique that generates novel images by merging the content of one image with the stylistic elements of another <cit.>. This process leverages Convolutional Neural Networks (CNNs) pre-trained on image classification tasks to extract hierarchical intermediate feature representations. The content of an image is captured by the higher layers of the network, which encode the image's structure and objects, while the style is represented by the correlations between feature maps in the lower layers, known as Gram matrices. To achieve style transfer, the technique minimizes a loss function that combines the content loss, which measures the difference between content representations of the original and generated images, and style loss, which quantifies the difference between style representations of the original and generated images. By iteratively adjusting a white noise image based in this combined loss function, the network gradually synthesizes the final output that maintains the content of one image while adopting the style of another.
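As a generic illustration of this idea (and not of the specific method used later in this paper), a minimal Gatys-style content/style loss can be sketched as follows; the feature lists are assumed to come from a pre-trained CNN, and the weighting factors are arbitrary:

```python
import torch

def gram_matrix(feat):                        # feat: (C, H, W) feature maps
    C, H, W = feat.shape
    F = feat.reshape(C, H * W)
    return F @ F.t() / (C * H * W)            # channel-wise correlations

def style_content_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # content: compare high-layer activations directly
    content_loss = torch.mean((gen_feats[-1] - content_feats[-1]) ** 2)
    # style: compare Gram matrices of lower-layer activations
    style_loss = sum(torch.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
                     for g, s in zip(gen_feats[:-1], style_feats[:-1]))
    return alpha * content_loss + beta * style_loss
```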
In the face recognition field, a person's identity can be expressed mainly by facial parts such as eyes, nose, lips, eyebrows, etc., and their spatial position in the face. Meanwhile, style is related to facial pose, expression, age, noise, occlusion, color, etc <cit.>. Achieving a perfect disentanglement of identity and style representation remains a significant challenge in deep learning. Existing style transfer methods aim to manage this tradeoff depending on the final goal <cit.>.
§ PROPOSED APPROACH
For face style extraction, we adopt the proposed model E_sty of DCFace <cit.>, which uses intermediate feature maps I_sty∈ℝ^C × H × W extracted with a pre-trained and fixed weights face recognition model F_s from a given input face image X_sty, where C, H and W are the number of channels, height, and width of the feature maps, respectively. Each feature map is divided into a grid k × k and each element I_sty^k_i∈ℝ^C ×H/k×W/k is mapped on the mean and variance of I_sty^k_i as
Î^k_i = BN(Conv(ReLU(Dropout(I_sty^k_i)))),
μ_sty^k_i = SpatialMean(Î^k_i), σ_sty^k_i = SpatialStd(Î^k_i),
s^k_i = LN((W_1 ⊙μ_sty^k_i + W_2 ⊙σ_sty^k_i ) + P_emb),
E_sty(X_sty) := s = [s^1, s^2, s^k_i, ..., s^k × k, s'],
where s' corresponds to Î_sty^k_i being a global feature, where k = 1. P_emb∈ℝ^50 × C is a learned position embedding <cit.> that makes the model learn to extract patches styles according to their locations in X_sty . BN and LN are BatchNorm <cit.> and LayerNorm <cit.> operations.
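A rough sketch of such a patch-statistics style encoder is given below. The module is illustrative only: the grid size, layer dimensions, patch ordering and the size of the position embedding are assumptions and do not reproduce the exact DCFace implementation:

```python
import torch
import torch.nn as nn

class StyleEncoderSketch(nn.Module):
    """Patch-wise style statistics, loosely following the equations above.
    Assumes the feature map height and width are divisible by k."""
    def __init__(self, C=512, k=5):
        super().__init__()
        self.k = k
        # Dropout -> ReLU -> Conv -> BN, applied in that order
        self.pre = nn.Sequential(nn.Dropout(0.1), nn.ReLU(),
                                 nn.Conv2d(C, C, 1), nn.BatchNorm2d(C))
        self.w1 = nn.Parameter(torch.ones(C))     # W_1
        self.w2 = nn.Parameter(torch.ones(C))     # W_2
        self.pos = nn.Parameter(torch.zeros(k * k + 1, C))  # P_emb (size assumed)
        self.ln = nn.LayerNorm(C)

    def forward(self, feat):                      # feat: (B, C, H, W) from F_s
        patches = [feat]                          # global patch (the s' term)
        B, C, H, W = feat.shape
        hs, ws = H // self.k, W // self.k
        for i in range(self.k):
            for j in range(self.k):
                patches.append(feat[:, :, i*hs:(i+1)*hs, j*ws:(j+1)*ws])
        tokens = []
        for idx, p in enumerate(patches):
            p = self.pre(p)
            mu = p.mean(dim=(2, 3))               # spatial mean per channel
            sigma = p.std(dim=(2, 3))             # spatial std per channel
            tokens.append(self.ln(self.w1 * mu + self.w2 * sigma + self.pos[idx]))
        return torch.stack(tokens, dim=1)         # (B, k*k + 1, C) style tokens
```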
The face style embedding E_sty extracted from X_sty is then applied to an identity image X_id using a U-Net denoising diffusion probabilistic model (DDPM) <cit.> face mixer ϵ_θ(X_t, t, E_id(X_id), E_sty(X_sty)), whose architecture is illustrated in Fig. <ref>. X_t is a noisy version of X_sty at time-step t, and E_id is a face recognition model, ResNet50 <cit.>, responsible for extracting a discriminative identity embedding.
Given an identity image X_id and a style image X_sty, a new stylized image X̂_0 is obtained as
X̂_0 = (X_t - √(1 - α̅_t) ϵ_θ(X_t, t, X_id, X_sty)) / √(α̅_t).
where α̅_t is a pre-set variance scheduling scalar <cit.>.
To train the face mixer ϵ_θ, we first employ the mean squared error (MSE) loss L_MSE between style image X_sty and stylized image X̂_0
L_MSE = 1/(MN)∑_i=1^M∑_j=1^N( X_sty(i,j) - X̂_0(i,j) )^2
to enforce the model to preserve relevant style features of X_sty in X̂_̂0̂. Additionally, to balance identity and style features of X_id and X_sty in X̂_0, we also employ an identity loss L_ID through cosine similarity (CS)
L_ID = - γ_t CS(F(X_id), F(X̂_0))
- (1-γ_t)CS(F(X_sty), F(X̂_0))
where γ_t ∈ℝ | 0 ≤γ_t ≤ 1.
To enhance intra-class identity consistency when stylizing synthetic faces, we also propose to add a 3D facial shape loss L_3D
L_3D = √(∑ (x_id^3D - x̂_0^3D)^2)
which computes the Euclidean Distance between the 3DMM <cit.> shape feature vectors x_id^3D and x̂_0^3D, obtained from X_id and X̂_0. Due to the lack of large datasets containing both 2D and 3D scanned representations of real facial, we obtained 3D Morphable Model (3DMM) coefficients during training using the 3D face reconstruction model MICA <cit.>.
The shape of a face in a 3DMM representation is described by the positions of a set of 3D vertices S = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_n, y_n, z_n)^T ∈ℝ^3n. These vertices form a mesh that captures the geometric structure of the face. Mathematically, the shape vector S can be expressed as a linear combination of a mean shape S̅ and a set of shape basis vectors S_i:
S = S̅ + ∑_i=1^nα_i S_i,
where α_i are the shape coefficients that determine how much each basis vector S_i contributes to the final shape. The shape vector S consists of the 3D coordinates of all vertices in the mesh.
Similarly, the face texture is described by a vector T = (R_1, G_1, B_1, R_2, G_2, B_2, …, R_n, G_n, B_n)^T ∈ℝ^3n, which is a linear combination of a mean texture T̅ and a set of texture basis vectors T_i:
T = T̅ + ∑_i=1^mβ_i T_i,
where β_i are the texture coefficients that control the contribution of each texture basis vector T_i. The texture vector T consists of the RGB color values for each vertex in the mesh.
The basis vectors S_i and T_i, for shape and texture, are derived from a Principal Component Analysis (PCA) over a set of real 3D face scans, which allows representing new faces within the training set variance.
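The linear 3DMM reconstruction can be written compactly as in the sketch below; the array layout (vertices stacked into a 3n vector, basis vectors as columns) is an assumed convention:

```python
import numpy as np

def reconstruct_shape(S_mean, S_basis, alpha):
    """S = S_mean + sum_i alpha_i * S_i  (vertices stacked into a 3n vector).

    S_mean  : (3n,)   mean face shape
    S_basis : (3n, m) PCA shape basis vectors as columns
    alpha   : (m,)    shape coefficients
    """
    S = S_mean + S_basis @ alpha
    return S.reshape(-1, 3)        # (n, 3) vertex coordinates of the mesh
```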
To obtain the 3DMM facial shape coefficients x_id^3D and x̂_0^3D from the identity face X_id and the stylized face X̂_0, the MICA <cit.> method uses the face embedding produced by a state-of-the-art 2D face recognition network <cit.> as input to a small mapping network z = M(ArcFace(I)) ∈ℝ^300. Therefore, x_id^3D = M(ArcFace(X_id)) and x̂_0^3D = M(ArcFace(X̂_0)).
Finally, our total loss function L_T is defined as
L_T = L_MSE + λ_id L_ID + λ_3D L_3D,
where λ_id and λ_3D are scaling parameters to balance the importance of 2D and 3D facial identity constraints.
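A compact sketch of how these three terms could be combined in a training step is shown below; face_model and mica_model are placeholders for the (frozen) face recognition and MICA shape networks, and the default weights are example values:

```python
import torch
import torch.nn.functional as F

def total_loss(x_styled, x_sty, x_id, face_model, mica_model,
               gamma_t, lambda_id=0.05, lambda_3d=0.01):
    """L_T = L_MSE + lambda_id * L_ID + lambda_3D * L_3D for one batch.
    face_model returns identity embeddings, mica_model returns 3DMM shape codes."""
    l_mse = F.mse_loss(x_styled, x_sty)

    e_id, e_sty, e_out = face_model(x_id), face_model(x_sty), face_model(x_styled)
    l_id = -(gamma_t * F.cosine_similarity(e_id, e_out).mean()
             + (1.0 - gamma_t) * F.cosine_similarity(e_sty, e_out).mean())

    s_id, s_out = mica_model(x_id), mica_model(x_styled)
    l_3d = torch.norm(s_id - s_out, dim=-1).mean()   # Euclidean distance of shapes

    return l_mse + lambda_id * l_id + lambda_3d * l_3d
```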
§ EXPERIMENTS
This section presents our experimental setup, datasets, obtained results, and qualitative analysis. To fairly evaluate the robustness of our proposed TCDiff face mixer, we adopted the same protocol as DCFace <cit.>, using the real-face dataset CASIA-WebFace <cit.> as the training set and a 5 × 5 grid for style feature extraction. Our model was trained for 10 epochs with a batch size of 16 using the AdamW optimizer <cit.> with a learning rate of 1e-4 on one NVIDIA GeForce RTX 3090 GPU. We set λ_ID=0.05 and varied λ_3D={0.001, 0.005, 0.01, 0.05} to analyse the impact of the 3D consistency constraints.
After training TCDiff, we selected the same 10k distinct synthetic identities of DCFace <cit.>, which were generated using the publicly released unconditional DDPM <cit.> trained on FFHQ <cit.>. Each synthetic identity was stylized with 50 randomly chosen real faces of CASIA-WebFace <cit.>, resulting in a new synthetic dataset of 500k images. Fig. <ref> shows 1 synthetic face image X_id, 16 real faces X_sty from CASIA-WebFace, and their corresponding 16 new samples of X_id.
We chose a ResNet50 <cit.> backbone and the ArcFace <cit.> loss as the Face Recognition (FR) model to evaluate the quality of the proposed synthetic dataset in cross-dataset scenarios, using seven different datasets for the face verification (1:1) task: LFW <cit.>, CFP-FF <cit.>, CPLFW <cit.>, CFP-FP <cit.>, AgeDB <cit.>, CALFW <cit.>, and BUPT-CBFace <cit.>.
These datasets are commonly applied in FR to validate or test models. Each dataset contains a verification protocol consisting of face pairs labeled as genuine (same person) or impostor (different person).
LFW (6k pairs) and CFP-FF (7k pairs) protocols focus mainly on frontal face verification, representing controlled scenarios. In contrast, CPLFW (6k pairs) and CFP-FP (7k pairs) contain faces with more varied poses to simulate in-the-wild scenarios. AgeDB (7k pairs) and CALFW (6k pairs) focus on comparing faces with large age differences, while BUPT-CBFace (8k pairs) contains the same number of pairs for 4 distinct ethnic groups: Asian, Caucasian, African, and Indian. All protocols were split into 10 folds, each containing 50% genuine and 50% impostor pairs. Following the cross-validation method, we use 9 folds to select the best threshold and 1 fold for the final test.
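The fold-wise threshold selection can be sketched as follows; the threshold grid and the use of cosine similarities are assumptions made for illustration:

```python
import numpy as np

def verification_accuracy(similarities, labels, n_folds=10):
    """Cross-validated 1:1 verification: pick the threshold on 9 folds,
    report accuracy on the held-out fold (labels: 1 genuine, 0 impostor)."""
    idx = np.arange(len(labels))
    folds = np.array_split(idx, n_folds)
    thresholds = np.linspace(-1.0, 1.0, 400)
    accs = []
    for f in range(n_folds):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(n_folds) if g != f])
        train_acc = [np.mean((similarities[train] > t) == labels[train])
                     for t in thresholds]
        best_t = thresholds[int(np.argmax(train_acc))]
        accs.append(np.mean((similarities[test] > best_t) == labels[test]))
    return float(np.mean(accs))
```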
§ DISCUSSION
In this section, we present a qualitative and quantitative analysis of the results obtained with the synthetic datasets generated with DCFace <cit.> and our proposed model TCDiff for the face recognition task.
Even with few samples, we can visually observe in Fig. <ref> that synthetic faces stylized by DCFace <cit.> have low intra-class consistency compared to their corresponding original synthetic identity X_id. For instance, the stylized male faces 0, 8, and 10 seem to belong to distinct identities.
The same happens with the stylized female faces 2 and 9.
In contrast, our face mixer ϵ_θ tends to preserve identity regions such as the eyes, nose, and mouth of the original X_id in the stylized faces, enhancing the intra-class consistency imposed by L_3D when λ_3D varies from 0.001 to 0.01.
Stylized faces are completely degraded when λ_3D = 0.05 and this setting was ignored in our experiments.
Such a qualitative analysis is quantitatively confirmed in Fig. <ref>, where intra-class cosine similarities are presented.
Blue bars show the distribution of all pairwise intra-class similarities (N_i(N_i-1)/2 pairs for the i-th class, summed over the M classes, where M is the number of classes and N_i is the number of samples of the i-th class), while orange bars show the distribution of the mean intra-class similarities.
One can observe higher similarities in faces stylized with our model TCDiff, indicating a higher intra-class consistency due to the shape of eyes, nose, lips, skin color, pose, expression, and facial accessories.
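The intra-class similarity statistics shown in the figure can be computed along the following lines, assuming per-identity embedding matrices extracted with a face recognition model:

```python
import numpy as np

def intraclass_similarities(embeddings_per_class):
    """All pairwise cosine similarities within each class and the class means.
    embeddings_per_class: list of (N_i, d) arrays, one per synthetic identity."""
    all_sims, class_means = [], []
    for E in embeddings_per_class:
        E = E / np.linalg.norm(E, axis=1, keepdims=True)
        sims = E @ E.T
        iu = np.triu_indices(len(E), k=1)       # the N_i(N_i-1)/2 unique pairs
        all_sims.append(sims[iu])
        class_means.append(sims[iu].mean())
    return np.concatenate(all_sims), np.array(class_means)
```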
The quality of a face dataset for the FR task is assessed not just by the images themselves, but by the performance of FR models trained on it. Results in Table <ref> show that enhancing intra-class consistency improves synthetic dataset quality when training with 1k, 2k, and 5k classes. We hypothesize that such identity consistency improves inter-class separability when the number of distinct identities is low.
However, this improvement is surpassed when training with 10k classes, indicating that the inter-class variability is also an important property of synthetic datasets for face recognition.
ResNet50 performed slightly better on the CFP-FF <cit.> dataset when trained with images stylized by our model TCDiff (λ_3D=0.001), due to the higher proportion of frontal faces in that dataset.
§ CONCLUSION AND FUTURE WORK
In this work we propose TCDiff, a face style transfer trained with 2D and 3D facial constraints, aiming to improve the quality of synthetic datasets for face recognition. By increasing the importance of 3D constraints, our model can preserve identity features of the input synthetic face to be stylized, which enhances intra-class identity consistency.
This behavior contributes to increasing the quality of small synthetic datasets and might be explored in the future for more classes, as the interest in this field is growing due to the advantages of synthetic data. As a future step, facial expression and pose constraints might be added to the face styler model, aiming to balance better the importance of identity and style features in newly generated samples.
§ ACKNOWLEDGMENT
This work was supported by
a tripartite-contract, i.e., unico - idTech, UFPR (Federal University of Paraná), and FUNPAR (Fundação da Universidade Federal do Paraná).
We thank the Federal Institute of Mato Grosso (IFMT), Pontes e Lacerda, for supporting Bernardo Biesseck, and also thank the National Council for Scientific and Technological Development (CNPq) (# 315409/2023-1) for supporting Prof. David Menotti.
|
http://arxiv.org/abs/2409.02711v1 | 20240904134919 | Creating a Gen-AI based Track and Trace Assistant MVP (SuperTracy) for PostNL | [
"Mohammad Reshadati"
] | cs.AI | [
"cs.AI"
] |
Gen-AI based MVP: SuperTracy
M. Reshadati
Vrije Universiteit Amsterdam IT E-commerce at PostNL
[email protected]
Creating a Gen-AI based Track and Trace Assistant MVP (SuperTracy) for PostNL
Mohammad Reshadati
September 9, 2024
=============================================================================
§ ABSTRACT
The developments in the field of generative AI have brought many opportunities for companies, for instance to improve efficiency in customer service and to automate tasks. PostNL, the biggest parcel and E-commerce corporation of the Netherlands, wants to use generative AI to enhance the communication around track and trace of parcels. During the internship, a Minimal Viable Product (MVP) was created to showcase the value of using generative AI technologies to enhance parcel tracking, analyze the parcel's journey, and communicate about it in an easy-to-understand manner. The primary goal was to develop an in-house LLM-based system, reducing dependency on external platforms and establishing the feasibility of a dedicated generative AI team within the company. This multi-agent LLM-based system aimed to construct parcel journey stories and identify logistical disruptions with heightened efficiency and accuracy. The research involved deploying a sophisticated AI-driven communication system, employing Retrieval-Augmented Generation (RAG) for enhanced response precision, and optimizing large language models (LLMs) tailored to domain-specific tasks.
The MVP successfully implemented a multi-agent open-source LLM system, called SuperTracy. SuperTracy is capable of autonomously managing a broad spectrum of user inquiries and improving internal knowledge handling. Results and evaluation demonstrated technological innovation and feasibility, notably in communication about the track and trace of a parcel, which exceeded initial expectations. These advancements highlight the potential of AI-driven solutions in logistics, suggesting many opportunities for further refinement and broader implementation within PostNL’s operational framework.
§ ACKNOWLEDGEMENTS
My project was greatly facilitated by my supervisors, Banno Postma from VU Amsterdam and Jochem Roth from PostNL. I thank Banno Postma for his guidance, feedback and flexibility throughout the process. I thank the Gen-AI team for welcoming me with open arms to PostNL. My special thanks to Jochem Roth for this internship opportunity, his guidance and the trust he had in me to provide the MVP and present it for the Directors. I greatly value Chris Scholtens' expertise in clarifying PostNL's logistical network and Nick Smith for providing logistics data. I express my gratitude to Jasper Bosma for his critical evaluation of the SuperTracy project, as well as to his team members Antoine Donkers, Stefan de Jong, Sebastiaan van de Koppel, Kees-Jan de Gee, Floris Groot, Robbert-Jan Joling, and Michael Street for their essential evaluations.
§ INTRODUCTION
The developments in the field of generative AI has brought a lot of opportunities for companies. For instance, the logistics and postal sector has seen significant improvements in efficiency of customer service through the integration of AI technologies. As one of the leading postal and logistics companies in the Netherlands, PostNL has embraced these advancements to enhance the internal and external communication around track and trace of parcels. This thesis explores the development and implementation of a generative AI-based multi-agent large language model system designed to facilitate parcel-related inquiries at PostNL. By leveraging the capabilities of generative AI, this system aims to streamline communication, improve customer satisfaction and to help internal agents to understand the journey of the parcel in an easy manner.
§.§ PostNL
PostNL is the biggest mail, parcel and E-commerce corporation of the Netherlands, with
operations across the Benelux, Germany, Italy and the United Kingdom. It has a rich history of 225 years in the Netherlands. On an average weekday, 1.1 million parcels and 6.9 million letters are delivered throughout the Netherlands by PostNL. This volume is handled by 37 sorting centers, 11.000 letter boxes, 903 automated parcel lockers, 5.795 retail locations and approximately 33.500 employees <cit.>.
The business of PostNL is, at a high level, divided into 3 sections: parcels, mail and Cross Border Solutions (CBS). The scope of this research is parcels and E-commerce. The end-to-end process of parcel delivery can be quite complex, but very efficient at the same time. The key elements in the value chain are the following <cit.>:
* Collect In this first step the parcels are collected from the customers. In this process, the expectations of the customer have to be matched by timely pick-up and processing.
* Sort The second step is sorting and processing the parcels, based on destination and specific customer and consumer needs. An efficient sorting helps to ensure delivery to the right location on time.
* Deliver The final step is delivery, which is the moment of connection within the sender and receiver, and the final delivery of the parcel. As the delivery men are in each street in The Netherlands everyday, this gives space for additional societal services, like identifying loneliness in households and working together with charity organizations.
The execution of these vital steps are heavily influenced by technological developments.
The digital transformation that PostNL is going through is an important initiative to stay on top of these innovations <cit.>. Digitalisation helps in developing the core activities to provide smart E-commerce solutions to improve its competitive position. Logistics has developed from being a pure service providing activity, to becoming a key driver of digital and societal change. Technologies such as the Internet of Things (IoT), autonomous driving, big data and are inseparably intertwined with logistics. Recently, generative AI has emerged as another significant technology to contribute to advancements in this industry.
§.§ Problem Statement
The value chain of parcel delivery is described in the section above in a high-over and abstract way. However, the value chain is more complicated, depending on variables like customer and consumer preferences, the size of the parcel, the time of the year, the volume of parcels, which PostNL services are being executed and more. Throughout this process, logistic events are registered, also called 'waarnemingen' in Dutch. These logistic events represent diverse situations that can appear based on the variables. The logistic events are a code starting with a letter followed by 2 numbers. There are 400 unique logistic events. These individual events together form logistic event sequences, which describe the journey of the parcel from the moment of acceptance in the PostNL network up to delivery. There is a lot of variation possible in these sequences, resulting in hundreds to thousand different sequence possibilities. For instance, a package with a specific barcode during its journey can exhibit the following sequence of 'waarnemingen':
[A01, A98, A95, B01, G03, V06, A04, K50, B01, A96, J01, J40,
A19, J05, A19, H01, J30, B01, J17, B01, J01, J01, J40, A19, A19, J05, I01]
Each code in the sequence above has a descriptive meaning and a contextual explanation. One challenge is the diverse combination of codes in sequences, where the presence or absence of a code can be due to a mistake in the logistical operational process.
Within PostNL there are logistic business experts who can interpret these sequences and explain it to others. They know the context of the operational process behind the events. Certain combinations of logistic events in sequences can imply implicit knowledge that can only be understood and explained by these experts. They also know the full meaning of a logistic event that might not be well documented. These experts use an internal system called 'Tracy' where for the barcode of each parcel, you can look up the logistic events, along with description of the event, timestamps, where they happen (locations), source-system, and some customer and consumer information. For external communication with consumers on the status of the parcel, a small selection of these events are shared through the PostNL app notifications, through email or through the track-and-trace page on the PostNL website.
PostNL wants to see if Gen-AI can be used to make sense of these complex logistic event sequences in a easy to understand manner. You could say an elevated version of Tracy, with the name 'SuperTracy'. Such a system could be used internally for business use-cases, or externally for consumer facing communication about the status of the parcel.
§.§ Research Goal
The goal of this research is to explore the potential for forming a dedicated approach towards generative AI research, development, and engineering at PostNL, by inspiring business stakeholders on its potential. There are several sub-goals that contribute to this:
* Exploring valuable business use-cases solved by Gen-AI: In order to get dedication and interest of business stakeholders, it has to be shown that there are valuable use-cases that can be solved through Gen-AI. One of the use-cases is to improve the efficiency and accuracy of communication around parcel track and trace within the logistical ecosystem of PostNL through SuperTracy.
* Using in-house solutions: The MVP of SuperTracy is aimed to establish the viability of developing an entirely new in-house generative AI based system, eliminating reliance on external AI platforms like ChatGPT APIs or Amazon Bedrock services. Doing so will decrease the costs and make the usage of generative AI more approachable.
* Creating an MVP for the business stakeholders: Showing the value of a Gen-AI based solution to a business problem requires an MVP. This entails not only a technically sound product but also an evaluation that demonstrates its value. The evaluation has to show that SuperTracy can at least mimic parcel experts, or even enhance the understanding of parcel journeys and effectively identify and communicate issues in the logistics and delivery processes. The MVP should be suitable to present to business stakeholders to demonstrate this value.
§ LITERATURE STUDY
§.§ Generative AI and ChatGPT
Generative AI refers to a category of artificial intelligence algorithms that can generate new data or content that is similar to the data it was trained on. Unlike traditional AI, which typically focuses on identifying patterns and making decisions based on existing data, generative AI can create new, original content, such as text, images, music, and even code. This capability is powered by models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformer-based models like GPT (Generative Pre-trained Transformer). <cit.>.
ChatGPT is a notable example of a transformer-based generative AI model developed by OpenAI, which is specifically designed for conversational tasks. It has attracted worldwide attention for its capability of dealing with challenging language understanding and generation tasks in the form of conversations <cit.> <cit.>. This has also inspired businesses to make use of ChatGPT-like AI systems, as they can bring various efficiency gains. ChatGPT can automate various business tasks such as content creation, customer service, and data analysis, leading to improved productivity and cost savings. In addition, the model's ability to understand natural language and provide human-like responses can enhance customer engagement and satisfaction <cit.>.
§.§ Transformer Architectures and Large Language Models
ChatGPT produces its fluent, human-like responses through sophisticated natural language processing (NLP) techniques running on massive computational infrastructure. Its workings rely on neural transformer models and Large Language Models (LLMs). Transformer-based models are excellent at processing longer sequences of data, such as text, by using self-attention, which enables the model to focus on different areas of the input and learn long-range dependencies. Self-attention is a mechanism that allows the model to learn the importance of each word in the input sequence, regardless of its position <cit.>.
LLMs are constructed using the transformer architecture. LLMs are a type of AI model trained on massive amounts of text data to understand and generate human-like language. They have a large number of parameters, often in the billions, which enable them to capture intricate patterns in language. This scaling up has allowed LLMs to understand and generate text at a level comparable to humans <cit.>.
In short, transformers help the system focus on the important parts of the input text and understand its context and meaning. LLMs use this understanding to predict and generate the next word in a sentence, making the conversation flow naturally.
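To make the self-attention mechanism described above concrete, the following toy sketch implements scaled dot-product attention in plain NumPy. It is purely illustrative and not part of the SuperTracy codebase; the sequence length, embedding size, and random projection matrices are arbitrary assumptions.
[language=Python]
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise relevance between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                 # weighted sum of value vectors

# Toy example: a sequence of 4 tokens embedded in 8 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # (4, 8): one context-aware vector per token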
§.§ Open-sourced and closed-sourced LLMs
ChatGPT is a closed-source, general-purpose chatbot. Closed-source LLMs are also called 'commercial' or 'proprietary' LLMs. In general, closed-source LLM models perform well across diverse tasks, but they fail to capture in-depth domain-specific knowledge <cit.>. The same holds for other closed-source models, such as the other OpenAI GPT model families or Claude <cit.>. The usage of LLMs for specific use-cases can sometimes seem unattainable due to a lack of transparency, high cost and energy consumption, usage limits, and adherence to terms of service.
The recent emergence of highly capable open-source LLMs such as LLAMA 3, T5, MADLAD, and GEMMA 2 allow researchers and practitioners at large to easily obtain, customize, and deploy LLMs in more diverse environments and for more diverse and specific use cases <cit.>.
§.§ Making LLMs suitable for specific tasks through fine-tuning
There is a pressing need among enterprises to fine-tune LLMs so that they are trained on proprietary domain knowledge. Fine-tuning is the process of continuing the training of an already pre-trained model on a new dataset that is typically smaller and task-specific <cit.>. This allows the model to adjust its weights and parameters to better fit the nuances of the new data and the specific requirements of the target task. Although closed-source models such as OpenAI's could solve most of the use-cases, there is a high demand for domain-specific LLMs due to the data privacy and pricing concerns mentioned earlier. With in-house development, the data of an enterprise can stay on premise because the LLMs themselves run on premise. Fine-tuned LLMs provide quality and a custom feel to the stakeholder and also have low latency in displaying the results <cit.>.
§.§ Making LLMS suitable for specific tasks through Retrieval Augmented Generation
When an LLM that is not fine-tuned is used for domain-specific tasks and asked to handle queries beyond its training data or current information, hallucinations can occur <cit.>. Besides or instead of fine-tuning, another approach to make LLMs suitable for specific use-cases is the Retrieval-Augmented Generation (RAG) architecture. RAG enhances LLMs by retrieving relevant document chunks (in real time) from external knowledge bases through semantic similarity calculation <cit.>. In other words, RAG retrieves additional data and augments the existing knowledge of the LLM with it based on semantic similarity. In fine-tuning, the weights of the existing parameters of the LLM are adjusted to the learned knowledge, but no vectors are added; keeping the knowledge base up to date through fine-tuning would therefore be computationally expensive, whereas with RAG only the external knowledge base needs to be updated.
§.§ Enhancing the input of an LLM-based systems through prompt engineering
The input that is given to an LLM is also important for obtaining the desired output. This input is also called a prompt. Prompt engineering is the process of designing and refining input queries, or “prompts,” to elicit desired responses from LLMs <cit.>. In the literature, prompt engineering is usually discussed in two different contexts: it can refer to how a user of an LLM-based system phrases the desired task to the model in the best way <cit.> <cit.>, or to how the developer shapes the way the LLM-based system responds <cit.>. In the latter case, various prompt engineering techniques are available to guide the model effectively. Well-known methods are few-shot prompting, chain-of-thought prompting, and self-consistency <cit.>.
§.§ Enhancing the performance of LLM-based systems through Quantization
The performance of open-source models depends on the hardware and the available computational resources. Significant challenges can arise when attempting to leverage the full potential of transformer models in cases where memory or computational resources are limited, because advancements in transformer performance are accompanied by a corresponding increase in model size and computational cost <cit.>. Floating-point post-training quantization techniques can be used to compress transformers and meet the challenges of limited computational resources <cit.>. This approach enables the effective use of LLMs on hardware with constrained computational capabilities while maintaining the high quality of generative AI services. Additionally, it reduces the computational resources required for training and executing models <cit.>, resulting in cost savings.
§.§ Logistic event prediction through sequence to sequence prediction by T5
In the previous sections, the transformer architecture and LLMs like GPT-3 and GEMMA have been discussed. These models have demonstrated remarkable capabilities in various NLP tasks due to their ability to understand and generate human-like text. Transformer-based models are very versatile and diverse, making each of them suitable for different tasks. T5 (Text-to-Text Transfer Transformer), developed by Google, is a versatile language model that is trained in a "text-to-text" framework <cit.>. The key innovation of T5 is the formulation of all tasks as text generation problems. This means that every task, including text classification, summarizing, translation, and question answering, is cast into a text-to-text format. For example, instead of training T5 to answer questions directly, it is trained to generate the complete answer given the question and relevant context <cit.>.
For the prediction of future logistic events in a sequence, as shown in the Problem Statement, the T5 model can be used. Transformers are highly suitable for this task due to their ability to handle sequential data and capture long-range dependencies through self-attention mechanisms <cit.>. The T5 model excels in sequence prediction tasks and can be fine-tuned on specific datasets to improve accuracy <cit.>. Mathematically, the transformer architecture uses an encoder-decoder structure where both components utilize self-attention and feed-forward neural networks. The self-attention mechanism computes representations for each element in the sequence by considering the entire sequence context, enhancing the model's capability to predict the most likely sequence of logistic event codes. This mechanism can be described by the attention function, which maps a query and a set of key-value pairs to an output, computed as a weighted sum of the values, where the weights are derived from the query and corresponding key <cit.>.
§.§ LLMs and Multi-Agent systems
So far, various methods to make LLMs suitable for specific tasks have been discussed, such as fine-tuning and prompt engineering. As intelligent agents also focus on specific tasks, researchers have started to leverage LLMs to construct AI agents <cit.>. LLMs can be employed as the brain or controller of these agents and expand their perceptual and action space. Such LLM-based agents can exhibit reasoning and planning abilities through the prompt engineering techniques discussed earlier, such as Chain-of-Thought.

Building on the capabilities of a single LLM-based agent, LLM-based multi-agent systems have been proposed to leverage the collective intelligence and the specialized profiles and skills of multiple agents. Compared to systems using a single LLM-powered agent, multi-agent systems offer advanced capabilities by specializing LLMs into various distinct agents, each with different capabilities, and by enabling interactions among these diverse agents to simulate complex real-world environments effectively <cit.>.
§ THE SOLUTION
§.§ Data and knowledge Discovery
To make LLMs suitable for domain-specific tasks, the models have to be fine-tuned on relevant PostNL data. Several interviews were held with logistic event experts and data warehouse engineers to identify the existing datasets and to understand the operational process behind the data. The workings and background data of Tracy were also investigated. This resulted in the selection of the following datasets to build the solution with:
* Collo data: The 'Collo' dataset is a well known dataset in PostNL. It contains all the barcodes of parcels, along with all the logistic events that are registered throughout the journey of the parcel, from the moment of acceptance in the network up to delivery. This is an extensive data set, with 159 columns, with each row representing an event or the current state of a parcel.
* Abbreviations: Abbreviations are used extensively at PostNL, resulting in PostNL-specific terminology that is widespread in documentation and other data. This dataset contains a collection of abbreviations, along with a description and explanation of each. These abbreviations are also widely used throughout the Collo columns and data entries.
* Waarnemingen: This data set contains all the 400 unique 'waarnemingen' or logistic events, along with a description of what each mean. Each Waarneming code of a parcel forms a row in Collo. Additionally, each code falls into either the internal or external category. The internal category indicates that this event is only visible for PostNL employees through Tracy for example. The external category indicates that the logistic event is being shared externally with customers. For example through the PostNL app or email.
* Location data: The location master data set contains all the locations where PostNL does business. Examples are warehouses, sorting centres, distribution centres, hubs and more.
For this project, access to Tracy was provided. Tracy is the web application containing all the data on parcels in real time; it ingests Collo data. Tracy has been used throughout the process to look up barcodes for tests. The combination of the mentioned data sources allowed for a meaningful interpretation of the main dataset, Collo.
§.§ Data Preparation
The datasets procured from the data warehouse were initially in a raw format, necessitating extensive data preparation prior to further analysis. The initial phase involved Exploratory Data Analysis (EDA), through which a comprehensive understanding of the dataset was developed. This was followed by the data preparation phase, outlined as follows:
* Data Cleaning: This process addressed issues such as missing values and duplication of data points to ensure the integrity of the dataset.
* Statistical Analysis: Statistical summaries were performed to examine the distribution, mean, median, mode, and variance of the data. This analysis was critical for understanding the underlying patterns and anomalies within the dataset.
* Data Transformation:
* Language Standardization: Translated 132 columns of logistic parcel data from Dutch to English to establish a uniform language baseline essential for subsequent project implementation.
* Normalization and Cleansing: This step involved both normalizing and cleansing the textual data to enhance its suitability for analysis. Normalization tasks included case conversion to minimize case sensitivity issues and tokenization to structure the text into usable segments. Concurrently, the data was cleansed by removing punctuation and special characters to prevent potential data processing errors. These processes together ensured that the textual data was not only uniform but also clean and optimized for subsequent analytical tasks.
* Data Splitting: The dataset was segmented into training, validation, and test sets to support the development of robust predictive models.
These preparatory steps were instrumental in ensuring that the data was aptly conditioned for the sophisticated analyses and modeling that followed.
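The sketch below illustrates these preparation steps with pandas and scikit-learn. The column names and the synthetic rows are illustrative assumptions, since the actual Collo schema (159 columns) cannot be reproduced here.
[language=Python]
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Collo extract; column names are assumptions.
codes = ["A01", "A98", "B01", "J05", "I01"]
df = pd.DataFrame({
    "barcode": [f"3SABC{i:09d}" for i in range(20)],
    "waarneming_code": [codes[i % len(codes)] for i in range(20)],
    "timestamp": [f"2024-05-{d:02d} 08:00" for d in range(1, 21)],
    "event_description": ["Pakket gesorteerd @ sorteercentrum!"] * 20,
})

# Data cleaning: drop exact duplicates and rows missing essential fields.
df = df.drop_duplicates().dropna(subset=["barcode", "waarneming_code", "timestamp"])

# Normalization and cleansing: case conversion, removal of punctuation/special characters.
df["event_description"] = (
    df["event_description"]
    .str.lower()
    .str.replace(r"[^\w\s]", "", regex=True)
    .str.strip()
)

# Data splitting into training, validation and test sets (80/10/10).
train, rest = train_test_split(df, test_size=0.2, random_state=42)
valid, test = train_test_split(rest, test_size=0.5, random_state=42)
print(len(train), len(valid), len(test))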
§.§ Model design
§.§.§ Expected model output
The first step of design thinking <cit.> is to understand the problem and then to make clear what the concept of the solution can be, i.e. what kind of output is expected. This was achieved through discussions with domain experts, creating a shared agreement on the possible expected outcomes of the final system. The agreed goal of the system is to simulate the comprehension and narration abilities of a logistic domain expert at PostNL, who answers in a user-friendly and helpful manner.
§.§.§ Overall Model design
To achieve the goals stated in section 1.3, a solution is proposed that builds on the findings of the literature study in section 2. The final solution is SuperTracy, a multi-agent LLM-based system leveraging open-source LLM models like GEMMA 2 and LLAMA 3 to make sure the company data stays on premise and safe. Prompt engineering has been applied to steer the agents towards a specific behaviour and output style. The solution is further fine-tuned on relevant datasets to make it suitable for specific use-cases. The system is built upon the RAG architecture, so that hallucination is reduced and real-time data can be used. The solution is also quantized, to make sure it can run on modest hardware. Finally, it contains additional models and modules, such as logistic event prediction and translation to different languages.
§.§.§ Language detection and translation
LLMs already have the capability to 'speak' different languages, but the performance is limited for some languages <cit.>. For an LLM-based system to function, instructions given through prompt engineering are necessary, and these prompt templates need to be in the same language as the destination language. As the templates are designed by the developer, they are predefined; for this solution, the prompt templates are available in English and Dutch. If another language is desired, the templates need to be translated. To do this, the language of the user input is first detected, after which translation can take place. The CLD3 <cit.> neural network detects the language of the user input. As mentioned earlier in the literature review, T5 models are suitable for translation tasks <cit.>. The MADLAD model, which is a variation of the T5 model, is employed for text-to-text translation of the templates.
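A minimal sketch of this detection-and-translation step is shown below. It assumes the gcld3 Python bindings for CLD3 and the publicly released MADLAD-400 translation checkpoint on Hugging Face ("google/madlad400-3b-mt" with its "<2xx>" target-language prefix); the exact packages and checkpoints used in SuperTracy may differ.
[language=Python]
import gcld3
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Detect the language of the user input with CLD3.
detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)
user_input = "Wo ist mein Paket mit Barcode 3SABC000000001?"
lang = detector.FindLanguage(text=user_input).language        # e.g. "de"

# Translate the English prompt template into the detected language with MADLAD
# (a T5 variant); checkpoint name and "<2xx>" prefix are assumptions from the public release.
if lang not in ("en", "nl"):                                  # templates exist in English and Dutch
    tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
    template = "You are SuperTracy, an AI agent helping PostNL customers track parcels."
    inputs = tokenizer(f"<2{lang}> {template}", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    template = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(template)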
§.§.§ Logistic event prediction
At the moment of data retrieval, not all 'waarnemingen' sequences were complete, as some parcels were still in transit and their final codes had not been generated yet. To explore LLMs for sequence-to-sequence problems, the T5 model is used to predict the most likely continuation of the 'waarnemingen' sequence, making up for the absence of the end state of the journey. With this strategy we can predict the future states of parcels: if a 'waarnemingen' code indicates a problem, we can anticipate which combinations of 'waarnemingen' codes will occur, so that proactive communication can take place. This allows us to complete the entire sequence of 'waarnemingen' codes and estimate the final stage of the parcel's journey. The generated output states which part of the sequence is predicted.
Given an input sequence of 'waarnemingen' codes X = [x_1, x_2, …, x_n], where each x_i represents a specific code (e.g., "A01", "A98"), the goal is to predict the most likely next code in the sequence. Each 'waarnemingen' code is converted into a dense vector representation through an embedding layer. The self-attention mechanism computes the attention scores for each pair of codes in the sequence to capture their relationships:
Attention(𝐐, 𝐊, 𝐕) = softmax(𝐐𝐊^T/√(d_k)) 𝐕
Multiple self-attention mechanisms (heads) capture different aspects of the relationships:
MultiHead(𝐐, 𝐊, 𝐕) = [head_1; head_2; …; head_h] 𝐖_O
The output of the multi-head attention is passed through a position-wise feed-forward neural network to introduce non-linearity:
FFN(𝐱) = max(0, 𝐱𝐖_1 + 𝐛_1) 𝐖_2 + 𝐛_2
Finally, the transformer's decoder predicts the next 'waarnemingen' code by applying a softmax layer over the vocabulary of possible codes:
P(x_n+1 | x_1, x_2, …, x_n) = softmax(𝐡_n 𝐖_O + 𝐛_O)
Using these mathematical formulations <cit.>, the T5 model processes the input sequence of 'waarnemingen' codes, capturing dependencies and predicting the most likely subsequent codes, thereby allowing us to anticipate future states and resolve issues in parcel transportation <cit.>.
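The sketch below shows how such next-event prediction could look with the Hugging Face T5 implementation. The checkpoint name, task prefix, and code vocabulary handling are illustrative assumptions; in practice the model would first be fine-tuned on historical 'waarnemingen' sequences from Collo.
[language=Python]
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "t5-small" is a stand-in; a checkpoint fine-tuned on PostNL event sequences would be used.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

observed = ["A01", "A98", "A95", "B01", "G03", "V06", "A04"]   # events seen so far
prompt = "predict next events: " + " ".join(observed)          # illustrative task prefix

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
predicted = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(predicted)  # after fine-tuning, e.g. "K50 B01 A96 J01 ..." completing the journey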
§.§.§ Fine-tuning LLAMA
The LLAMA model is used for the main expected task of the solution: describing the journey of a specific barcode based on its associated data. In this step, the advanced capabilities of the latest open-source LLM, specifically the LLAMA 3 model developed by Meta <cit.>, are leveraged. The objective is to fine-tune this model using the prepared training datasets described in 3.1.
The training dataset was constructed following the Alpaca-style methodology<cit.>, where synthetic data is generated to mimic real-world scenarios. This methodology involves structuring data in a specific format that consists of an instruction, contextual input, and the model's expected response. For example, the training set schema looks like this:
[language=Python]
alpaca_training_schema = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Describe the key events in the package's journey from sender to receiver, focusing on crucial moments.
### Input:
Tracking details indicating times, locations, and statuses.
### Response:
A concise narrative summarizing the package's journey, highlighting important transitions and updates.
"""
This structured approach allows us to create a high-quality, domain-specific training dataset without the extensive manual annotation that is typically required. By integrating these training datasets, the LLAMA model not only retains its comprehensive linguistic comprehension but also gains specialized knowledge crucial for interpreting parcel statuses and journeys effectively.
Fine-tuning the LLAMA 3 model with our enriched dataset is executed using the Hugging Face Supervised Fine-tuning Trainer<cit.>. The optimization objective is to minimize the loss function over the training data, typically using cross-entropy loss for language models:
ℒ(θ) = 1/N∑_i=1^Nℒ(ŷ_i, y_i)
where θ denotes the model parameters, N is the number of training samples, ŷ_i and y_i represent the predicted and true labels, respectively, and ℒ is the loss function <cit.>. The training process involves techniques such as gradient accumulation and mixed-precision training to manage memory consumption and accelerate computation. Gradient accumulation effectively increases the batch size without requiring additional memory, which is crucial for handling large models. By carefully adjusting parameters and leveraging these advanced training techniques, we ensure that the fine-tuned model achieves optimal performance.
The plot in figure 1 delineates the training loss trajectory of a Large Language Model (LLAMA3) <cit.> during a fine-tuning phase conducted over a single epoch, comprising 120 steps. The model configuration involves 83,886,080 trainable parameters. Key training parameters include a per-device train batch size of 2, a gradient accumulation strategy across 4 steps, and an initial learning rate of 2 × 10^-4, optimized using an 8-bit AdamW optimizer with a linear learning rate scheduler. The primary plot line, marked in blue, represents the raw training loss recorded at each step, reflecting the model's immediate response to batch-level optimizations. Overlaying this, a red trend line—calculated as a moving average—smoothes out fluctuations to highlight broader trends in model performance and stability. Noteworthy are the annotations at the lowest and highest points of loss, which pinpoint critical moments in the training process where the model achieved optimal learning and where it may have struggled, respectively.
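The following sketch mirrors the fine-tuning setup described above, using the TRL SFTTrainer with the reported hyperparameters (batch size 2, gradient accumulation over 4 steps, learning rate 2e-4, 8-bit AdamW, linear scheduler). Argument names vary between TRL versions, access to the LLAMA 3 weights requires accepting Meta's license, and the single inlined training record is a placeholder for the Alpaca-style data generated from Collo.
[language=Python]
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_name = "meta-llama/Meta-Llama-3-8B"          # gated model: requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder Alpaca-style record; real ones are generated from parcel data.
record = {
    "text": (
        "### Instruction:\nDescribe the key events in the package's journey from sender to "
        "receiver, focusing on crucial moments.\n"
        "### Input:\nA01 2024-05-01 08:00 Amsterdam; J05 2024-05-02 13:15 Den Haag; I01 ...\n"
        "### Response:\nThe parcel was accepted in Amsterdam on 1 May, sorted in Den Haag the "
        "next day, and delivered shortly after."
    )
}
train_dataset = Dataset.from_list([record] * 200)

args = TrainingArguments(
    output_dir="supertracy-llama3-sft",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    optim="adamw_bnb_8bit",                        # 8-bit AdamW (requires bitsandbytes)
)

trainer = SFTTrainer(                              # argument names differ across TRL versions
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=args,
)
trainer.train()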
§.§ Optimizing the performance of the model
To optimize the model's performance during fine-tuning, quantization is applied. Quantization reduces the memory footprint and computational requirements of neural networks by reducing the precision of the model parameters, typically from 32-bit floating-point (FP32) to lower bit-width representations such as 16-bit floating-point (FP16), 8-bit integers (INT8), or even 4-bit integers (INT4). Mathematically, this can be expressed as:

W̃ = round((W - min(W)) / Δ) · Δ + min(W)

where W̃ represents the quantized weights and Δ is the quantization step size, defined as Δ = (max(W) - min(W)) / (2^b - 1), with b being the number of bits used in the quantization <cit.>. This transformation maps the original weight values into a discrete set of levels, significantly reducing the number of bits required to store each weight. This process is crucial for deploying large models on resource-constrained devices, allowing efficient storage and faster computation without significantly compromising model performance.
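As a purely numerical illustration of the formula above (not production code), the sketch below quantizes a random weight matrix to 8 and 4 bits and reports the resulting approximation error.
[language=Python]
import numpy as np

def quantize(W, b=8):
    """Uniform b-bit quantization of a weight tensor, following the formula above."""
    delta = (W.max() - W.min()) / (2 ** b - 1)      # quantization step size
    levels = np.round((W - W.min()) / delta)        # map each weight to an integer level
    return levels * delta + W.min()                 # dequantized low-precision approximation

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)
for bits in (8, 4):
    W_tilde = quantize(W, bits)
    print(bits, "bits, max abs error:", float(np.abs(W - W_tilde).max()))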
Another employed technique is Low-Rank Adaptation (LoRA), which enhances the efficiency of large language models by updating only a small subset of model parameters. LoRA uses low-rank matrices to approximate the updates to the weight matrices in the model, thereby reducing the computational burden. Formally, let W ∈ℝ^d × k be the original weight matrix. d × k are the dimensions of the original weight matrix, where d is the number of output features and k is the number of input features. LoRA approximates W as:
W ≈ W_0 + Δ W
Where the change in W, denoted Δ W, is given by:
Δ W = A B
where W_0 is the pre-trained weight matrix, and A ∈ℝ^d × r and B ∈ℝ^r × k are low-rank matrices with r ≪min(d, k). This decomposition reduces the number of parameters from d × k to r(d + k), resulting in significant computational savings. The optimization problem during fine-tuning focuses on learning the matrices A and B, rather than the full weight matrix W <cit.>.
Quantization facilitates inference, particularly on devices with limited resources, and applying it during the training process ensures that the resulting low-precision model remains robust. To preserve numerical stability, the quantization scheme is configured in advance and gradients are still computed in floating-point precision during the backward pass. By decreasing computational and memory consumption, quantization and LoRA make it possible to run LLAMA 3 on quite modest hardware.
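A sketch of how quantized loading and LoRA can be combined in practice is given below, using the Hugging Face transformers, bitsandbytes and peft libraries. The chosen rank, target modules, and 4-bit settings are illustrative assumptions rather than the exact SuperTracy configuration, and loading the LLAMA 3 weights requires accepting Meta's license.
[language=Python]
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load LLAMA 3 with 4-bit quantized weights; compute stays in floating point.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", quantization_config=bnb_config
)

# LoRA: learn small low-rank factors A and B instead of the full weight update.
lora_config = LoraConfig(
    r=16,                                  # rank r much smaller than min(d, k)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projection matrices of LLAMA
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the A and B matrices are trainable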
§.§ Architectural design
§.§.§ The multi-agent setup
The main idea of this multi-agent LLM-based system revolves around domain-specific knowledge agents, characterized by role-based behaviour and system-template prompt engineering. This approach determines the main goals of each specific agent and constrains its behavior based on specified requirements and the state of the environment. In this context, the environment is the SuperTracy interactive system, encompassing user interactions and the contextual states of the conversation. Three agents are defined:
* Reception Agent: The Reception Agent handles basic communication. It introduces itself to the user and provides guidance on using the system, including instructions to provide the parcel's barcode. This agent ensures a smooth initial interaction and prepares the user for further engagement with the system.
* Parcel Agent: The Parcel Agent is responsible for analyzing parcel data and generating detailed narratives of parcel journeys. These narratives can range from detailed reports to short, coherent answers, depending on user needs. This agent includes specialized sub-models, such as a predictive model for forecasting the future status of parcels. By customizing its responses, the Parcel Agent enhances the user's ability to track and understand parcel movements comprehensively.
* Knowledge Expert Agent: The Knowledge Expert Agent specializes in answering user questions related to internal PostNL concepts and domain-specific terms. This agent can handle queries ranging from simple explanations to complex scenarios. Its knowledge base is derived from PostNL's internal documents and general logistics knowledge. The agent's ability to reason and provide contextually accurate answers makes it a valuable resource for users seeking detailed information on PostNL operations.
The Reception agent and the Parcel agent contribute to the expected outcome of SuperTracy, namely communicating about the parcel's track-and-trace journey. The Knowledge Expert agent is an additional bonus feature; the idea emerged beyond the scope of the main research question and the project requirements, which were limited to parcel track and trace. The Knowledge Expert agent is developed using the same approach as the Parcel agent: it is trained on internal PostNL documents. This enables it to answer and explain questions related to internal PostNL knowledge, providing reasoning based on this knowledge. It can handle simple queries, such as explanations of PostNL-specific abbreviations or business and technical terms, as well as more sophisticated questions, such as offering advice on logistical scenarios with certain problems. In all cases, the model answers questions using its acquired knowledge and foundational reasoning capabilities. Examples of simple and complex queries are shown in Appendix A.
The agents together generate parcel journeys that are comprehensible due to the use of plain language instead of a sequence of logistic events. Through prompt engineering, the model exhibits both efficiency and adaptability, generating concise stories of parcel journeys that incorporate contextual information.
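To illustrate how such a multi-agent setup can be orchestrated, the simplified dispatcher below routes a user message to one of the three agents. The agent functions are stubs, and the barcode pattern and keyword list are illustrative assumptions, not the actual SuperTracy routing logic.
[language=Python]
import re

# Illustrative barcode pattern; the real validation combines a Random Forest
# classifier with regex matching (see the architecture overview below).
BARCODE_PATTERN = re.compile(r"^3S[A-Z0-9]{11,13}$")

def reception_agent(message: str) -> str:
    return "Welcome! Please provide the barcode of your parcel."

def parcel_agent(barcode: str) -> str:
    return f"Generating the journey story for parcel {barcode}..."

def knowledge_expert_agent(message: str) -> str:
    return "Explaining the PostNL concept you asked about..."

def route(message: str) -> str:
    """Dispatch a user message to the most suitable agent."""
    tokens = [tok.strip("?.,!") for tok in message.split()]
    barcode = next((tok for tok in tokens if BARCODE_PATTERN.match(tok)), None)
    if barcode:
        return parcel_agent(barcode)                            # journey narrative
    if any(term in message.lower() for term in ("waarneming", "avonddistributie", "eta")):
        return knowledge_expert_agent(message)                  # internal-concept question
    return reception_agent(message)                             # greeting / ask for barcode

print(route("Where is my parcel 3SABC000000001?"))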
§.§.§ Prompt engineering
The developer of an LLM-based system can steer the behaviour of the system through prompt engineering. For this solution, Chain-of-Thought prompting and few-shot prompting are used <cit.>. Chain-of-Thought prompting is a technique to prompt LLMs in a way that facilitates coherent and step-by-step reasoning processes. Few-shot prompting provides models with a few input-output examples to induce an understanding of a given task.

Within prompt engineering, a template refers to a natural language scaffolding filled in with raw data, resulting in a prompt <cit.>. Throughout the solution, four different types of templates are used.
* Template for cognitive behaviour of the agents: In this template the agent is told how to behave. This differs for each agent. Here you can see the prompt template for the Reception agent:
[language=Python]
def receptionagent_context_en():
return(
"""You are SuperTracy, an AI agent helping PostNL customers with their parcel tracking needs. You provide detailed information about the journey of parcels using a given barcode. Your primary function is to guide users to provide the barcode of their parcel, and then use that information to fetch and relay tracking details. You respond in a helpful and professional manner, always prompting users for the barcode if it hasnt been provided, and handling errors or questions about parcel tracking gracefully. If the barcode provided does not return any information or is invalid, you instruct the user on how to find a valid barcode or suggest alternative solutions. When you do not know the answer to a user's question, respond with, I'm sorry, I don't have information on that topic. Please provide a barcode if you need tracking details. Your interactions are designed to be clear, concise, and focused on parcel tracking to enhance customer service efficiency."""
)
* Template with instructions for each agent: The instruction templates tell agents what to do. In this template, variables can be used that refer to other prompt templates. In the example below, the variable context_str refers to the template of the parcel report. Using that template, the agent can act based on the instructions.
[language=Python]
def parcelagent_context_en(context_str):
    return (
        "You are a PostNL customer service AI. You have been provided with a comprehensive "
        "overview of the journey of a parcel. This overview includes timelines, detailed "
        "event descriptions, and insights into the parcel's handling and route.\n"
        "Here are the relevant details for the context:\n"
        f"{context_str}\n"
        "Instruction: give a concise and short story about a parcel's journey from shipment "
        "to potential delivery. Highlight key events and movements between sorting centers, "
        "important timelines, and the logistics of parcel handling. Incorporate predictions "
        "regarding the future states of the parcel."
    )
* Template for creating a parcel report: This template is based on Chain-of-Thought and introduces a step-by-step reasoning process for the agent on what to do with the input prompt. There are various steps, which can't all be included here, but the general reasoning process for the agent is to extract the barcode from the input and gather the related information from the provided dataset. The key data is structured in a template and passed to the parcel agent to generate a response with.
* Template for generating the output: In this prompt template variables like the memory of the agent, the context window, the chat mode and optional follow up questions to ask are determined. Also the temperature of the agent can be defined, which is a parameter that controls the randomness of the model's output, affecting how predictable or creative the generated text is. Based on this template, the agent will generate the response.
§.§.§ RAG architecture
To address challenges and drawbacks of the fine-tuned model like hallucinations and performance issues, we leverage the power of RAG architecture and a vectorized database. RAG architecture combines retrieval and generative capabilities to enhance LLM responses. The architecture has two components: the retriever and the generator. The retriever fetches relevant documents or data segments from a large corpus, while the generator creates responses based on the retrieved information <cit.>. Vector databases store data in vector format for efficient similarity searches. Embedding models convert textual data into high-dimensional vectors where semantically similar texts are closer together <cit.>. Hence, the similarity between a document 𝐝 and a query 𝐪 is computed using cosine similarity:
sim(𝐪, 𝐝) = (𝐪·𝐝) / (‖𝐪‖ ‖𝐝‖)
The embedding model used is “mxbai-embed-large”, a transformer-based model that generates high-dimensional, semantically rich embeddings. It uses multi-head self-attention mechanisms, dynamically weighing the importance of different words in a sentence. The attention score α_ij is calculated as:
α_ij = exp(e_ij)/∑_k=1^nexp(e_ik)
where e_ij = 𝐪_i ·𝐤_j. The model passes input text through multiple layers of attention mechanisms and feed-forward neural networks, producing a dense vector that captures the text's semantic meaning. This embedding is used for retrieval in RAG architectures. Mxbai-embed-large achieves state-of-the-art performance in NLP tasks, making it a reliable choice for embedding-based retrieval in RAG architectures <cit.>. Optimizing the retriever, generator, and vector database contributes to scalability. Verifying and validating generated responses are crucial to mitigate hallucinations <cit.>.
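The retrieval step can be sketched as follows with the sentence-transformers library and the mxbai-embed-large model mentioned above; the toy document store and the assumption that the model is loaded through sentence-transformers are illustrative.
[language=Python]
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# Toy knowledge base; in SuperTracy these chunks come from the vectorized database.
documents = [
    "Waarneming J05: parcel sorted at the sorting centre.",
    "Waarneming I01: parcel delivered to the recipient.",
    "Waarneming A19: the ETA of the parcel was updated.",
]
doc_vectors = embedder.encode(documents)

def retrieve(query: str, k: int = 2):
    """Return the k documents most similar to the query under cosine similarity."""
    q = embedder.encode([query])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved chunks are appended to the generator's prompt (the augmentation step).
print(retrieve("What does event I01 mean?"))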
To feed the parcel data into the implemented RAG architecture, an ETL pipeline is developed to Extract, Transform, and Load the parcel data. This pipeline consists of the following three steps:
* Extract: Data is retrieved from the earlier mentioned sources (3.1), the main one being Collo.
* Transform: Data is cleaned, normalized, and structured; for example, date formats are standardized, and missing values are handled.
* Load: Transformed data is loaded into a vectorized database, ready for querying and analysis by LLMs.
The integration of RAG architecture and the developed ETL pipeline enhances the precision and contextual richness of the responses of SuperTracy.
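A condensed sketch of this ETL pipeline is shown below. The column names, sample rows, and the use of Chroma as the vector database are illustrative assumptions; Chroma is used here only as a stand-in vector store with its default embedding function.
[language=Python]
import chromadb
import pandas as pd

# Extract: stand-in for pulling raw parcel rows from the data warehouse.
raw = pd.DataFrame({
    "barcode": ["3SABC000000001"] * 3,
    "waarneming_code": ["A01", "J05", "I01"],
    "timestamp": ["2024-05-01 08:00", "2024-05-02 13:15", "2024-05-03 09:30"],
    "location": ["Amsterdam", "Den Haag", "Delft"],
})

# Transform: standardize dates, drop incomplete rows, build one text chunk per event.
raw["timestamp"] = pd.to_datetime(raw["timestamp"], errors="coerce")
raw = raw.dropna(subset=["barcode", "waarneming_code", "timestamp"])
raw["chunk"] = (raw["barcode"] + " | " + raw["waarneming_code"] + " | "
                + raw["timestamp"].astype(str) + " | " + raw["location"])

# Load: store the chunks in a vector database for semantic retrieval.
client = chromadb.Client()
collection = client.create_collection("parcel_events")
collection.add(ids=[str(i) for i in raw.index], documents=raw["chunk"].tolist())

# The retriever can now fetch relevant chunks for a barcode-related query.
print(collection.query(query_texts=["Where is parcel 3SABC000000001?"], n_results=2))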
§.§ Overall System Architecture Overview
In the diagram below (Figure 2) you can see the architecture of the final system, where all the components come together that form SuperTracy. The architecture of the system is designed to ensure integration and effective collaboration among the agents and models. Key components include:
* LLAMA3 and GEMMA2: LLMs serving as the backbone for different agents, providing advanced language understanding and generation capabilities <cit.>.
* CLD3: A neural network model for detecting the language of user queries, ensuring accurate processing of inputs <cit.>.
* MADLAD: A text-to-text translation model facilitating multilingual support <cit.>.
* mxbai-embed-large: An embedding model used in the RAG architecture to enhance retrieval and augmentation processes <cit.>.
* Random Forests Classifier and Regex Pattern Matching: Models and methods for verifying the correctness of barcodes, enhancing the system's reliability.
* T5 model: This model is used for logistic event prediction of the parcel.
* Prompt Factory: Prompt factory is the module where prompt engineering takes place and various prompt templates are created and managed.
The integration of these components forms a cohesive system capable of addressing various user needs through the Reception, Parcel, and Knowledge Expert Agents. Each agent operates within its specialized domain, leveraging the strengths of the underlying models to provide accurate, relevant, and timely responses.
§.§.§ The final product and User-Interface
This final stage required integrating all components and submodules into a unified architecture, involving the following key steps:

* System Integration: Ensuring that all components, including the fine-tuned LLMs, the RAG architecture, and the agents, work together seamlessly was a challenging software engineering task. This included writing code, debugging, and optimizing system performance to handle user queries efficiently.
* Web Platform Development: Creating a user-friendly web interface that allows users to interact with the system. This platform serves as the front end, providing an intuitive and accessible means for users to query the knowledge expert agent and receive responses. The web-interface is visible in the figures in Appendix A.
The successful execution of these procedures has led to the development of a comprehensive system that combines multiple LLMs into a fully operational and user-interactive system. This system not only provides answers to basic questions about PostNL-specific terminology but also offers advanced guidance on logistical situations, showcasing the advanced reasoning abilities of the LLMs.
§ EVALUATION AND DISCUSSION
§.§ Technical Evaluation of the model
SuperTracy is an MVP that has not been deployed yet; therefore, its technical performance cannot be formally measured. Below are a few important implementation goals that made the finalization of this MVP possible. These implementation goals build further on the three research goals mentioned in section 1.3.
* Lightweight Deployment: The system operates effectively on local machines with modest hardware. This has been achieved by using quantization and open-source LLM models. For the deployment of the MVP of SuperTracy, a MacBook Pro with an M1 Max chip and 64 GB of memory has been used. The reaction time of the system was always less than 2 seconds, and videos of the system's performance were recorded.
* Complete Local Integration: All modules and models used for SuperTracy are integrated and run entirely on the local system. This means that external APIs such as the OpenAI or Bedrock APIs, which can be quite expensive at scale, are not used. Open-source models like GEMMA 2 and LLAMA 3 have been used instead, which also ensures the company data stays on premise.
* Integration with RAG Architecture: The integration of agents with the RAG architecture allows agents to work individually or together to perform diverse and complex tasks. Using RAG has made it possible to specialize the LLMs for specific tasks, thereby enabling the business use-case. Through RAG, agents are able to utilize embedded documents to enhance their knowledge base. RAG also enables the system to trace its reasoning and the resources it used, in contrast to closed-source external LLMs, which act as a black box.
§.§ Human Evaluation of the generated output by SuperTracy
The research goals stated in section 1.3 require showing that an LLM-based system can be built in-house and can solve the business use-case of mimicking the role of a logistical expert: making sense of the logistical events of a barcode and being able to communicate about them.
Evaluating LLM models is challenging, as the output is different each time and can be judged against a variety of metrics, such as fluency, accuracy, trustworthiness, presence of bias, factuality, multilingual performance, reasoning, and more <cit.>. The task of the LLM can also be diverse, for example summarizing, translating, question answering, or casual conversation. Therefore the first question is 'what' to evaluate. For the case of SuperTracy, it was decided together with the business experts to evaluate the factual correctness and relevance of the information given in the parcel story. Aspects like fluency or multilingual performance, and the performance of the Knowledge Expert agent and Reception agent, are left out of scope.
The second question is 'how' to evaluate. The two common evaluation methods are automatic evaluation and human evaluation <cit.>. For the evaluation of SuperTracy, human evaluation is chosen, as the available automated evaluation techniques and benchmarks are not suitable for evaluating the factuality of enterprise-specific generated parcel stories <cit.>. When choosing human evaluation methods for LLMs, attention has to be paid to various crucial factors to guarantee the dependability and precision of assessments <cit.>. Important factors are the number of evaluators, typically around nine <cit.>, the evaluation criteria ('what' to evaluate), and the evaluators' expertise level.
§.§.§ Experimental Setup of Human Evaluation
A panel of eight logistical domain experts from the supply chain team was asked to critically review the output of SuperTracy based on the factual correctness of the generated parcel stories.
A sample of 100 parcel barcodes was selected, based on minimal requirements provided by the logistics experts, to ensure that they represented complex logistical scenarios likely to challenge the system's capabilities. These requirements were expressed by indicating which 'waarneming' codes point to an unhappy journey flow. The barcodes containing the selected events were then given to the model, which generated a parcel story for each.
The experts were asked to assess the factual correctness and relevance of the stories on a scale ranging from 1, indicating a very low level of correctness and relevance, to 5, representing an exemplary level of performance in parcel story generation. Participants were asked to provide an explanation for any score lower than 3; in that way, feedback on and insight into lower scores were obtained. It was decided together with the business experts that a score of 3 was good enough and that anything below indicated low quality of the generated stories.
§.§ Results of the evaluation of Domain Experts
§.§.§ Quantitative Results
The scores given by the domain experts for the parcel stories generated by SuperTracy are shown in the figure below. The AI-generated outputs are generally well received by the logistics domain experts. The most common scores are 3 and 4, with a median score of 4, and 75% of the generated parcel stories received a score of 3 or more. This reflects that most domain experts rated SuperTracy's performance favorably. Comparing these results to the target score of 3 or higher set by the domain experts indicates that SuperTracy performs well enough to address the problem statement.
§.§.§ Qualitative Feedback
For scores lower than 3, the domain experts provided reasoning for their choice. The main gist of the feedback gathered from the open answers concerns incorrect generalizations or incorrect assumptions about a few event codes. These are summarized in the following points:
* Incorrect assumptions and generalizations of some logistic events: For example, the logistic event 'ETA was updated' occurs quite a few times at the end of a logistical sequence. The model interprets this as a delay, but it is not necessarily one. Likewise, according to the standard evening distribution process ('Avonddistributie'), routes get rescheduled because they are part of a planned network, and in the morning planners can shift parcels between routes to improve the overall planning. This has nothing to do with the 'delays' or 'unforeseen circumstances' that SuperTracy mentions in the parcel stories.
* Some steps are default in the process and are not interesting to show: Some logistic events merely indicate that something could be changed, for example the event 'Changing ETA is possible'. This does not mean that anything has to be acted upon, as it is an automatically released default event, and it does not have to be mentioned in the parcel story; only the changes that were actually made need to be mentioned. Another example is that before the first sorting there is always a notification that 'it is not possible to change the date or time'. This is interpreted as a problem, while actually nothing went wrong. Information that is not valuable to share should be distinguished upfront to make the outcome clearer to the recipient.
* Data quality issues in location data: The location data had some issues. Most of the locations were identified as sorting centres, but not all locations are sorting centres; there is a difference between sorting, distribution and retail locations. Not distinguishing between these locations, which execute different business processes, makes the parcel story less accurate.
§.§ Conclusion
The goal of this project was to create an MVP that can act as a showcase of the value that generative AI can bring to PostNL. Reflecting on the goals set at the beginning of the research in section 1.3, we can conclude that these goals have been achieved.
The main goals were to explore business use-cases that can be solved by generative AI and to create an MVP that shows the value to the business. Throughout the demo and the evaluation of SuperTracy by domain experts, many positive reactions were received. The use-case of SuperTracy has inspired the supply chain team to consider using such a system to improve their workflow, and the business stakeholders to further explore possibilities of refining it for deployment. The demo of SuperTracy has also made the stakeholders realize the value of leveraging generative AI techniques, leading to brainstorming on various new use-cases during the demo. The goal of using in-house solutions when designing and creating the system was also achieved, by using open-source LLM models and local computing power for running the system. To further prove the value of the use-case, the human evaluation confirms that SuperTracy is a successful MVP that can mimic the track-and-trace capabilities at PostNL by effectively creating a story of the parcel's journey, receiving a score of 3 or higher in 75% of the cases. Interestingly, the feedback provided by the subject matter experts was mostly about the quality of the input data, not about the LLM-based model itself. At the same time, PostNL is working on strengthening its data foundation, which will result in better data quality, which in turn forms the basis of a well-performing LLM system.
Altogether, the creation of SuperTracy has been successful and has contributed to the maturity of using generative AI solutions at PostNL, shedding light on its possibilities and business value, all with a minimal amount of resources. This counters the sometimes implicit assumption that adopting generative AI technologies is expensive or out of reach.
§.§ Future Work
Future work depends on the scope: improving the MVP itself, or scaling up through deployment. The MVP worked well but can be improved. Future improvements to the MVP of SuperTracy include refining the system's ability to identify and communicate only the most relevant information, ensuring clarity and precision in the narratives. This can be done by removing auto-generated default events in cases where they can always be neglected; if events are interesting only in some cases, a knowledge graph can be designed that allows inference to understand the difference. The evaluation could also be made more extensive on diverse aspects: human or automatic evaluation methods could be used for aspects like fluency or multilingual performance, and additional components of the system could be evaluated individually, such as the performance of the T5-based logistic event prediction.
The broader scope concerns the deployment of SuperTracy and the further specification of its use-case. In the case of deployment, it is important to pay significant attention to data privacy and security, preventing the oversharing of sensitive information with the AI system, which is a common challenge when leveraging LLMs in business <cit.>. This requires awareness in designing and using AI solutions in an enterprise. Next to secure deployment, maintaining and increasing the quality of the datasets used to fine-tune the LLM models is very important; the evaluation already pointed out some data quality issues that deserve further attention. Overall, the execution of this MVP has shown that the methodologies used provide value for the use-case, and the same methods, technologies and approach can be applied to other use-cases as well.
99
postnlReport
PostNL: Annual Report 2023.<https://annualreport.postnl.nl/2023/xmlpages/tan/files?p_file_id=866>
banh2023generative
Banh, L., Strobel, G.: Generative Artificial Intelligence. Electronic Markets 33(1), 63 (2023)
wu2023chatgpt
Wu, T.Y., He, S.Z., Liu, J.P., Sun, S.Q., Liu, K., Han, Q.-L., Tang, Y.: A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development. IEEE/CAA J. Autom. Sinica 10(5), 1122–1136 (2023). <https://doi.org/10.1109/JAS.2023.123618>
teubner2023welcome
Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., Hinz, O.:
Welcome to the Era of ChatGPT et al.: The Prospects of Large Language Models.
Business & Information Systems Engineering, 65(2), 95–101 (2023).
arman2023exploring
Arman, M., Lamiyar, U.R.: Exploring the Implication of ChatGPT AI for Business: Efficiency and Challenges. International Journal of Marketing and Digital Creative 1(2), 64–84 (2023)
zhao2020exploring
Zhao, H., Jia, J., Koltun, V.: Exploring Self-Attention for Image Recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10076–10085 (2020)
chang2024survey
Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Xie, X.: A Survey on Evaluation of Large Language Models. ACM Transactions on Intelligent Systems and Technology 15(3), 1–45 (2024)
liu2023llm360
Liu, Z., Qiao, A., Neiswanger, W., Wang, H., Tan, B., Tao, T., Xing, E.P.: LLM360: Towards Fully Transparent Open-Source LLMs. arXiv preprint arXiv:2312.06550 (2023)
kukreja2024literature
Kukreja, S., Kumar, T., Purohit, A., Dasgupta, A., Guha, D.: A literature survey on open source large language models. Proceedings of the 2024 7th International Conference on Computers in Management and Business. <http://dx.doi.org/10.1145/3647782.3647803> (2024)
vm2024fine
VM, K., Warrier, H., Gupta, Y.: Fine Tuning LLM for Enterprise: Practical Guidelines and Recommendations. arXiv preprint arXiv:2404.10779 (2024)
zhang2023siren
Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., Huang, X., Zhao, E., Zhang, Y., Chen, Y., et al.: Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219 (2023)
gao2023retrieval
Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Wang, M., Wang, H.: Retrieval-Augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 (2023). <https://arxiv.org/abs/2312.10997>
sorensen2022information
Sorensen, T., Robinson, J., Rytting, C. M., Shaw, A. G., Rogers, K. J., Delorey, A. P., Khalil, M., Fulda, N., Wingate, D.: An information-theoretic approach to prompt engineering without ground truth labels. arXiv preprint arXiv:2203.11364 (2022). <https://arxiv.org/abs/2203.11364>
marvin2023prompt
Marvin, G., Hellen, N., Jjingo, D., Nakatumba-Nabende, J.: Prompt Engineering in Large Language Models. In: International Conference on Data Intelligence and Cognitive Informatics, pp. 387–402, Springer Nature Singapore, Singapore (2023)
bsharat2023principled
Bsharat, S. M., Myrzakhan, A., Shen, Z.: Principled instructions are all you need for questioning llama-1/2, GPT-3.5/4. arXiv preprint arXiv:2312.16171 (2023). <https://arxiv.org/abs/2312.16171>
sahoo2024systematic
Sahoo, P., Singh, A.K., Saha, S., Jain, V., Mondal, S., Chadha, A.: A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. arXiv preprint arXiv:2402.07927 (2024)
kaplan2020scaling
Kaplan, J., McCandlish, S., Henighan, T., Brown, T.B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., Amodei, D.: Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361 (2020)
liu2023llm
Liu, S., Liu, Z., Huang, X., Dong, P., Cheng, K.-T.: LLM-FP4: 4-Bit floating-point quantized transformers. arXiv preprint arXiv:2310.16836 (2023). <https://arxiv.org/abs/2310.16836>
jacob2018quantization
Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., et al.: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2704-2713. <https://openaccess.thecvf.com/content_cvpr_2018/html/Jacob_Quantization_and_Training_CVPR_2018_paper.html>
raffel2019exploring
§ APPENDICES
§ ILLUSTRATIONS OF VARIOUS APPLICATIONS OF SUPERTRACY, INCLUDING USER QUERIES AND SYSTEM RESPONSES.
*The illustrations include certain business-related information pertaining to PostNL, which has been redacted at the request of the company.
|
http://arxiv.org/abs/2409.02365v1 | 20240904012628 | Splitting of uniform bundles on quadrics | [
"Xinyi Fang",
"Duo Li",
"Yanjie Li"
] | math.AG | [
"math.AG",
"14M15, 14M17, 14J60"
] |
§ ABSTRACT
We show that there exist only constant morphisms from ℚ^2n+1(n≥ 1) to 𝔾(l,2n+1) if l is even (0<l<2n) and (l,2n+1) is not (2,5). As an application, we prove that on ℚ^2m+1 and ℚ^2m+2(m≥ 3), any uniform bundle of rank 2m splits.
Keywords: uniform bundle; quadric; splitting of vector bundles.
MSC: 14M15; 14M17; 14J60.
§ INTRODUCTION
In this article, we assume all the varieties are defined over ℂ. If X is a rational homogeneous variety of Picard number 1 (we call
X a generalised Grassmannian for short), then X is swept by lines. Let E be a vector bundle on X. We consider its restriction E|_L(≃𝒪(a_1)⊕⋯⊕𝒪(a_k)) to any line L⊆ X. If the splitting type (a_1,…,a_k) of E|_L is independent of the choice of L, we call E a uniform bundle.
Every uniform bundle on ℙ^n whose rank is smaller than n splits (cf. the main theorems of <cit.> and <cit.>). On a Grassmannian Gr(k,n+1)(k≤ n+1-k), any uniform bundle whose rank is smaller than k splits (cf. <cit.>). On some other generalised Grassmannians, there are similar results, see <cit.>, <cit.> and <cit.>.
In a recent article <cit.>, for an arbitrary generalised Grassmannian X, we prove that once the rank of a uniform bundle E is smaller than e.d.(VMRT)+1 (for the definition of e.d.(VMRT), see <cit.>), the bundle E necessarily splits. Moreover, for most generalised Grassmannians, we classify unsplit uniform bundles of minimal ranks. The upper bound e.d.(VMRT) is optimal for many generalised Grassmannians, for example it is optimal for ℚ^n(3≤ n ≤ 6) (see <cit.>). However, we will see that this upper bound is not optimal for higher dimensional quadrics.
In this article, we first show that there exist only constant morphisms from ℚ^2n+1(n≥ 1) to 𝔾(l,2n+1) if l is even (0<l<2n) and (l,2n+1) is not (2,5).
Combining this result with the method of <cit.>, we prove that every uniform bundle of rank 2n on ℚ^2n+1 and ℚ^2n+2(n≥ 3) splits and the upper bound 2n is optimal for ℚ^2n+1. This gives the first known example such that the optimal upper bound is bigger than e.d.(VMRT).
This article is organised as follows: in Section 2, we study morphisms from ℚ^2n+1(n≥ 1) to 𝔾(l,2n+1) and the main result of this section is Proposition <ref>; in Section 3, we study uniform bundles on smooth quadrics and the main result of this section is Theorem <ref>. Furthermore, we also study uniform bundles on B_n/P_k and D_n/P_k, see Corollary <ref>, Corollary <ref>, Corollary <ref> and Corollary <ref>.
§ MORPHISMS FROM QUADRICS TO GRASSMANNIANS
There exist only constant morphisms from ℚ^2n+1(n≥ 1) to 𝔾(l,2n+1) if l is even (0<l<2n) and (l,2n+1) is not (2,5).
We mainly follow the proof of the main theorem in <cit.>. Let H be the cohomology class of a hyperplane on ℚ^2n+1. The cohomology ring of ℚ^2n+1 is
H^∙(ℚ^2n+1,ℤ)=ℤ⊕ℤH ⊕⋯⊕ℤH^n⊕ℤH^n+1/2⊕⋯⊕ℤH^2n+1/2.
Let U (resp. Q) be the universal subbundle (resp. quotient bundle) on 𝔾(l,2n+1). Suppose that f:ℚ^2n+1→𝔾(l,2n+1) is a non-constant morphism. Let c_i and d_j be rational numbers satisfying
c_i(f^*U^∨)=c_iH^i for 1≤ i ≤ l+1 and c_j(f^*Q)=d_jH^j for 1≤ j ≤ 2n+1-l.
By <cit.>, both c_i(1≤ i ≤ l+1) and d_j(1≤ j ≤ 2n+1-l) are non-negative. We note that for any 1≤ i,j<n, c_i and d_j are integers. For any i,j≥ n, 2c_i and 2d_j are integers.
Then from the exact sequence 0→ f^*U →𝒪_ℚ^2n+1^⊕ 2n+2→ f^*Q → 0, we get the equality of polynomials:
(1-c_1t+c_2t^2+⋯+(-1)^l+1c_l+1t^l+1)(1+d_1t+⋯+d_2n+1-lt^2n+1-l)
= 1+(-1)^l+1c_l+1d_2n+1-lt^2n+2.
If c_l+1d_2n+1-l is 0, then by (<ref>) both c_1 and d_1 are zero, which implies that f is constant. So we may assume the numbers c_1,d_1,c_l+1 and d_2n+1-l are non-zero. By <cit.>, all the rational numbers c_1,c_2,…,c_l+1 and d_1,d_2,…,d_2n+1-l are positive.
Let a be √(c_l+1d_2n+1-l). We set C_i:=c_i/a^i(1≤ i ≤ l+1) and D_j:=d_j/a^j(1≤ j ≤ 2n+1-l). We note that if a,C_i and D_j are all positive integers, then by the same proof of <cit.>, we can get a contradiction. Then it suffices to show that a,C_i and D_j are integers.
We firstly show that a is an integer. Since l is even, similar to the proof in <cit.>, we can show that a is c_m+1/c_m, where m is l/2. So a is rational, we may assume a=s/t, where s and t are coprime positive integers. By definition, we have a^2n+2=c_l+1d_2n+1-l. Since 2c_l+1 and 2d_2n+1-l are integers, 4a^2n+2 is an integer. So t^2n+2 divides 4, which implies that t is 1, as n is at least 1. Hence a is an integer.
We now show that C_i(=c_i/a^i) and D_j(=d_j/a^j) are integers. Let F_1(x)⋯ F_k(x) be the irreducible factorization of 1-x^2n+2 over ℤ[x] with F_l(0)=1(1≤ l ≤ k). Then F_l(x) is also irreducible over ℚ[x] by Gauss Lemma. Then
(1-c_1t+c_2t^2+⋯+(-1)^l+1c_l+1t^l+1)(1+d_1t+⋯+d_2n+1-lt^2n+1-l)
= 1+(-1)^l+1c_l+1d_2n+1-lt^2n+2=1-a^2n+2t^2n+2=F_1(at)⋯ F_k(at).
Note that F_i(at) is also irreducible over ℚ[t]. We have
1-c_1t+c_2t^2+⋯+(-1)^l+1c_l+1t^l+1=F_i_1(at)⋯ F_i_k_1(at) and
1+d_1t+⋯+d_2n+1-lt^2n+1-l=F_j_1(at)⋯ F_j_k_2(at).
Since the coefficients of F_l(x) are integers, C_i(=c_i/a^i) and D_j(=d_j/a^j) are integers.
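As a concrete illustration of the last step (not needed for the proof), take n=3, so that 2n+2=8 and the irreducible factorization over ℤ[x] with all constant terms equal to 1 is
1-x^8=(1-x)(1+x)(1+x^2)(1+x^4).
Each factor evaluated at at, e.g. 1+a^2t^2 or 1+a^4t^4, has its coefficient of t^i equal to an integer multiple of a^i, and the same then holds for any sub-product of the F_l(at); hence the coefficients c_i and d_j are integer multiples of a^i and a^j respectively, which is exactly the integrality of C_i and D_j.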
§ UNIFORM BUNDLES ON ℚ^2N+1(N≥ 3)
We are going to use the method in <cit.> to show that any uniform bundle of rank 2n on ℚ^2n+1(n≥ 3) splits. We firstly fix some notations.
Let E be a uniform bundle on X(=ℚ^2n+1)(n≥ 3) of rank 2n. Assume that E does not split; by <cit.>, we may assume that the splitting type of E is
(0,…,0,-1,…,-1), with l+1 zeros and 2n-l-1 entries equal to -1 (l+1≥ 2n-l-1 ≥ 1).
We denote the moduli of lines on X and the corresponding universal family by the double fibration
q: 𝒰(=B_n+1/P_1,2) → ℳ(=B_n+1/P_2),  p: 𝒰 → X(=B_n+1/P_1),
where ℳ is the moduli of lines and 𝒰 is the universal family.
The relative Harder-Narasimhan (H-N) filtration of p^*E induces an exact sequence:
0→ E_1(=q^*G_1) → p^*E → E_2(=q^*G_2⊗ p^*𝒪_X(-1)) → 0,
where G_1 (resp. G_2) is a vector bundle on ℳ of rank l+1 (resp. 2n-l-1). For each x∈ X, the restriction of relative H-N filtration to p^-1(x) induces a morphism
ψ_x:p^-1(x)(≅ℚ^2n-1)→ Gr(l+1,2n)(≅𝔾(l,2n-1)).
By <cit.>, we have the following description of cohomology rings:
H^∙(X,ℚ)=ℚ[X_1]/(X_1^2n+2),
H^∙(𝒰,ℚ)=ℚ[X_1,X_2]/(Σ_n(X_1^2,X_2^2), Σ_n+1(X_1^2,X_2^2)),
H^∙(ℳ,ℚ)=ℚ[X_1+X_2,X_1X_2]/(Σ_n(X_1^2,X_2^2), Σ_n+1(X_1^2,X_2^2)).
For a bundle F of rank r, the Chern polynomial of F is defined as
C_F(T):=T^r-c_1(F)T^r-1+⋯+(-1)^rc_r(F).
Let E(T,X_1)=∑_k=0^2ne_kX_1^kT^2n-k(∈ℚ[X_1,T]) and S_i(T,X_1,X_2)(∈ℚ[X_1+X_2,X_1X_2,T])(i=1,2) be homogeneous polynomials representing C_p^*E(T) and C_q^*G_i(T) in the cohomology rings respectively. There are equations
E(T,X_1)=C_p^*E(T) and S_i(T,X_1,X_2)=C_q^*G_i(T)(i=1,2).
Let R(X_1,X_2) be the polynomial Σ_n(X_1^2,X_2^2). By (<ref>), we have an equation of Chern polynomials:
E(T,X_1)-aR(X_1,X_2)=S_1(T,X_1,X_2)S_2(T+X_1,X_1,X_2)*.
If a is 0, then both S_1(T,X_1,X_2) and S_2(T+X_1,X_1,X_2) are polynomials only in variables T and X_1. Since S_i(T,X_1,X_2)(i=1,2) are symmetrical in X_1 and X_2, we must have S_1(T,X_1,X_2)=T^l+1 and S_2(T,X_1,X_2)=T^2n-l-1. So c_1(E_1) and c_1(E_2) are 0, and ψ_x is constant for each x∈ X, which implies that E splits (see, for example, <cit.>). Therefore, we have a ≠ 0.
§.§ Approximate solutions
To solve the equation (<ref>), we use the concept of approximate solutions introduced in <cit.>. The following definitions and propositions are basically from <cit.> and the proofs are similar.
A non-zero homogeneous polynomial P(T,X_1) with rational coefficients in variables T and X_1 of degree 2n is called an approximate solution if P(T,X_1)-R(X_1,X_2) has a proper divisor S(T,X_1,X_2) which is symmetrical in X_1 and X_2. We call such a divisor a symmetrical divisor.
By the equation (<ref>), both 1/aE(T,X_1) and 1/aE(T-X_1,X_1) are approximate solutions. We call them approximate solutions associated with (<ref>).
In the following lemma, there are some restrictions on the coefficient of X_1^2n for an arbitrary approximate solution P(T,X_1).
Let P(T,X_1)=∑_k=0^2np_kX_1^kT^2n-k be an approximate solution. Then one of the followings holds.
(1) Any symmetrical divisor of P(T,X_1)-R(X_1,X_2) is of degree one.
(2) The coefficient p_2n is 0 and the zero set of P(0,1)-R(1,z) is ({z|z^2n+2-1=0})\{1,-1}.
(3) The coefficient p_2n is 1 and the zero set of P(0,1)-R(1,z) is ({z|z^2n-1=0}∪{0})\{1,-1}.
Let S(T,X_1,X_2) be a symmetrical divisor of P(T,X_1)-R(X_1,X_2). Then
S(0,X_1,X_2) divides p_2nX_1^2n-R(X_1,X_2). As S(T,X_1,X_2) is symmetrical in X_1 and X_2, S(0,X_1,X_2) divides p_2nX_2^2n-R(X_1,X_2).
Therefore S(0,X_1,X_2) divides p_2n(X_1^2n-X_2^2n).
By the equation R(X_1,X_2)(X_1^2-X_2^2)=X_1^2n+2-X_2^2n+2, we have
(X_1^2-X_2^2)(p_2nX_1^2n-R(X_1,X_2))+X_2^2(X_1^2n-X_2^2n)=(p_2n-1)X_1^2n(X_1^2-X_2^2).
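For completeness, this identity can be checked directly: using R(X_1,X_2)(X_1^2-X_2^2)=X_1^2n+2-X_2^2n+2, the left-hand side equals
p_2nX_1^2n+2-p_2nX_1^2nX_2^2-(X_1^2n+2-X_2^2n+2)+X_1^2nX_2^2-X_2^2n+2
=(p_2n-1)X_1^2n+2-(p_2n-1)X_1^2nX_2^2=(p_2n-1)X_1^2n(X_1^2-X_2^2).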
If p_2n is not 0 and 1, we have S(0,X_1,X_2)| X_1^2n-X_2^2n and S(0,X_1,X_2)| X_1^2n(X_1-X_2)(X_1+X_2). Since S(0,X_1,X_2) is symmetrical in X_1 and X_2, S(0,X_1,X_2) is c(X_1+X_2) for some c∈ℚ. In particular, deg(S) is 1.
If p_2n is 0, then P(0,1)-R(1,z) is -(z^2n+2-1)/(z^2-1)(=-(z^2n+z^2n-2+⋯+1)). If p_2n is 1, then P(0,1)-R(1,z) is -z^2(z^2n-1)/(z^2-1)(=-(z^2n+z^2n-2+⋯+z^2)). Then the assertions follow immediately.
If p_2n is 0. Let F(T,X_1,X_2) be the polynomial satisfying
P(T,X_1)-R(X_1,X_2)=S(T,X_1,X_2)F(T,X_1,X_2).
Expand S(T,X_1,X_2) and F(T,X_1,X_2) as polynomials in the variable T, we get
S(T,X_1,X_2)=s_mT^m+⋯+s_1(X_1,X_2)T+s_0(X_1,X_2)
F(T,X_1,X_2)=f_2n-mT^2n-m+⋯+f_1(X_1,X_2)T+f_0(X_1,X_2).
By substituting T=0 in (<ref>), we have s_0(X_1,X_2)f_0(X_1,X_2)=-R(X_1,X_2). Since both s_0(X_1,X_2) and R(X_1,X_2) are symmetrical in X_1,X_2, f_0(X_1,X_2) is also symmetrical in X_1,X_2. As z^2n+2-1=0 has no multiple roots, we also have (s_0,f_0)=1. Consider the coefficients of T in both sides of (<ref>), we have s_1(X_1,X_2)f_0(X_1,X_2)+s_0(X_1,X_2)f_1(X_1,X_2)=p_2n-1X_1^2n-1. If p_2n-1 is 0, we have s_0| s_1f_0. From (s_0,f_0)=1, we get s_0| s_1
We call an approximate solution P(T,X_1)=∑_k=0^2np_kX_1^kT^2n-k a primitive approximate solution if p_2n∈{0,1} and P(T,X_1)-R(X_1,X_2) has a symmetrical divisor S_0(T,X_1,X_2) such that there is a 2(n-p_2n+1)-th primitive unit root y_0 satisfying S_0(0,1,y_0)=0.
As in <cit.>, we have the following classifications of primitive approximate solutions.
Let P(T,X_1) be a primitive approximate solutions. If p_2n is 0, we have P(T,X_1)=bT^2n for some b∈ℚ. If p_2n is 1, we have P(T,X_1)=Σ_n(bT^2,X_1^2) for some b∈ℚ.
The proof of Proposition <ref> is similar to that of <cit.>, we leave them in the Appendix (see Propositions <ref> and <ref>).
§.§ The case l is even
We firstly aim to show that l is not even and we will prove it by contradiction. Now suppose that l is even. If l is smaller than 2n-2 and (l,2n-1) is not (2,5), the morphism ψ_x (for the definition of ψ_x, see (<ref>)) is constant for each x∈ X according to Proposition <ref>. Then E splits. We exclude the remaining cases l=2n-2 and (l,2n-1)=(2,5) by calculations.
There does not exist an unsplit uniform bundle of rank 2n whose splitting type is (0,…,0,-1) on ℚ^2n+1(n≥ 3).
Suppose E is unsplit, by the same calculation as in <cit.>, we have c_1(E_2)=(X_1+X_2)-X_1=X_2. Let f(X_1,X_2) be a homogeneous polynomial of degree 2n-1 which is symmetrical in X_1 and X_2 and represents c_2n-1(E_1). By comparing the coefficients of T^0 on the left and right sides of the equation (<ref>), we get
f(X_1,X_2)X_2=e_2nX_1^2n-a(X_1^2n+X_1^2n-2X_2^2+⋯+X_2^2n).
Then we must have e_2n=a and f(X_1,X_2)=-a(X_1^2n-2X_2+X_1^2n-4X_2^3+⋯+X_2^2n-1), contradicting the assumption that f is symmetrical in X_1 and X_2.
We now exclude the case (l,2n-1)=(2,5).
There does not exist an unsplit uniform bundle of rank 6 whose splitting type is (0,0,0,-1,-1,-1) on ℚ^7.
Suppose E is unsplit. Then a in the equation (<ref>) is not 0 and
1/aE(T,X_1)=1/a∑_k=0^6e_kX_1^kT^6-k is an approximate solution which has a symmetrical divisor of degree 3. By Lemma <ref>, we have 1/ae_6=0 or 1/ae_6=1.
Let f(X_1,X_2) be a homogeneous polynomial representing c_3(E_1).
If 1/ae_6 is 0, we have f(X_1,X_2)| R(X_1,X_2)(=X_1^6+X_1^4X_2^2+X_1^2X_2^4+X_2^6). Since
the prime factorization of X_1^6+X_1^4X_2^2+X_1^2X_2^4+X_2^6 over ℚ[X_1,X_2] is (X_1^4+X_2^4)(X_1^2+X_2^2), R(X_1,X_2) has no divisor symmetrical in X_1 and X_2 of degree 3.
If 1/ae_6 is 1, then f(X_1,X_2) divides X_1^6-R(X_1,X_2)(=-X_2^2(X_1^4+X_1^2X_2^2+X_2^4)). The prime factorization of X_1^4+X_1^2X_2^2+X_2^4 over ℚ[X_1,X_2] is (X_1^2+X_1X_2+X_2^2)(X_1^2-X_1X_2+X_2^2). Therefore, X_1^6-R(X_1,X_2) has no divisor symmetrical in X_1 and X_2 of degree 3.
In both cases, we get contradictions.
§.§ The case l is odd
Suppose that l is odd. We begin with a lemma.
When l is odd, the equation E(t,1)-aR(1,0)=0 has no roots in ℝ.
In the equation (<ref>): E(T,X_1)-aR(X_1,X_2)=S_1(T,X_1,X_2)S_2(T+X_1,X_1,X_2), we let X_1 be 0 and let X_2 be 1. Then we get an equation T^2n-a=S_1(T,0,1)S_2(T,0,1). We write S_1(T,0,X_2) and S_2(T,0,X_2) as follows: S_1(T,0,X_2)=∑_i=0^l+1a_iX_2^iT^l+1-i and S_2(T,0,X_2)=∑_j=0^2n-1-lb_jX_2^jT^2n-1-l-j, where a_i(0≤ i≤ l+1) and b_j(0≤ j ≤ 2n-1-l) are rational numbers. Then -a is a_l+1b_2n-1-l. We now wish to show -a>0. To this end, for any x in X, we consider the embedding i_x:p^-1(x)(=ℚ^2n-1)↪𝒰(=B_n+1/P_1,2), which induces a morphism:
i_x^*:H^∙(𝒰,ℚ)≅ℚ[X_1,X_2]/(Σ_n(X_1^2,X_2^2),Σ_n+1(X_1^2,X_2^2))→ℚ[X_2]/X_2^2n≅ H^∙(ℚ^2n-1,ℚ).
Under the above identifications, we have S_1(T,0,X_2)=C_E_1|_p^-1(x)(T)=C_ψ_x^*U(T) and S_2(T,0,X_2)=C_E_2|_p^-1(x)(T)=C_ψ_x^*Q(T), where U (resp, Q) is the universal subbundle (resp. quotient bundle) on 𝔾(l,2n-1) and ψ_x is the morphism as in (<ref>). So we have
a_l+1X_2^l+1=(-1)^l+1c_l+1(ψ_x^*U)=c_l+1(ψ_x^*U^∨) and
b_2n-1-lX_2^2n-1-l=(-1)^2n-1-lc_2n-1-l(ψ_x^*Q)=c_2n-1-l(ψ_x^*Q) (since l is odd, 2n-1-l is even).
By <cit.>, c_l+1(ψ_x^*U^∨) and c_2n-1-l(ψ_x^*Q) are numerically non-negative, hence both a_l+1 and b_2n-1-l are non-negative. As -a is a_l+1b_2n-1-l and a is not 0, we have -a>0. So for any t∈ℝ, t^2n-a is bigger than 0. In other words, S_1(t,0,1) and S_2(t,0,1) are non-zero for any t∈ℝ. By the equations E(T,1)-aR(1,0)=S_1(T,1,0)S_2(T+1,1,0)=S_1(T,0,1)S_2(T+1,0,1), for any t∈ℝ, E(t,1)-aR(1,0) is not 0.
When l is odd and n is at least 3, the approximate solution 1/aE(T,X_1) or 1/aE(T-X_1,X_1) associated with (<ref>) is a primitive approximate solution.
Recall the equation 1/aE(t,1)-R(1,z)=1/aS_1(t,1,z)S_2(t+1,1,z). Note that l being odd implies l+1≥ 2n-1-l>1. So by Lemma <ref>, we have 1/ae_2n=0 or 1/ae_2n=1. When 1/ae_2n is 0, 1/aE(0,1)-R(1,exp(2iπ/(2n+2))) vanishes; when 1/ae_2n is 1, 1/aE(0,1)-R(1,exp(2iπ/(2n))) vanishes.
Suppose that n is 3. When 1/ae_2n is 0, we have 1/aS_1(0,1,z)S_2(1,1,z)=-(z^6+z^4+z^2+1). The prime factorization of z^6+z^4+z^2+1 over ℚ[z] is (z^4+1)(z^2+1). Since deg(S_1)≥ deg(S_2) and deg(S_2)>1, we must have S_1(0,1,z)=λ(z^4+1) for some λ∈ℚ. Then exp(2iπ/8) is a root of S_1(0,1,z) and hence 1/aE(T,X_1) is primitive. When 1/ae_2n is 1, we have 1/aS_1(0,1,z)S_2(1,1,z)=-(z^6+z^4+z^2)=-z^2(z^4+z^2+1)=-z^2(z^2+z+1)(z^2-z+1). Since S_1 is symmetrical in X_1, X_2 and deg(S_1) is at least deg(S_2), we have S_1(0,1,z)=λ(z^4+z^2+1). Then exp(2iπ/6) is a root of S_1(0,1,z) and hence 1/aE(T,X_1) is also primitive.
Suppose now n is at least 4 and that 1/aE(T,X_1) is not a primitive solution. Then we have
S_2(1,1,exp(2iπ/(2n+2)))=0 or S_2(1,1,exp(2iπ/(2n)))=0.
We now show that 1/aE(T-X_1,X_1) is a primitive solution. First we have the following inequalities:
π/(2n) < 2π/(2n+2) < 2π/(2n) < 3π/(2n) < 4π/(2n+2).
(Note that for the last inequality, we use the condition n≥ 4).
Since the roots of S_2(0,1,z) satisfy z^2n=1 or z^2n+2=1, it suffices to show that S_2(0,1,z) has a non-zero root y_0 whose argument satisfies π/(2n) < arg y_0 < 3π/(2n). By the above inequalities, we then have y_0=exp(2iπ/(2n+2)) or y_0=exp(2iπ/(2n)). Then 1/aE(T-X_1,X_1) is a primitive approximate solution.
It is enough to show that for any t∈ℝ, S_2(t+1,1,z), as a polynomial in z, has a non-zero root with argument in (π/(2n),3π/(2n)). Denote by K the set {t∈ℝ | ∃ r(t)(≠ 0)∈ℂ, S_2(t+1,1,r(t))= 0 with π/(2n) < arg r(t) < 3π/(2n)}. By (<ref>), we have 0∈ K. Note that K is open by construction; if we can show K is closed, then K is ℝ. Let t_0 be a limit point of K. Then S_2(t_0+1,1,z) has a root r(t_0) which is a limit point of {r(t) | t∈ K}. By Lemma <ref>, for any t∈ℝ, as a polynomial in z, the roots of 1/aE(t,1)-R(1,z) are non-zero. So the roots of S_2(t+1,1,z) are also non-zero. In particular, r(t_0) is not zero. Suppose that t_0∉ K; then we have arg r(t_0)=mπ/(2n), where m is 1 or 3. We may assume r(t_0)=ρ exp(imπ/(2n)), where ρ is a positive real number. Then 1/aE(t_0,1)-R(1,ρ exp(imπ/(2n))) is 0. To get a contradiction, we show that for any d∈ℝ, we have R(1,ρ exp(imπ/(2n)))+d ≠ 0. If R(1,ρ exp(imπ/(2n)))+d=0, we have the following identities:
(R(1,ρ exp(imπ/(2n)))+d)(ρ^2 exp(imπ/n)-1)
= R(1,ρ exp(imπ/(2n)))(ρ^2 exp(imπ/n)-1)+d(ρ^2 exp(imπ/n)-1)
= ρ^2n+2 exp(im(2n+2)π/(2n))-1+dρ^2 exp(imπ/n)-d
= (dρ^2-ρ^2n+2) exp(imπ/n)-(d+1)=0,
where the last equality uses exp(im(2n+2)π/(2n))=exp(imπ) exp(imπ/n)=-exp(imπ/n), since m is odd.
Since n is bigger than 3 and m is 1 or 3, exp(imπ/n) is not real. We must have d+1=0 and dρ^2-ρ^2n+2=0. But it implies dρ^2-ρ^2n+2=-ρ^2-ρ^2n+2=0, which is absurd.
Now we complete the proof for the case l is odd. By Proposition <ref> and Proposition <ref>, we have the following possibilities (note the coefficient of T^2n in E(T,X_1) is 1):
E(T,X_1) is T^2n or aΣ_n(T^2/b_1,X_1^2); E(T-X_1,X_1) is T^2n or aΣ_n(T^2/b_2,X_1^2),
where b_i(∈ℚ) satisfy b_i^n=a(i=1,2).
The above possibilities are all impossible.
If E(T,X_1) is T^2n, T^2n-aR(X_1,X_2) is S_1(T,X_1,X_2)S_2(T+X_1,X_1,X_2). However, we have
X_1-exp(2iπ/(2n+2))X_2 | R(X_1,X_2), (X_1-exp(2iπ/(2n+2))X_2)^2 ∤ R(X_1,X_2), X_1-exp(2iπ/(2n+2))X_2 ∤ 1.
By Eisenstein's criterion, T^2n-aR(X_1,X_2) is irreducible as the polynomial in the variable T with coefficients in ℚ[X_1,X_2]. This leads to a contradiction. Similar arguments can be applied to the case E(T-X_1,X_1) is T^2n.
If E(T,X_1) is b_1^nΣ_n(T^2/b_1,X_1^2)(=Σ_n(T^2,b_1X_1^2)), then we have the equalities E(T,X_1)-aR(X_1,X_2)=Σ_n(T^2,b_1X_1^2)-Σ_n(b_1X_1^2,b_1X_2^2)=(T^2-b_1X_2^2)Σ_n-1(T^2,b_1X_1^2,b_1X_2^2) (for the last equality, see <cit.> for example). We will make use of the following claim, whose proof will be given in Proposition <ref>.
Claim:
Σ_n-1(T^2,X_1^2,X_2^2) is irreducible in ℂ[T,X_1,X_2].
As b_1 is not 0, Σ_n-1(T^2,b_1X_1^2,b_1X_2^2) is irreducible. We have (T^2-b_1X_2^2)Σ_n-1(T^2,b_1X_1^2,b_1X_2^2)=S_1(T,X_1,X_2)S_2(T+X_1,X_1,X_2). Since S_1(T,X_1,X_2) is symmetrical in X_1,X_2 and the coefficient of T^2n of S_1 is 1,
then S_1(T,X_1,X_2) is Σ_n-1(T^2,b_1X_1^2,b_1X_2^2) and hence S_2(T+X_1,X_1,X_2) is T^2-b_1X_2^2. So S_2(T,X_1,X_2) is (T-X_1)^2-b_1X_2^2, which is not symmetrical in X_1,X_2. We get a contradiction.
If E(T-X_1,X_1) is b_2^nΣ_n(T^2/b_2,X_1^2)(=Σ_n(T^2,b_2X_1^2)), we have (T^2-b_2X_2^2)Σ_n-1(T^2,b_2X_1^2,b_2X_2^2)=S_1(T-X_1,X_1,X_2)S_2(T,X_1,X_2). Similarly, S_2(T,X_1,X_2) is Σ_n-1(T^2,b_2X_1^2,b_2X_2^2). But deg(S_2) is at most deg(S_1) and n is at least 3, so this is impossible.
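For the reader's convenience, the elementary identity used above, Σ_n(x,y)-Σ_n(y,z)=(x-z)Σ_n-1(x,y,z), can be illustrated for n=2 (this is only an illustration; the general case is in the cited reference):
Σ_2(x,y)-Σ_2(y,z)=(x^2+xy+y^2)-(y^2+yz+z^2)=(x^2-z^2)+y(x-z)=(x-z)(x+y+z)=(x-z)Σ_1(x,y,z);
setting x=T^2, y=b_1X_1^2 and z=b_1X_2^2 recovers the factorization of E(T,X_1)-aR(X_1,X_2) displayed above.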
Now we prove our main theorem.
Every uniform bundle of rank 2n on ℚ^2n+1 and ℚ^2n+2(n≥ 3) splits.
We have shown that there is no unsplit uniform bundle of rank 2n on ℚ^2n+1 for n≥ 3. Now let E' be a uniform bundle of rank 2n on ℚ^2n+2. For every smooth hyperplane section ℚ^2n+1↪ℚ^2n+2, the restriction E'|_ℚ^2n+1 splits. Then we conclude by <cit.>.
Given a generalised Grassmannian X, we denote by μ(X) the maximum positive integer verifying that every uniform vector bundle whose rank is at most μ(X) splits. And we call μ(X) the splitting threshold for uniform vector bundles on X. For example, μ(ℙ^n) is n-1. The main theorem in <cit.> tells us that for most generalised Grassmannians X, μ(X) is equal to e.d.(VMRT) (see <cit.>). Note that for ℚ^2n+1 and ℚ^2n+2, their e.d.(VMRT) are all 2n-1.
(1) Theorem <ref> reveals that the splitting thresholds for uniform vector bundles on ℚ^2n+1 and ℚ^2n+2(n≥ 3) are at least 2n. Actually, ℚ^2n+1 and ℚ^2n+2(n≥ 3) are the first known examples such that μ(X) is bigger than e.d.(VMRT).
(2) The threshold μ(ℚ^2n+1) is 2n, as the tangent bundle T_ℚ^2n+1 is unsplit. Moreover, the splitting type of T_ℚ^2n+1 is (2,1,…,1,0).
In <cit.>, the authors classify the unsplit uniform bundles of minimal ranks on the generalised Grassmannians B_n/P_k (2≤ k < 2n/3), B_n/P_n-2, B_n/P_n-1 and D_n/P_k (2≤ k < 2n-2/3), D_n/P_n-3, D_n/P_n-2. As direct corollaries of Theorem <ref>, we can give further classification results for uniform bundles on B_n/P_k (2n/3≤ k ≤ n-3) and D_n/P_k (2n-2/3≤ k ≤ n-4).
Let X be B_n/P_k, where k is 2n/3 and k is at least 6. Let E be a uniform vector bundle on X of rank r.
* If r is smaller than k, then E is a direct sum of line bundles.
* If r is k, then E is either a direct sum of line bundles or E_λ_1⊗ L or E_λ_1^∨⊗ L for some line bundle L, where E_λ_1 is the irreducible homogeneous bundle corresponding to the highest weight λ_1.
Note that the e.d.(VMRT) of X is k-1(=2n-2k-1). By
<cit.>, the first assertion follows. For the case r=k, since 2n-2k+1=k+1 is at least 7, E|_ℚ^2n-2k+1 splits by Theorem <ref>. Suppose E is unsplit, then E|_ℙ^k is also unsplit. Then the second assertion follows from <cit.>.
Let X be B_n/P_k with 2n/3< k ≤ n-3. Every uniform bundle of rank 2n-2k on X splits.
If E is a uniform bundle of rank 2n-2k on X, then E|_ℙ^k splits, as 2n-2k is smaller than k. On the other hand, since k is at most n-3 and hence 2n-2k+1 is at least 7, E|_ℚ^2n-2k+1 splits by Theorem <ref>. Because any 2-plane in X is contained in a ℙ^k or ℚ^2n-2k+1, E splits by <cit.>.
Similar to the proofs of the above corollaries, we can prove the following results.
Let X be D_n/P_k, where k is 2n-2/3 and k is at least 6. Let E be a uniform vector bundle on X of rank r.
* If r is smaller than k, then E is a direct sum of line bundles.
* If r is k, then E is either a direct sum of line bundles or E_λ_1⊗ L or E_λ_1^∨⊗ L for some line bundle L, where E_λ_1 is the irreducible homogeneous bundle corresponding to the highest weight λ_1.
Let X be D_n/P_k with 2n-2/3< k ≤ n-4. Every uniform bundle of rank 2n-2k-2 on X splits.
§ CLASSIFICATION OF PRIMITIVE APPROXIMATE SOLUTIONS
We use methods in <cit.> to classify primitive approximate solutions. Let P(T,X_1)=∑_k=0^2np_kX_1^kT^2n-k be a primitive approximate solution.
If p_2n is 0, then P(T,X_1) is bT^2n for some rational number b.
Let S_0(T,X_1,X_2) be a symmetrical divisor of P(T,X_1)-R(X_1,X_2) such that for a 2(n+1)-th primitive unit root y_0, we have S_0(0,1,y_0)=0. Since y_0 is a simple root of P(0,1)-R(1,z), y_0 is also a simple root of S_0(0,1,z). Therefore, by the implicit function theorem, there is a germ of holomorphic function y(x) at a neighborhood of x=0 satisfying
S_0(x(1+y(x)),1,y(x))=0 and y(0)=y_0.
We now show by induction that for m=1,…,2n-1, we have y^(m)(0)=p_2n-m=0. As S_0 is symmetrical in X_1 and X_2, we have
P(x(1+y(x)),1)-R(y(x),1)=0,  (1)
P(x(1+y(x)),y(x))-R(y(x),1)=0.  (2)
By taking the derivatives of (<ref>) and (<ref>) at x=0 and note that p_2n is 0, we obtain
p_2n-1(1+y_0)-R'(y_0,1)y'(0)=0,  (1')
p_2n-1(1+y_0)y_0^2n-1-R'(y_0,1)y'(0)=0.  (2')
From R(y,1)(y^2-1)=y^2n+2-1, we have R'(y,1)(y^2-1)+R(y,1)·(2y)=(2n+2)y^2n+1. As R(y_0,1) is 0 and y_0 is not ± 1, R'(y_0,1) is (2n+2)y_0^2n+1/(y_0^2-1) (≠ 0). From (<ref>) and (<ref>), we get p_2n-1(1+y_0)(y_0^2n-1-1)=0. Since y_0 is primitive, y_0^2n-1-1 and 1+y_0 are not zero. We have p_2n-1=0. By (<ref>) and noting R'(y_0,1)≠0, y'(0) is 0.
For the case m≥ 2, we assume by induction that y^(m')(0)=p_2n-m'=0 for m'<m. Then the m-th derivatives of (<ref>) and (<ref>) satisfy the following equations:
m!p_2n-m(1+y_0)^m-R'(y_0,1)y^(m)(0)=0,  (1^m)
m!p_2n-m(1+y_0)^my_0^2n-m-R'(y_0,1)y^(m)(0)=0.  (2^m)
Since y_0 is primitive, y_0^2n-m-1 is not zero. We have y^(m)(0)=p_2n-m=0 as above.
If p_2n is 1, then P(T,X_1) is Σ_n(bT^2,X_1^2) for some b∈ℚ.
Let S_+(T,X_1,X_2) be a symmetrical divisor of P(T,X_1)-R(X_1,X_2) such that for a 2n-th primitive unit root y_0, we have S_+(0,1,y_0)=0. Note the equation Σ_n(p_2n-2T^2,X_1^2)-R(X_1,X_2)=(p_2n-2T^2-X_2^2)Σ_n-1(p_2n-2T^2,X_1^2,X_2^2) (see <cit.> for example). Let S_-(T,X_1,X_2) be Σ_n-1(p_2n-2T^2,X_1^2,X_2^2), then S_-(0,1,y_0) is also 0. Denote P(T,X_1) by P_+(T,X_1) and denote Σ_n(p_2n-2T^2,X_1^2) by P_-(T,X_1), we are going to show that P_+ equals P_-.
Let y_±(x) be germs of holomorphic functions satisfying
S_±(x(1+y_±(x)),1,y_±(x))=0 and y_±(0)=y_0.
As S_± is symmetrical in X_1 and X_2, we have equations:
P_±(x(1+y_±(x)),1)-R(y_±(x),1)=0,  (1_±)
P_±(x(1+y_±(x)),y_±(x))-R(y_±(x),1)=0.  (2_±)
Let p^+_2n-m (resp. p^-_2n-m) be the coefficient of X_1^2n-mT^m in P_+(T,X_1) (resp. P_-(T,X_1)). We prove p^+_2n-m=p^-_2n-m and y^(m)_+(0)=y^(m)_-(0) by induction for m (0≤ m ≤ 2n).
When m is 0, we have p_2n^+=p_2n^-=1, y_+(0)=y_-(0)=y_0. We next show p_2n-1^+=p_2n-1^-=0 and y'_+(0)=y'_-(0)=0. Taking the derivatives of (<ref>) and (<ref>) at x=0, we get
p_2n-1^±(1+y_0)-R'(y_0,1)y'_±(0)=0,  (1'_±)
p_2n-1^±(1+y_0)y_0^2n-1+2ny_0^2n-1y'_±(0)-R'(y_0,1)y'_±(0)=0.  (2'_±)
From R(y,1)(y^2-1)=y^2n+2-1, we have R'(y,1)(y^2-1)+R(y,1)·(2y)=(2n+2)y^2n+1. Note R(y_0,1)=1 (as y_0 is a 2n-th primitive unit root). Substituting it in the above equation, we have
2ny_0-y_0^2R'(y_0,1)=-R'(y_0,1).  (†)
Multiplying (<ref>) by y_0^2 and using the relation (<ref>), one has
p_2n-1^±(1+y_0)y_0-R'(y_0,1)y_±'(0)=0,  (y_0^2·(2'_±))
p_2n-1^±(1+y_0)-R'(y_0,1)y_±'(0)=0.  (1'_±)
Since y_0 is primitive, the determinant of the 2×2 matrix with rows (y_0(1+y_0), -R'(y_0,1)) and (1+y_0, -R'(y_0,1)), which equals (1-y_0)(1+y_0)R'(y_0,1), is not 0. Then one obtains p_2n-1^±=y_±'(0)=0.
When m is at least 2, by induction, we may assume p^+_2n-m'=p^-_2n-m' and y_+^(m')(0)=y_-^(m')(0) for m'<m. By taking the m-th derivatives of (<ref>) and (<ref>) at x=0, we have
m!p_2n-m^±(1+y_0)^m-R'(y_0,1)y^(m)_±(0)+T^1_±=0,  (1^m_±)
m!p_2n-m^±(1+y_0)^my_0^2n-m+2ny_0^2n-1y^(m)_±(0)-R'(y_0,1)y^(m)_±(0)+T^2_±=0,  (2^m_±)
where T^i_± are the remaining terms satisfying T^1_-=T^1_+ and T^2_-=T^2_+. For the case m=2, by construction, we automatically have p_2n-2^-=p_2n-2=p_2n-2^+. Then one obtains y_+^(2)(0)=y_-^(2)(0) from (<ref>). Suppose now m is at least 3, multiplying (<ref>) by y_0^2 and using (<ref>), we have
m!p_2n-m^±(1+y_0)^my_0^2n-m+2-R'(y_0,1)y_±^(m)(0)+y_0^2T^2_±=0.  (y_0^2·(2^m_±))
Note that the determinant of the 2×2 matrix with rows (m!(1+y_0)^my_0^2n-m+2, -R'(y_0,1)) and (m!(1+y_0)^m, -R'(y_0,1)), which equals m!(1-y_0^2n-m+2)(1+y_0)^mR'(y_0,1), is not zero for m≥ 3. By solving the system of linear equations {(<ref>), (<ref>)} (viewing p_2n-m^± and y_±^(m)(0) as indeterminates), we have p_2n-m^+=p_2n-m^- and y_-^(m)(0)=y_+^(m)(0).
Σ_n(T^2,X_1^2,X_2^2) is irreducible in ℂ[T,X_1,X_2].
It suffices to show that the variety V:={(T,X_1,X_2)|Σ_n(T^2,X_1^2,X_2^2)=0}⊂ℂ^3 is smooth on ℂ^3\{0}. By <cit.>, the variety defined by Σ_n(T,X_1,X_2)=0 is smooth on ℂ^3\ 0. Note that the map (T,X_1,X_2)↦ (T^2,X_1^2,X_2^2) is a local isomorphism outside the locus defined by TX_1X_2=0, V is smooth on ℂ^3\{TX_1X_2=0}.
Now suppose (t,u,v)( (0,0,0)) is a singular point of V, then one of t,u and v is 0. By symmetry, we assume that t is 0. Then there are equations
∂/∂ TΣ_n(T^2,X_1^2,X_2^2)|_(0,u,v)=∂/∂ X_1Σ_n(T^2,X_1^2,X_2^2)|_(0,u,v)=∂/∂ X_2Σ_n(T^2,X_1^2,X_2^2)|_(0,u,v)=0.
Furthermore, we have equations:
∂/∂ X_1Σ_n(T^2,X_1^2,X_2^2)|_(0,u,v)=∂/∂ X_1Σ_n(X_1^2,X_2^2)|_(u,v)=0,
∂/∂ X_2Σ_n(T^2,X_1^2,X_2^2)|_(0,u,v)=∂/∂ X_2Σ_n(X_1^2,X_2^2)|_(u,v)=0.
Without loss of generality, we may assume v is not 0. As Σ_n(u^2,v^2)(=Σ_n(0,u^2,v^2)) vanishes, u/v is a root of Σ_n(z^2,1)=0 with multiplicity at least 2.
But Σ_n(z^2,1)=(z^2n+2-1)/(z^2-1) has no multiple roots, which is a contradiction.
|
http://arxiv.org/abs/2409.02166v1 | 20240903180001 | Boundary SymTFT | [
"Lakshya Bhardwaj",
"Christian Copetti",
"Daniel Pajer",
"Sakura Schafer-Nameki"
] | hep-th | [
"hep-th",
"cond-mat.str-el",
"math-ph",
"math.CT",
"math.MP"
] |
|
http://arxiv.org/abs/2409.03178v1 | 20240905021100 | Void Number Counts as a Cosmological Probe for the Large-Scale Structure | [
"Yingxiao Song",
"Qi Xiong",
"Yan Gong",
"Furen Deng",
"Kwan Chuen Chan",
"Xuelei Chen",
"Qi Guo",
"Yun Liu",
"Wenxiang Pei"
] | astro-ph.CO | [
"astro-ph.CO"
] |
§ ABSTRACT
Void number counts (VNC) indicates the number of low-density regions in the large-scale structure (LSS) of the Universe, and we propose to use it as an effective cosmological probe. By generating the galaxy mock catalog based on Jiutian simulations and considering the spectroscopic survey strategy and instrumental design of the China Space Station Telescope (CSST), which can reach a magnitude limit ∼23 AB mag and spectral resolution R≳200 with a sky coverage 17,500 deg^2, we identify voids using the watershed algorithm without any assumption of void shape, and obtain the mock void catalog and data of the VNC in six redshift bins from z=0.3 to1.3. We use the Markov Chain Monte Carlo (MCMC) method to constrain the cosmological and VNC parameters. The void linear underdensity threshold δ_ v in the theoretical model is set to be a free parameter at a given redshift to fit the VNC data and explore its redshift evolution. We find that, the VNC can correctly derive the cosmological information, and the constraint strength on the cosmological parameters is comparable to that from the void size function (VSF) method, which can reach a few percentage level in the CSST full spectroscopic survey. This is because that, since the VNC is not sensitive to void shape, the modified theoretical model can match the data better by integrating over void features, and more voids could be included in the VNC analysis by applying simpler selection criteria, which will improve the statistical significance. It indicates that the VNC can be an effective cosmological probe for exploring the LSS.
Cosmology – Large-scale structure of Universe – Cosmological parameters
§ INTRODUCTION
Cosmic void, which is characterized by low density, large volume and linear evolution, is an important component of the cosmic large-scale structure (LSS). A large number of void samples can be obtained by galaxy surveys for studying the formation and evolution of the LSS and properties of dark energy and dark matter.
A variety of cosmological probes associated with voids have been proven to be very effective, such as the galaxy-void cross-correlation in redshift space <cit.> and the void size function <cit.>. Cosmic voids are suitable for studying modified gravity and massive neutrinos due to their large volume and low density <cit.>, and can also be used to measure baryonic acoustic oscillations (BAO) <cit.>. The great potential of voids has been shown in these existing studies, which can promote our research on the LSS of the Universe.
Here we propose to use void number counts (VNC) to explore the LSS and constraining the cosmological parameters. The VNC is the integral of the VSF over the void size at a given redshift. The VSF has been proven to be an effective method for constraining cosmological models in spectroscopic galaxy surveys, e.g. BOSS DR12 <cit.>. As a function representing the number density of voids at different scales at a given redshift, the VSF can illustrate the features and evolution of voids. The theoretical models of the VSF are also developed in current relevant researches, that usually assume the voids have spherical shape. One of the popular theoretical VSF models is represented by the Sheth and van de Weygaert model <cit.>, and it assumes that large voids grow from isolated small voids. And the SvdW model was later extended to the volume conserving model <cit.>, which assumes that large voids merge from small voids rather than evolve independently, and it is now one of the most widely used VSF model.
However, the shape of voids can be arbitrary and irregular, and the current void finders are usually based on the watershed algorithm <cit.>, without any assumption on the void shape. So it is necessary to trim the void catalogs for selecting the voids with spherical shape to match the theoretical model in current VSF studies, and recent works have trimmed void catalog in various degrees or improve existing theoretical models <cit.>. This obviously will exclude large numbers of voids, and dramatically lose statistical significance. On the other hand, if we consider the VNC, in principle, we do not need to trim the void catalog significantly, since the VNC is not as sensitive as the VSF to the void shape by integrating over the void size, and the theoretical model can be modified easily and may explain the data better. This will retain more voids in the analysis, and could improve the accuracy of cosmological constraint.
In order to investigate the feasibility of this method, we create mock galaxy and void catalogs based on simulations, and assume the China Space Station Telescope <cit.> as the instrument to perform the spectroscopic survey. We use the widely used watershed algorithm of the void finder and obtain the void effective radius, ellipticity, and volume weighted center needed in our analysis. Then we generate the mock data of the VNC in six redshift bins from z=0.3 to 1.3, and discuss two void samples with employing two selection criteria, i.e. applying empirical void size cut-off and the void ellipticity cut-off, respectively. We constrain the cosmological and void parameters by using the Markov Chain Monte Carlo (MCMC) method. We also compare the result to that from the VSF method given by <cit.> based on the same galaxy catalog, and demonstrate the feasibility of the VNC method.
The paper is organized as follows: In Section <ref>, we introduce the simulations for generating the mock galaxy and void catalogs; In Section <ref>, we discuss the computation of the theoretical model and generation of the mock data for the VNC; In Section <ref>, we show the constraint results of the cosmological and void parameters; The summary and conclusion is given in Section <ref>.
§ MOCK CATALOGS
§.§ Simulation
We use dark matter only N-body simulations, i.e. Jiutian simulations, to derive the mock galaxy and void catalogs. The Jiutian simulation we adopt covers a volume of 1 (h^-1Gpc)^3, and contains 6144^3 particles with a mass resolution of m_ p = 3.72 × 10^8 h^-1M_⊙. The simulation is run with the L-Gadget3 code, and uses friend-of-friend and subfind algorithm to identify dark matter halo and substructure <cit.>. It employs the fiducial cosmological model with Ω_m = 0.3111, Ω_b = 0.0490, Ω_Λ = 0.6899, n_s = 0.9665, σ_8 = 0.8102 and h = 0.6766 <cit.>.
The redshift space distortion (RSD) and structure evolution effects are also considered, by constructing simulation cubes with slices from the outputting snapshots at different redshifts in the redshift range of a simulation box. In our mock catalog, we trace the merger tree of each galaxy and locate the snapshot with the closest redshift to the distance of the galaxy. Rather than directly slicing and stitching snapshots by redshift, our method avoids repetition and omission of galaxies at slice boundaries. We do not use interpolation to accurately calculate the RSD effect, as interpolating between snapshots does not capture galaxy position and velocity information accurately. This can only affect small scales in non-linear regime, and will not change our results at the scales we are interested in, since we exclude voids smaller than 5 h^-1Mpc to avoid any impact on void identification and non-linear effect.
§.§ Galaxy mack catalog
We make use of the CSST spectroscopic galaxy survey as an example to construct the mock galaxy catalog. The CSST can simultaneously perform the photometric imaging and slitless spectroscopic surveys, covering 17500 deg^2 survey area and wavelength range 250-1000 nm. It has three spectroscopic bands, i.e. GU, GV and GI, with the spectral resolution R≳200. The magnitude limit for a band can reach ∼23 AB mag for 5σ point source detection.
We construct the mock galaxy catalog based on an improved Semi-Analytic Model <cit.>. The database contains luminosities of galaxy emission lines produced by post-processing as described in <cit.>, which can be used to select galaxies detected by the CSST spectroscopic survey according to the signal-to-noise ratio (SNR) and measure the galaxy redshift. Here four emission lines, i.e. Hα, Hβ, [OIII] and [OII], are considered. Compared to hydrodynamical simulations, this semi-analytic model is good enough for our analysis, which can correctly produce the luminosity functions of the emission lines we are interested in. The SNR per spectral resolution unit can be estimated by <cit.>
SNR=C_s t_exp√(N_exp)/√(C_s t_exp+N_pix[(B_sky+B_det)t_exp+R_n^2]),
where N_pix=Δ A/l_p^2 is the number of detector pixels covered by an object. Here Δ A is the pixel area on the detector, assumed to be the same for all galaxies for simplicity, and l_p = 0.074” is the pixel size. The point-spread function (PSF) is assumed to be a 2D Gaussian distribution with the radius of 80% energy concentration ∼0.3” in the CSST spectroscopic survey. N_exp is the number of exposures and t_exp is the exposure time, and we set t_exp=150 s and N_exp = 4. R_n = 5 e^-s^-1pixel^-1 is the read noise, and B_det = 0.02 e^-s^-1pixel^-1 is the dark current of the detector. B_sky is the sky background in e^-s^-1pixel^-1, which is given by
B_sky = A_eff∫ I_sky(ν)R_X(ν)l_p^2 dν/(hν),
where A_eff = 3.14 m^2 is the CSST effective aperture area, I_sky is the surface brightness of the sky background, R_X is the total throughput for band X including filter intrinsic transmission, mirror efficiency, and detector quantum efficiency. Here we estimate I_sky based on the measurements of earthshine and zodiacal light given in <cit.>. We find that B_sky=0.016, 0.196, and 0.266 e^-s^-1pixel^-1 for GU, GV and GI bands, respectively. C_s in Equation (<ref>) is the counting rate from galaxy, and for emission line i at frequency ν_i, we have
C_s^i = A_eff R_X(ν_i/(1+z)) F_line^i/[hν_i/(1+z)],
where F_line^i is the flux of the emission line i which can be obtained from the simulation.
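As a rough illustration of how the SNR and counting-rate relations above are combined, the following Python sketch evaluates the SNR of a single emission line; the throughput R_X, the object area Δ A (in arcsec^2) and the line flux are placeholder assumptions, not CSST specifications.

import numpy as np

# survey and instrument numbers quoted in the text
t_exp, N_exp = 150.0, 4        # exposure time [s], number of exposures
R_n, B_det = 5.0, 0.02         # read noise [e-], dark current [e-/s/pixel]
B_sky = 0.266                  # sky background [e-/s/pixel], GI-band value above
A_eff = 3.14                   # effective aperture area [m^2]
l_p = 0.074                    # pixel size [arcsec]
h_planck = 6.626e-34           # Planck constant [J s]

def line_snr(F_line, nu_rest, z, R_X=0.3, Delta_A=1.0):
    # R_X and Delta_A are assumed placeholder values (band throughput, object area in arcsec^2)
    nu_obs = nu_rest / (1.0 + z)                        # observed frequency
    C_s = A_eff * R_X * F_line / (h_planck * nu_obs)    # counting rate C_s^i as defined above; F_line in W/m^2
    N_pix = Delta_A / l_p**2                            # detector pixels covered by the object
    noise = np.sqrt(C_s * t_exp + N_pix * ((B_sky + B_det) * t_exp + R_n**2))
    return C_s * t_exp * np.sqrt(N_exp) / noise         # SNR formula above

# e.g. an Halpha line (rest-frame 656.3 nm) at z = 0.5 with an assumed flux of 1e-19 W/m^2
print(line_snr(F_line=1e-19, nu_rest=3e8 / 656.3e-9, z=0.5))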
We select galaxies if SNR≥10 for any emission line of the four lines Hα, Hβ, [OIII] and [OII] in any spectroscopic band to get the mock galaxy catalog. We find that the number density of galaxies are n̅ = 1.5× 10^-2, 4.3× 10^-3, 1.2× 10^-3, 4.6× 10^-4, 2.0× 10^-4, and 9.0× 10^-5 h^3 Mpc^-3 for the six redshift bins at the central redshift z_ c = [0.3,0.5,0.7,0.9,1.1,1.3], respectively.
We also calculate the mean galaxy separation (MGS), which indicates the typical distance between two galaxies. It can be calculated by the number of galaxies N_ g and the survey volume V_ s at different redshift bins, i.e. MGS =(V_ s/N_ g)^1/3, and the MGS value in each redshift bin is shown in Table <ref>.
To include the RSD effect, the galaxy redshift is estimated by considering both the galaxy peculiar motion and the accuracy of the CSST slitless spectral calibration. The observed redshift z combines the peculiar motion of the source z_pec and the cosmological redshift z_cos through 1+z = (1+z_cos)(1+z_pec) = (1+z_cos)(1+v_ loc/c), where v_ loc is the LOS component of the peculiar velocity. We also add a 0.2% error to each redshift as the uncertainty of the CSST slitless spectral calibration <cit.>.
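Explicitly, the observed redshift assigned to each mock galaxy can be generated as in the sketch below; applying the 0.2% calibration uncertainty as a Gaussian scatter in 1+z is our assumption about its exact form.

import numpy as np

c_kms = 299792.458               # speed of light [km/s]
rng = np.random.default_rng(42)

def observed_redshift(z_cos, v_los):
    # v_los: line-of-sight peculiar velocity [km/s]
    z = (1.0 + z_cos) * (1.0 + v_los / c_kms) - 1.0   # RSD from the peculiar motion
    return z + rng.normal(0.0, 0.002) * (1.0 + z)     # 0.2% slitless calibration uncertainty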
§.§ Void mock catalog
We identify voids in our mock galaxy catalog based on Voronoi tessellation and the watershed algorithm <cit.>, without any assumption of void shape. The Void IDentification and Examination toolkit[<https://bitbucket.org/cosmicvoids/vide_public/src/master/>] <cit.> is chosen to find voids, which is based on ZOnes Bordering On Voidness <cit.>. The low-density zones found by the watershed algorithm are further merged in , and the merging condition is set to be the boundary between two adjacent low-density zones < 0.2 n̅, where n̅ is the average tracer density. To avoid void-in-void cases, we use the low-density zones that are not merged before as the void mock catalog.
The voids we obtain are composed of cells, and each cell contains a galaxy as the particle tracer. Based on the cell volume V^i_cell, we can compute the void volume V and define the void effective radius R_v in Eulerian space as
V=∑ V^i_ cell=4/3π R_ v^3.
The void volume-weighted center 𝐗_v also can be estimated by the cell positions as
𝐗_v = 1/V∑^N_i𝐱_i V^i_ cell,
where 𝐱_i is the coordinate of the galaxy within a cell.
We evaluate the void shape by estimating the ellipticity ϵ, which can be derived by the smallest and largest eigenvalues of the inertia tensor, i.e. J_1 and J_3:
ϵ = 1 - (J_1/J_3)^1/4.
The inertia tensor is given by
I =
| I_xx  I_xy  I_xz |
| I_yx  I_yy  I_yz |
| I_zx  I_zy  I_zz |.
The diagonal and off-diagonal components of the inertia tensor can be calculated as I_xx= ∑_i=1^N(y^2_i+z^2_i) and I_xy = -∑_i=1^Nx_iy_i, where x_i, y_i and z_i denote the position of the galaxy in cell i relative to the center of the void. To avoid the effect of nonlinear evolution, we only keep the voids with R_ v>5 h^-1Mpc in the analysis, which can retain as many as "usable" voids in our analysis <cit.>. We also notice that this cut-off is only effective at low redshifts, since the minimum void radius R_ v^ min is greater than 5 h^-1Mpc at z>0.7 as shown in Table <ref>. We show the void ellipticity distribution from different redshift bins in Figure <ref>. We can find that most voids in the mock catalog are spherical-like, and the peak of the void ellipticity distribution is about 0.15 at all redshift bins. Besides, the voids in higher redshift bins with larger R_ v^ mean also have relatively lower ellipticity, i.e. more spherical-like voids reside at high redshifts.
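For concreteness, the three void properties defined above can be computed from the void-finder output as in the following minimal sketch, assuming the Voronoi cell volumes and member-galaxy positions of one void have already been extracted into arrays.

import numpy as np

def void_properties(cell_vol, pos):
    # cell_vol: (N,) Voronoi cell volumes; pos: (N, 3) member-galaxy positions
    V = cell_vol.sum()
    R_v = (3.0 * V / (4.0 * np.pi))**(1.0 / 3.0)              # effective radius
    X_v = (pos * cell_vol[:, None]).sum(axis=0) / V           # volume-weighted centre
    d = pos - X_v                                             # positions relative to the centre
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    # inertia tensor with the components defined in the text
    I = np.array([[np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
                  [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
                  [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)]])
    J = np.linalg.eigvalsh(I)                                 # eigenvalues in ascending order (J_1, ..., J_3)
    eps = 1.0 - (J[0] / J[-1])**0.25                          # ellipticity
    return R_v, X_v, eps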
Since the voids identified by the watershed algorithm can have arbitrary shapes, which will conflict with the void spherical evolution assumed in the theoretical model described in Section <ref>, we need to make further void selection for model fitting. To reduce the discrepancy between the void data and theoretical model, we further trim our void catalog in two cases, i.e. Case 1 and Case 2, to explore the performance of the VNC method. Since larger voids are usually more spherical, in Case 1, we further select the voids only based on the void size, and empirically choose the void radius range considering the average void radius R_ v^ mean at each redshift to retain large-size voids. We find that the lower limit of this range is about 2 times larger than R_ v^ mean at z=0.3, and ∼1.5 times larger at z>0.3. The void radius ranges for different redshift bins are shown in Table <ref>. In Case 2, we basically choose the lower limit of the void radius range to include R_ v^ mean (about 2.5×MGS) in a redshift bin, and select voids with ϵ<0.15 according to the peaks of the ellipticity distributions at different redshifts at ϵ≃0.15. We find that selecting higher ellipticity cut-off, e.g. ϵ∼0.2, may not obtain spherical enough voids and lead to worse model fitting result. For Case 2, it has larger radius ranges and contains smaller voids compared to Case 1.
§ VOID NUMBER COUNTS
Theoretically, the VNC are obtained by integrating the VSF over the void radius in a survey volume V_ S at a given redshift, and it can be calculated by
N_v(z)=V_ S∫_R_v^min^R_v^maxdn/dR_vdR_v.
By assuming that the volume fraction V dn is constant from Lagrangian space to Eulerian space <cit.>, the VSF can be estimated as
dn/dR_v = [ℱ(ν)/V(R_v)] (R_L/R_v) (dν/dR_L),
where R_L is the Lagrangian void radius. The peak height ν is decided by the void linear underdensity threshold of void formation δ_v and the root-mean-squared density fluctuation at different redshifts, which is given by ν=|δ_v|/σ_M(z),
where σ_M(z)=σ_0(R_ L)D(z) and D(z) is the linear growth factor. We obtain σ_M(z) using in our work <cit.>. For the void radius ranges we choose, ℱ(ν) can be approximated very well by <cit.>
ℱ(ν)=√(2/π) exp(-ν^2/2) exp[-(|δ_v|/δ_c)𝒟^2/(4ν^2)-2𝒟^4/ν^4].
Here 𝒟=|δ_v|/(δ_c+|δ_v|), and δ_c=1.686. δ_ v usually can be derived theoretically assuming spherical evolution, and it is found to be -2.731 in the ΛCDM model when the void matter density ρ_ v=0.2ρ̅_ m <cit.>. However, since the voids identified in our catalog do not assume any shape, we set δ_ v as a free parameter at a given redshift in the model fitting process, and it can be seen as an effective void linear underdensity threshold in our model. Then the relation between R_L and R_v is given by
R_L≃R_v/(1-δ_v/c_ v)^(c_ v/3),
where c_ v =1.594 <cit.>.
In addition, we also consider the RSD and Alcock-Paczyński <cit.> effects in the model by two factors, for relating the void radius in observational redshift and real spaces. Following <cit.>, we have
R_ v^ obs = f_ RSDf_ APR_ v.
Here f_ RSD = 1 -(1/6)βΔ(R_ v), where Δ(R_ v) = (1-δ_ v/c_ v)^(-c_ v)-1 <cit.>, β = f/b, f is the growth rate, and b is the tracer bias. We set β as free parameters at different redshift bins in the fitting process. f_ AP=α_∥^1/3α_⊥^2/3, where α_∥ = H^ fid(z)/H(z) and α_⊥ = D_ A(z)/ D_ A^ fid(z), which are the ratios of the Hubble parameters and angular diameter distances in the fiducial and real cosmologies.
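To make the model explicit, the sketch below assembles N_v(z) from the ingredients above. The functions sigma_RL(R_L, z) and its logarithmic slope are left as user-supplied inputs (e.g. from a standard structure-formation code), and the RSD/AP rescaling of the void radius is omitted for brevity, so this is only a schematic of the integration rather than the exact implementation used in this work.

import numpy as np
from scipy.integrate import quad

delta_c, c_v = 1.686, 1.594

def F_nu(nu, delta_v):
    # approximation to the void multiplicity function quoted above
    D = abs(delta_v) / (delta_c + abs(delta_v))
    return (np.sqrt(2.0 / np.pi) * np.exp(-0.5 * nu**2)
            * np.exp(-abs(delta_v) / delta_c * D**2 / (4.0 * nu**2) - 2.0 * D**4 / nu**4))

def vnc_model(z, delta_v, V_S, Rv_min, Rv_max, sigma_RL, dlnsig_dlnR):
    # sigma_RL(R_L, z) and dlnsig_dlnR(R_L, z) are assumed to be supplied by the user
    shrink = (1.0 - delta_v / c_v)**(c_v / 3.0)   # so that R_L = R_v / shrink
    def dn_dRv(Rv):
        RL = Rv / shrink
        nu = abs(delta_v) / sigma_RL(RL, z)
        V = 4.0 / 3.0 * np.pi * Rv**3
        dnu_dRL = -nu * dlnsig_dlnR(RL, z) / RL   # dnu/dR_L from nu = |delta_v|/sigma(R_L)
        return F_nu(nu, delta_v) / V * (RL / Rv) * dnu_dRL
    return V_S * quad(dn_dRv, Rv_min, Rv_max)[0]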
In Figure <ref>, we show the VNC mock data and best-fit theoretical curves for Case 1 (red) and Case 2 (green). We use jackknife method to derive the error bar of each data point. We can find that, as expected, the number of voids decreases as the redshift increases, and the void number in Case 1 is lower than Case 2 since more voids are excluded in Case 1 by applying narrower void radius ranges.
§ CONSTRAINT AND RESULT
We adopt χ^2 for constraining the model parameters, which takes the form as
χ^2 = ∑_z bins[N_v^data(z_i)-N_v^th(z_i)/σ^i_v]^2,
where N_v^data(z_i) and N_v^th(z_i) are the data and theoretical model of the VNC, respectively, and σ^i_v is the data error in the ith redshift bin. The likelihood function can be calculated by ℒ ∝ exp(-χ^2/2).
We constrain the free parameters using the Markov Chain Monte Carlo (MCMC) method with <cit.>. We choose 112 walkers and obtain 20000 steps for each chain. The first 10 percent of steps are discarded as the burn-in process. In our model we totally have 18 free parameters. The cosmological free parameters are the dark energy equation of state w, reduced Hubble constant h, the total matter density parameter Ω_m, baryon density parameter Ω_ b, spectral index n_ s, and amplitude of initial power spectrum A_ s. The free parameters about void δ_v^i is the threshold for void formation from the six redshift bins from z=0.3 to 1.3. And we also set the RSD parameter β^i in the six redshift bins. The flat priors of the free parameters are Ω_ m∈(0.1,0.5), Ω_ b∈(0.02,0.08), A_s/10^-9∈(1.0,3.0), n_ s∈(0.7, 1.2), w∈(-1.8,-0.2), h∈(0.5,0.9), and δ_v^i∈(-2,0) and β^i∈(0, 0.5) in the six redshift bins. And we set the fiducial value of cosmological parameter with w=-1, h = 0.6766, Ω_m = 0.3111, Ω_b = 0.049, n_s = 0.9665, and A_ s/10^9=2.1. Note that δ_v^i and β^i are not the input parameters in the simulation, and they do not have the fiducial values.
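Schematically, the fit amounts to the sketch below; the sampler is assumed here to be an ensemble sampler such as emcee (the specific tool used is the one cited in the text), and N_model, N_data and sigma_data are placeholders for the theoretical VNC of the previous section and the mock measurements.

import numpy as np
import emcee

z_bins = np.array([0.3, 0.5, 0.7, 0.9, 1.1, 1.3])
n_bins = len(z_bins)
N_data = np.zeros(n_bins)       # placeholder: mock VNC measurements per bin
sigma_data = np.ones(n_bins)    # placeholder: jackknife errors per bin

def N_model(z, theta):
    # placeholder: wraps the theoretical VNC integration of the previous section
    return 0.0

# parameter vector: [Omega_m, Omega_b, As/1e-9, n_s, w, h] + delta_v (6 bins) + beta (6 bins)
lower = np.array([0.1, 0.02, 1.0, 0.7, -1.8, 0.5] + [-2.0] * n_bins + [0.0] * n_bins)
upper = np.array([0.5, 0.08, 3.0, 1.2, -0.2, 0.9] + [0.0] * n_bins + [0.5] * n_bins)

def log_prob(theta, N_obs, sigma_obs):
    if np.any(theta < lower) or np.any(theta > upper):
        return -np.inf                                    # flat priors
    N_th = np.array([N_model(z, theta) for z in z_bins])  # theoretical VNC per redshift bin
    chi2 = np.sum(((N_obs - N_th) / sigma_obs)**2)
    return -0.5 * chi2

ndim, nwalkers, nsteps = 6 + 2 * n_bins, 112, 20000       # 18 free parameters in total
p0 = lower + (upper - lower) * np.random.rand(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(N_data, sigma_data))
sampler.run_mcmc(p0, nsteps, progress=True)
chain = sampler.get_chain(discard=nsteps // 10, flat=True)  # discard the first 10% as burn-in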
In Figure <ref>, we show the one-dimensional (1D) probability distribution functions (PDFs) and contour maps of the six cosmological parameters constrained by the VNC for Case 1 and Case 2 at 68% and 95% confidence levels (CL). The details of the constraint result of all the free parameters are shown in Table <ref>, containing the best-fit values, 1σ errors, and relative accuracies. We can find that the fitting results of the cosmological parameters are consistent with the corresponding fiducial values within 1σ CL. This means that our method of the VNC can derive the cosmological information correctly.
Besides, the constraint powers in Case 1 and Case 2 are similar for the cosmological parameters, which give the constraint accuracies as Ω_ m∼30%, Ω_ b∼50%, A_s/10^-9∼20%, n_ s∼20%, w∼27%, and h∼10%. Note that the constraint accuracy can be improved by about one order of magnitude, if considering the full CSST spectroscopic survey covering 17500 deg^2. We can find that this result is comparable to that using the VSF method based on the same simulation and galaxy catalog as shown in <cit.>. This is probably because that the method of the VNC is not as sensitive as the VSF method to the void shape. Hence, we can adopt simpler selection criteria, keep more voids in the cosmological analysis and include more redshift bins, i.e. six redshift bins from z=0.3 to 1.3 in this work while only four bins from z=0.5 to 1.1 in <cit.>, which can effectively improve the statistical significance. Besides, our theoretical modeling without fixing δ_ v also could match the data better, since detailed void features are integrated over in the VNC. All these advantages can make the VNC obtain stringent and accurate constraints on the cosmological parameters. Here we do not show the constraint result of β, since we find that the VNC is not very sensitive to β and there is no stringent constraint on it (with an accuracy >60%) for the current mock data we use.
In Figure <ref>, we plot the best-fit values and 1σ errors of δ_ v in the six redshift bins for Case 1 and Case 2, and the results from the VSF method given by <cit.> are also shown for comparison. The contour maps and and 1D PDFs of δ_ v^i are shown in Appendix <ref>. We can find that the values of δ_ v in all the three cases have significant discrepancy compared to the theoretical values δ_ v≃-2.7, assuming the spherical evolution and simulation particles as tracers <cit.>. Since our voids are identified by the watershed algorithm without any assumption of void shape and adopting galaxies as tracers, the current result is reasonable.
We also notice that the values of δ_ v at different redshifts in the three cases have a similar trend, which are approaching to 0 when redshift increases. This means that, as expected in the linear evolution, the Eulerian void size R_ v will be close to the Lagrangian void size R_ L at high redshifts (see Eq. (<ref>)). In addition, the result of Case 1 is more consistent with that from the VSF method in z=0.5-1.1, since the two analyses select similar ranges of void radius. On the other hand, the values of δ_ v in Case 2 are always lower than the other two cases, especially at z<0.6. This is because the value of δ_ v can reflect the merging of small voids into large voids in the V dn model. Typically, a smaller δ_ v means the average void radius in a sample is smaller <cit.>. Since the lower limit of the void radius range in Case 2 is smaller than that in Case 1, Case 2 contains more small voids used in the analysis. This will suppress the average void radius in Case 2 and lead to a smaller δ_ v compared to Case 1, especially at low redshifts. If choosing similar radius ranges for Case 1 and Case 2, we find the values of δ_ v will be closer in these two cases.
§ SUMMARY AND CONCLUSION
In this work, we propose to use the VNC as a cosmological probe for studying the LSS. To check the feasibility, we generate the galaxy mock catalog based on Jiutian simulations and the CSST spectroscopic galaxy survey, and identify voids by VIDE without assuming void shape. The mock void catalog and data of the VNC are then derived at the six redshift bins from z=0.3 to 1.3 in two cases, i.e. Case 1 and Case 2, by using empirical void radius ranges and considering the void ellipticity, respectively. We also set δ_ v as a free parameter at a given redshift in the theoretical model for better fitting the mock data and studying its redshift dependency and evolution. The RSD and AP effects are also considered in the analysis. Then we perform a joint fit of the cosmological, void and RSD parameters using the mock VNC data by the MCMC method.
We find that both Case 1 and Case 2 with different selection criteria lead to similar results. For the constraints on the cosmological parameters, the VNC can correctly derive the cosmological information, that the constraint power is comparable to the VSF, and can provide a few percentage level constraints on the cosmological parameters in the CSST spectroscopic survey. This is due to that the VNC is insensitive to void shape, and more voids and redshift bins can be kept by simpler selection criteria in the analysis, which could effectively improve the statistical significance. The theoretical model of the VNC also can be effectively modified by assuming a free δ_ v at a given redshift, and can match the data better than the VSF by integrating over the void size. For the constraint on the void linear underdensity threshold δ_ v, the results at different redshift bins from the VNC method have a similar trend as that from the VSF method, i.e. it becomes larger and larger and close to zero when redshift increases. The value of δ_ v is also dependent on the chosen void radius range at a given redshift. All of these indicate that the VNC can be a feasible and effective probe in cosmological studies.
§ ACKNOWLEDGEMENTS
YS and YG acknowledge the support from National Key R&D Program of China grant Nos. 2022YFF0503404, 2020SKA0110402, and the CAS Project for Young Scientists in Basic Research (No. YSBR-092). KCC acknowledges the support the National Science Foundation of China under the grant number 12273121. XLC acknowledges the support of the National Natural Science Foundation of China through Grant Nos. 11473044 and 11973047, and the Chinese Academy of Science grants ZDKYYQ20200008, QYZDJ-SSW-SLH017, XDB 23040100, and XDA15020200. QG acknowledges the support from the National Natural Science Foundation of China (NSFC No.12033008). This work is also supported by science research grants from the China Manned Space Project with Grant Nos. CMS- CSST-2021-B01 and CMS-CSST-2021-A01.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ MCMC RESULTS FOR δ_ V
In Figure <ref>, we show the contours at 68% and 95% CL and 1D PDFs of the linear underdensity thresholds for void formation δ_v^i in the six redshift bins for Case 1 (red) and Case 2 (green). Because δ_v^i is not the input parameter in the simulation, it does not have the fiducial value. The details of the best-fit values, 1σ errors, and relative accuracies for the six δ_v^i are also shown in Table <ref>.
|
http://arxiv.org/abs/2409.02304v1 | 20240903212928 | Wikipedia in Wartime: Experiences of Wikipedians Maintaining Articles About the Russia-Ukraine War | [
"Laura Kurek",
"Ceren Budak",
"Eric Gilbert"
] | cs.SI | [
"cs.SI"
] |
[email protected]
University of Michigan
USA
[email protected]
University of Michigan
USA
[email protected]
University of Michigan
USA
§ ABSTRACT
How do Wikipedians maintain an accurate encyclopedia during an ongoing geopolitical conflict where state actors might seek to spread disinformation or conduct an information operation? In the context of the Russia-Ukraine War, this question becomes more pressing, given the Russian government’s extensive history of orchestrating information campaigns. We conducted an interview study with 13 expert Wikipedians involved in the Russo-Ukrainian War topic area on the English-language edition of Wikipedia. While our participants did not perceive there to be clear evidence of a state-backed information operation, they agreed that war-related articles experienced high levels of disruptive editing from both Russia-aligned and Ukraine-aligned accounts. The English-language edition of Wikipedia had existing policies and processes at its disposal to counter such disruption. State-backed or not, the disruptive activity created time-intensive maintenance work for our participants. Finally, participants considered English-language Wikipedia to be more resilient than social media in preventing the spread of false information online. We conclude by discussing sociotechnical implications for Wikipedia and social platforms.
Wikipedia in Wartime: Experiences of Wikipedians Maintaining Articles About the Russia-Ukraine War
Eric Gilbert
September 9, 2024 – Version 1.0
==================================================================================================
§ INTRODUCTION
Wikipedia is one of the great successes of peer production and an essential information source on the internet <cit.>. What happens, however, when Wikipedia encounters an ongoing, evolving geopolitical conflict? As Wikipedia is a community built on consensus, we seek to understand how Wikipedians adjudicate what belongs in a neutral encyclopedia article, when state actors on either side of a conflict might seek to promote their war-time messaging.
In the context of the Russia-Ukraine War, this question becomes even more intriguing. The Russian government has a long history of spreading disinformation and conducting information operations, extending back into the Soviet era <cit.>. The word disinformation is etymologically related to the Russian word dezinformatsiya, describing the intentional manipulation of the information environment to advance political goals <cit.>. Following the full-scale Russian invasion of Ukraine on February 24, 2022, an emerging body of scholarship is documenting how the Russian government has attempted to conduct information operations on various social media platforms, including Telegram and TikTok <cit.>. The challenge of state-sponsored information campaigns is heightened in the case of the Russia-Ukraine War—making it a useful case study among the contentious topic areas on Wikipedia.
Theoretically, Wikipedia is an attractive target for an information operation, given its high visibility online and reputation as an authoritative information source. In 2022, the Wikipedia article titled “Russian invasion of Ukraine” was the second most viewed article on the English-language edition of the online encyclopedia, garnering over 50.6 million views <cit.>. The article remained in the top 30 most viewed Wikipedia articles in 2023, with over 12.7 million views <cit.>. Wikipedia, however, is understudied in scholarly work on disinformation and information operations, with social media platforms often serving as the main object of inquiry. Furthermore, Wikipedia differs from social media in several important ways. First, social media sites prioritize social interaction and entertainment, while Wikipedia is engaged in knowledge production <cit.>. Second, Wikipedia is perceived as an authoritative source of information, unlike social media <cit.>. Third, Wikipedia is often served in the top results of an internet search.[<https://en.wikipedia.org/wiki/List_of_most-visited_websites>] A Wikipedia article containing disinformation will arguably have a bigger impact than a social media post containing the same false information.
§.§ Research questions
In this work, we ask the following:
RQ1: In the English-language edition of Wikipedia, did articles in the Russo-Ukrainian War topic area experience disruptive activity?
* RQ1a: If so, did Wikipedians perceive there to be evidence of any state actor information operations targeting the Russo-Ukrainian War articles?
RQ2: How well did the English-language edition of Wikipedia maintain articles related to the ongoing conflict of the Russo-Ukrainian War?
To answer these questions, we conducted a semi-structured interview study with expert Wikipedia editors. We recruited 13 Wikipedia editors who maintained articles related to the Russo-Ukrainian War in the English-language edition of Wikipedia. For our sampling strategy, we first identified editors who had contributed extensively to the article “Russian invasion of Ukraine,"[<https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine>] and then identified additional Wikipedians based on these editors' recommendations, i.e., snowball sampling. In the resulting sample, we heard from editors who had contributed to a wide range of the Russo-Ukrainian War articles as well as admins who assisted in moderating this contentious topic area.
§.§ Summary of findings
Across the 13 interviews, our participants agreed that the Russo-Ukrainian War articles in the English-language edition of Wikipedia encountered high levels of disruption from both Russia-aligned and Ukraine-aligned editors [RQ1]. Our participants described how the Russia-aligned and Ukraine-aligned editors employed similar disruptive tactics, but with differing informational goals. Disruptive activity came from new, unregistered accounts as well as experienced accounts, with the latter having a greater impact in undermining information integrity on Wikipedia. Newer accounts were seen spamming talk pages—where article improvements and content disputes are discussed—with unconstructive criticism and personal attacks. Participant 13 drew a parallel between the online conflict and the offline conflict: "People, just, they dig in deeper ... you get editors having trench warfare, just like it's actually happening." The more experienced accounts, meanwhile, exhibited a greater knowledge of Wikipedia's policies and processes. As such, some experienced editors engaged in wiki-lawyering—the misapplying of policies to one's benefit—to argue for articles to present their side in a more favorable light.
Our participants, however, did not perceive there to be conclusive evidence of a state-backed information operation targeting the Russo-Ukrainian War articles [RQ1a]. Participants did not consider there to be obvious signals of coordination, as encountered in other contested areas on Wikipedia. We explore potential reasons as to why this may be, given that social media platforms have been targeted by Russian state-aligned information campaigns. Regardless of whether a state actor was involved, responding to the disruptive activity was time-intensive and tedious for our participants. Several participants expressed feeling burnt out or frustrated. As Participant 08 noted, "I’m kind of willing to spend some time to do that, but I’m not willing to get kicked in the teeth for it."
To respond to this disruption, the English-language edition of Wikipedia had existing policies and processes at its disposal–honed in prior content disputes and contentious topics–and editors employed some of the strictest protections [RQ2]. For example, prior to the invasion, Wikipedia had already achieved consensus on disallowing most Russian state media for article citations: as Participant 07 put it, "we go by ‘how often this does this source tell outrageous lies?’" The response also involved applying extended confirmed (EC) page protections–which allow only editors with over 500 edits and a minimum of 30 days on Wikipedia to edit articles–to the entire Russo-Ukrainian War topic area: such restrictions are seen in only four other topic areas in the English-language edition. Participants agreed EC protections reduced disruptive editing, but did not fully eradicate it. State-backed or not, disruptive activity created a large mess.
Finally, participants described the English-language edition of Wikipedia as more adept than social media at impeding the spread of false information online. Participants considered this to be the case for the Russo-Ukrainian War, as well as other geopolitical conflicts. Participants noted that Wikipedia was an attractive target for an information campaign, just as social media is. Many thought that various aspects of Wikipedia—barriers to entry for editors and community-created policies—made the online encyclopedia resilient. Participant 12 explained, "Our processes come from the community and they’re operated by the community. At Twitter, Facebook, or wherever, you know, that’s all done largely top down." We conclude with implications other platforms may draw from Wikipedia's resilience.
§ RELATED WORK
Next, we review four branches of related research: 1) foundational literature on Wikipedia broadly; 2) work surveying contentious topics on Wikipedia; 3) prior research on state-sponsored information operations (SSIOs), in particular the Russian government's use of information operations; and 4) recent work on the Russo-Ukrainian War topic area on Wikipedia.
§.§ Wikipedia, the free encyclopedia
Wikipedia is one of the most successful examples of internet peer-production—where individuals self-organize to produce goods or services without managerial directives or monetary compensation <cit.>. Over the past two decades, the Wikipedia community created a variety of policies and guidelines to support the building of an online encyclopedia. Forte & Bruckman <cit.> describe how achieving consensus is a key organizing goal on Wikipedia. To facilitate consensus building among editors, Wikipedians have crafted community norms, formal policies, and technological infrastructure <cit.>. These community policies signal what the Wikipedia community considers important, both to external and internal stakeholders. In Should You Believe Wikipedia?, Bruckman <cit.> discusses how Wikipedia exemplifies the social construction of knowledge, where editors add content that is supported by the consensus of mainstream reliable sources, such as peer-reviewed research or reporting from reputable news organizations.
§.§ Contentious topics on Wikipedia
Yet, consensus does not arise without conflict. In Wikipedia parlance, a contentious topic is a topic that has experienced heightened levels of disruptive editing, and as such, admins are empowered to enact additional restrictions.[<https://en.wikipedia.org/wiki/Wikipedia:Contentious_topics>] Edit wars, where editors revert each others’ edits back and forth, involve heated arguments instead of constructive discussion to improve an article <cit.>. Edit wars often indicate a contentious topic, where factions of editors disagree over what content should be presented in the article, and such warring can negatively affect article quality. Yasseri et al. <cit.>, for example, found that across 10 language editions of Wikipedia, contentious topics prone to edit warring often involve “religion, politics, and geographical places.”
Hickman et al. <cit.> used both observational data and interviews with Wikipedia editors to understand how the contentious topic of the Kashmir region was being presented across Hindi, Urdu, and English-language Wikipedias. While they found differences in article coverage, organization, and editors’ approach to collaboration, they observed that across all three languages, editors strove to maintain a neutral point of view (NPOV)—a Wikipedia core content policy—and to prevent political agenda pushing. Kharazian, Starbird, & Hill <cit.> conducted a comparison of Serbo-Croatian Wikipedias to understand how governance capture by far-right editors occurred on the Croatian edition, but not the Serbian edition. Through the development of an “insular bureaucratic culture”, a small number of editors took over the governance structure of Croatian Wikipedia, dismantling neutrality-supporting policies and introducing instead far-right narratives and disinformation <cit.>.
§.§ State-sponsored information operations
An information operation describes an organized attempt to manipulate the information environment towards a strategic goal <cit.>. In the 21st century, governments and political actors have leveraged the internet as another medium through which to conduct such operations <cit.>. Information operations can involve the propagation of true, false, speculative, or misleading content – making it a broader term than disinformation, which describes the intentional spreading of falsehoods <cit.>. CSCW scholarship has emphasized the collaborative nature of information operations online: while such operations are typically orchestrated by governments, the participation of human crowds online, or "unwitting agents" <cit.>, can be central to the operation's success <cit.>.
The increasing prevalence of information operations online has led to a large body of scholarship on their detection. This detection work has predominantly focused on the social media platform X (formerly known as Twitter) and has explored various approaches to identify coordinated behavior online — an indicator of an information campaign <cit.>. Detection work related to Wikipedia is often focused on sockpuppet accounts — where an editor misuses multiple accounts, often to deceive other editors or evade bans <cit.>.
Our work offers a departure from much of the online information operations literature to date. Instead of attempting to quantitatively detect an information operation, we conduct qualitative interviews with platform moderators and users to understand their mental model of what might constitute an information operation on their platform — in this case the crowd-sourced encyclopedia Wikipedia.
§.§.§ Russian state-sponsored information operations and disinformation
The Russian government’s use of information operations and disinformation for domestic, near-abroad, and international audiences is well-studied. Russia’s focus on information as a strategic instrument can be traced to the `active measures' of the Soviet era, where intelligence services attempted to manipulate the information environment to influence political outcomes <cit.>. In the 21st century, Russia has continued the Soviet legacy of information manipulation using the scale and anonymity of the internet, often targeting post-Soviet and post-communist countries, such as Estonia, Latvia, Poland, and Ukraine <cit.>. Following the 2013-2014 Euromaidan protests and the ensuing invasion of Crimea, the Russian government created low-quality news sites and social media accounts to delegitimize Ukraine's interest in partnering closer with the European Union <cit.>.
Beyond Central and Eastern Europe, a Russian information operation targeted the 2016 U.S. presidential election, with the creation of fake accounts on Facebook and Twitter to engage in political debates and stoke division among U.S. voters <cit.>. Further investigation revealed a Russian state-supported organization known as the Internet Research Agency (IRA) was behind the 2016 U.S. election information operation <cit.>. A large body of research has sought to understand the activities of the IRA `trolls' during the 2016 U.S. election, studying the thematic content of the imposter social media accounts and the interactions that these inauthentic posts garnered from actual people <cit.>.
Following the 2022 Russian invasion of Ukraine, research is emerging to understand how Russia has attempted to manipulate the information environment in yet another geopolitical conflict. Current scholarship has analyzed Russian state media messaging on various social media platforms: Twitter <cit.>, Facebook <cit.>, Reddit <cit.>, Telegram <cit.>, and VKontakte <cit.>. Studies have sought to describe the messaging strategies of the Russian government <cit.>, as well as track how these narratives move across the internet <cit.>. Even with social media platforms placing restrictions on Russian state media accounts following the 2022 invasion, one study found that Russian state-aligned messaging continued to spread on Facebook and Twitter <cit.>.
Given that research on Russian disinformation and information operations around the war in Ukraine has focused primarily on social media platforms, we seek to extend the literature beyond social media to include knowledge-production platforms like Wikipedia.
§.§ The Russo-Ukrainian War on Wikipedia
Because the Russo-Ukrainian War is a recent geopolitical conflict, there is limited research on its coverage on Wikipedia. Roberts & Xiong-Gum <cit.> conducted a content analysis of the edit history of the article "Russian invasion of Ukraine" from February 24, 2022 to March 2, 2022. The authors described how editors acted as vandal fighters by reverting disruptive edits and argued that the editors' actions exhibit connective intelligence, whereby “editors connect with others toward a common goal” <cit.>. The work also identifies the article’s infobox as a site of conflict, where editors disagreed over which countries to include as belligerents, given the military equipment supplied to Ukraine by various countries.
Dammak & Lemmerich <cit.> conducted an observational study of articles related to the Russo-Ukrainian war on the Russian, Ukrainian, and English-language editions. In the initial week following the invasion, they found an increased revert rate across all language editions—an indication of elevated conflict and disagreement on these pages. The elevated levels of reverting, however, returned quickly to normal rates within two weeks, which the authors surmise was a result of additional editing restrictions and increasing consensus on the articles.
Prior to the 2022 invasion, Kozyr & Dubina <cit.> compared the Ukrainian and Russian language editions' coverage of the war in Donbas, which has been ongoing since Russia’s invasion of Crimea in 2014. They describe an “informational struggle” evident between the two Wikipedias, with article titles differing in their description of the conflict, e.g. "War in Eastern Ukraine" in the Ukrainian edition versus "Armed conflict in Eastern Ukraine" in the Russian edition.
Our paper contributes to the body of existing Wikipedia research on contested topics by conducting the first interview study of Wikipedians involved in the Russo-Ukrainian War topic area. In contrast to previous work in this space, we also investigate whether these articles were targeted by a state-sponsored information operation.
§ METHOD
We use a qualitative research design. We conducted 13 interviews with expert Wikipedians who edit articles about the Russia-Ukraine war. The study was approved by our university's Institutional Review Board (IRB). An interview study has several advantages in this context. First, as Wikipedia runs on a complex sociotechnical ecosystem of policies and processes created by the community, we wanted to hear from editors firsthand about how contentious topics are handled and which policies they employ. Second, to understand where the problem areas were, we considered it more effective to ask editors directly, rather than attempting to reverse-engineer the conflicts via edit history logs. Editors would be able to tell us the story behind the edit wars, where else to look, and who else to talk to. Third, interviewing editors would provide a sense of how the Russo-Ukrainian war topic area compares to other contentious topic areas.
§.§ Recruitment
We used a purposeful and snowball sampling strategy to recruit expert Wikipedia editors closely involved with articles related to the war in Ukraine. We selected the article "Russian invasion of Ukraine" (RIU) as our seed article, as it is a `parent' article which links to other `child' articles on the war.[<https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine>] We utilized the website XTools to collect the top 40 editors by edit count on the RIU article talk page—where content disputes and article improvements are discussed.[<https://www.mediawiki.org/wiki/XTools>] XTools provides statistical summaries of article history and editor activity on Wikipedia. With the list of 40 editors, we examined their editing history using both raw edit logs from Wikipedia and user contribution summaries from XTools.
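For readers who prefer to query the public MediaWiki API directly rather than use XTools, the following Python sketch shows one possible way to tally talk-page revisions per editor. It is an illustrative alternative to, not a reproduction of, our actual XTools-based procedure; the article title and the top-40 cutoff simply mirror the sampling step described above.

from collections import Counter
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def talk_page_edit_counts(article_title, top_n=40):
    # Tally revisions per editor on an article's talk page via the MediaWiki API.
    counts = Counter()
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": f"Talk:{article_title}", "rvprop": "user", "rvlimit": "max",
    }
    while True:
        data = requests.get(API_URL, params=params, timeout=30).json()
        for page in data["query"]["pages"].values():
            for rev in page.get("revisions", []):
                counts[rev.get("user", "hidden")] += 1
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the API's continuation tokens
    return counts.most_common(top_n)

# Example: top editors by talk-page edit count for the seed article
print(talk_page_edit_counts("Russian invasion of Ukraine"))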
Following our analysis of editor contributions, we narrowed our recruitment list to 22 editors who met the following criteria: (1) consistent engagement with the RIU article or other war-related articles (contributing edits month over month, following the influx of editing in the first weeks of invasion), (2) constructive discussion on the RIU talk page (as assessed by the first author), and (3) indication of willingness to be contacted. We considered an editor amenable to contact if their User page had an "Email this user" link and/or included text inviting contact from other editors. Of the 22 Wikipedia editors that met the above three criteria, 10 agreed to be interviewed. We asked these 10 editors for recommendations on who else to talk to, i.e., snowball sampling, resulting in three additional interviews.
§.§ Participants
With this combination of purposeful and snowball sampling, we conducted 13 interviews across five time zones. In our purposeful sampling, our coverage is high: 10 of the 22 criteria-passing editors agreed to be interviewed. For snowball sampling, when we asked participants who else to recruit for the study, our participants frequently mentioned each other (i.e., already interviewed Wikipedians). Among the recruited editors, most had contributed to a variety of Russo-Ukrainian War articles, beyond the main RIU article. In the resulting sample, we were able to hear from editors involved in a wide range of war-related articles as well as admins who moderate the Russia-Ukraine topic area. Table 1 lists the participants, their roles, and Wikipedia experience as summarized by number of edits and years on the platform. As Wikipedians do not supply demographic information to the wider internet, we did not collect demographic information to ensure the anonymity of our participants.
§.§ Interview procedure
All interviews were conducted by the lead author remotely over video conferencing software between October 2023 and January 2024. The length of interviews ranged from 57 minutes to 2 hours and 26 minutes, with an average of 1 hour and 33 minutes.[Interviewees often agreed to continue talking beyond the allocated time of 60 minutes.] Participants were compensated with $30 for participating in the interview. All interviews were recorded and transcribed using transcription software. The semi-structured interview questions were designed to draw out editors' experiences maintaining articles related to the war in Ukraine. Editors were asked (1) how they got involved in the Russo-Ukrainian War topic area, (2) what issues did the war-related articles face, and (3) how well did Wikipedia policies and processes support the resolution of these issues. See Appendix A for the full set of interview questions. After the first three interviews, the authors met to review the initial findings and revisit the interview protocol. We had begun to notice diverging opinions among editors as to how Wikipedia was responding to the contested war-related articles. We updated the protocol to include questions to further understand (4) what each editor's focus was on Wikipedia (e.g., Eastern Europe versus Military History) and (5) how editors' judged the intent of disruptive accounts.
§.§ Data analysis
The first author reviewed each auto-generated transcript and made corrections as needed via the transcription service Otter.AI. The interview transcripts were analyzed using three rounds of qualitative coding inspired by thematic analysis – an inductive, bottom-up approach which considers multiple observations across the collected data to derive codes and higher-level themes <cit.>. Using MAXQDA software, the first author conducted open coding and In Vivo coding for the first round. In the second round, focused coding was employed to identify emerging concepts from the initial codes. Examples of second-round codes included "main versus periphery articles," "articles stabilize over time," and "assume good faith culture." First and second round coding proceeded in an iterative fashion, shifting between data collection and qualitative analysis as common concepts coalesced from participant responses. The first author engaged in memo writing to keep track of key themes and patterns.
The iterative rounds of open coding resulted in hundreds of codes. The rounds of focused coding synthesized the initial codes into roughly 100 concepts across the 13 interviews. In the third round of analysis, the first author constructed an affinity map of the focused codes using MAXQDA. The affinity map distilled the focused codes into higher-level themes, such as "Activities of Russia-aligned editors," "Activities of Ukraine-aligned editors," "Disruption on Wikipedia," "Coordinated activity on Wikipedia", "Reliable sources policy," and "Page protections policy." All authors then met to synthesize the themes most central to the research questions. Finally, the authors structured the central themes into three high-level findings, each with several sub-findings nested beneath. Each high-level finding corresponds to a research question: RQ1, RQ1a, and RQ2.
§.§ Methodological limitations
Our sampling strategy and qualitative approach also have limitations. As a purposeful/snowball design, we can make limited claims about the representativeness of the reports given by our interviewees. Second, our qualitative design does not permit large-scale inferences about how Wikipedia deployed its sociotechnical architecture in a time of conflict.
§ FINDINGS
§.§ In the English-language edition of Wikipedia, Russo-Ukrainian War articles encountered disruption from both Russia-aligned and Ukraine-aligned editors.
In answer to RQ1, the 13 Wikipedia editors we interviewed described how the RIU article and other articles related to the Russo-Ukrainian war experienced high levels of disruptive editing from both Russia-aligned and Ukraine-aligned editors. The participants, however, did not perceive there to be evidence of a state-backed information operation from either Russia or Ukraine. Wikipedia has established norms around what is considered constructive editing behavior, which are encapsulated in policies, guidelines, and explanatory essays. The behavioral guideline of disruptive editing considers disruption to include: persistent pushing of a point of view, failure to cite reliable sources, failure to build consensus with other editors, and ignoring community feedback.[<https://en.wikipedia.org/wiki/Wikipedia:Disruptive_editing>] In our interviews, participants described encountering users who engaged in disruptive editing to push both Russia-aligned and Ukraine-aligned points of view onto articles related to the Russo-Ukrainian War.
Participants described how disruptive editing occurred shortly after the RIU article's creation. Disruptive editors vandalized the infobox template parameters, changing the conflict title from "2022 Invasion of Ukraine" to "2022 Liberation of Ukraine." Other disruptive edits targeted the infobox's conflict status: one editor changed the status to "Resolved," while another editor added extraneous text aligned with Russian state messaging that described the conflict as "a military occupation with the goal of demilitarization and denazification." Within 12 hours, the RIU article was protected so that only editors with over 500 edits and over 30 days on Wikipedia could edit the article, known as extended confirmed protection (ECP).[<https://en.wikipedia.org/wiki/Wikipedia:Protection_policy>] Page protections will be discussed in a later section, as a central tool used by Wikipedia editors to maintain contested articles.
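As a simple illustration, the extended confirmed rule quoted above reduces to two thresholds: an edit count above 500 and an account older than 30 days. The sketch below is purely illustrative and is not Wikipedia's actual implementation of the user right; the threshold values are taken from the policy as described here.

from datetime import datetime, timezone

def is_extended_confirmed(edit_count, registration_date, now=None,
                          min_edits=500, min_days=30):
    # Illustrative check of the extended confirmed criteria:
    # at least `min_edits` edits and an account at least `min_days` old.
    now = now or datetime.now(timezone.utc)
    account_age_days = (now - registration_date).days
    return edit_count >= min_edits and account_age_days >= min_days

# Example: an account registered on 1 March 2022 with 120 edits
print(is_extended_confirmed(120, datetime(2022, 3, 1, tzinfo=timezone.utc)))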
Following the article page protections, participants recalled how disruptive editing continued on the talk pages, which are intended as a space to resolve disagreements over article content. Both Russia-aligned and Ukraine-aligned editors used talk pages to push their point of view, rather than constructively debate article improvements. Participant 05 noted, "... you get a lot of inexperienced editors with opinions that are very pro-Russian or very pro-Ukrainian. Not neutral. And make drive by statements". Participant 07 recalled:
"It became a battlefield. As obviously, pro-Russian editors were trying to say how brilliantly the Russians were doing. Pro-Ukrainian editors were saying how badly the Russians are doing."
This state-aligned point of view pushing on both sides was in violation of neutral point of view—one of Wikipedia's core content policies—which requires that articles "must not take sides, but should explain the sides".[<https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view>]
The scale of disruptive editing can also be seen in the length of the talk page archives. Prior research has shown that the size of talk page archives can indicate conflict on Wikipedia <cit.>. Once a disagreement has been resolved, editors are expected to archive the discussion, allowing the talk page to remain navigable and present only current debates. The longer the talk page archives, the more disagreements that have occurred. Participant 07 described how the RIU article accumulated over five talk page archives only one week into the conflict: "We have basically pretty much ... almost a live update." Participant 04 recalled archiving a large number of talk page discussions "because of the sheer volume of people coming in."
§.§.§ Russia-aligned editing activity differed from Ukraine-aligned editing activity in terms of goals, but not necessarily in terms of tactics.
Participants reported that both Russia-aligned and Ukraine-aligned accounts engaged in disruptive editing on articles related to the war. We find that both types of accounts tended to employ similar disruptive tactics, while the respective informational goals of their activity differed. We discuss two tactics in particular – wiki-lawyering and creation of low-quality articles – and compare Russia-aligned goals to Ukraine-aligned goals.
Tactic: Wiki-lawyering The term wiki-lawyering is used by the Wikipedia community to refer to the misapplying of policies to one's benefit.[<https://en.wikipedia.org/wiki/Wikipedia:Wikilawyering>] Our participants explained that a clear example of wiki-lawyering would be to use a certain interpretation of a policy in one dispute and then use a different interpretation of the same policy in another dispute. The phenomenon of wiki-lawyering has only been given passing attention in existing work <cit.>, so we discuss it in greater detail here.
Example of Russia-aligned wiki-lawyering: From the examples provided by our participants, we found that Russia-aligned accounts appear to use wiki-lawyering to delegitimize Ukraine as a nation state. Participant 06 described how Russia-aligned editors would try to insert the full title for Russian president Vladimir Putin, but not for Ukrainian president Volodymyr Zelenskyy, instead updating the RIU article to read "Zelenskyy." Participant 06 recounted:
"I spent many days trying to put an equivalence on the [article]. I gave up eventually. Again, it was wiki-lawyering ... `Once you've used the word president once, you don't use it a second time. There's no need.' It's yeah ... a delegitimization."
Other editors recalled that Russia-aligned editors used wiki-lawyering on the Azov Brigade article to describe the Ukrainian volunteer military unit as definitively neo-Nazi affiliated, while attempting to minimize the neo-Nazi affiliations of the Wagner Group, the Russian state-affiliated private military company. Participant 07 remembered, "There was a different standard being applied to Wagner Battalion as opposed to the Azov Battalion ... there was a greater willingness to say that the Azov Battalion were Nazi and not the Wagner Battalion."
Participant 08 described wiki-lawyering in the article titled "Sexual violence in the Russian invasion of Ukraine." Several Russia-aligned editors argued repeatedly for the article to caveat negative reports about the Russian army, while advocating to include one report of sexual violence committed by the Ukrainian army. The Russia-aligned editors raised these issues several times in the article's talk page, citing Wikipedia policies such as neutral point of view and verifiability in attempt to support their case.[<https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view>][<https://en.wikipedia.org/wiki/Wikipedia:Verifiability>] Participant 08 recalled, "I don't see how anybody can in all good faith, you know, wiki-lawyer an article into saying that."
Example of Ukraine-aligned wiki-lawyering: Multiple participants mentioned the article "Battle of Bakhmut" as a site of conflict between Ukraine-aligned and Russia-aligned editors, especially after Russia's capture of the administrative boundaries of the city in May 2023.[<https://en.wikipedia.org/wiki/Battle_of_Bakhmut>] On the talk page, editors debated whether the status of the battle should remain as "ongoing". Several Ukraine-aligned editors argued that reliable sources did not describe the battle as an outright Russian victory and that fighting continued in the city's outskirts. In response to the battle's status remaining as "ongoing", the talk page received an influx of criticism from Russia-aligned accounts. Participant 01 recalled, "It attracted a lot of pro-Russians on the talk page…saying that you just cannot accept Russian victory, that you are pushing Western propaganda." Several participants considered that part of the issue was that Western reliable sources were at times sympathetic to Ukraine. Participant 09 commented, "There is a persistent pro-Ukrainian bias in a lot of places where, with like the counter offensive ... it'll fail, and Wikipedia cannot say that, because sources don't say it. They don't want to say it."
Another example of Ukraine-aligned wiki-lawyering occurred in the article for Donetsk, a city in eastern Ukraine, which has been occupied by Russian-backed militants since 2014. In September 2022, Russian president Vladimir Putin announced the annexation of Donetsk and the surrounding region—an act decried as illegal by Ukraine and other nations <cit.>. Despite this political turmoil, Participant 10 noted that in the Donetsk article, the infobox listed the country as Ukraine without mentioning Russia. In the first paragraph, Russia's 2014-present occupation is mentioned, but not the 2022 annexation, which is mentioned only later in the article. In the article's talk page, one editor requested that the infobox display the de facto country as Russia and the de jure country as Ukraine—a compromise seen in the article for the city of Sevastopol in Crimea. In response, one Ukraine-aligned editor argued (1) that there were no reliable sources to support this edit and (2) that the Wikipedia Manual of Style advises that an article's first sentence should not be overloaded with information. Using a content policy and a manual of style guideline, the Ukraine-aligned editor prevented the mentioning of the 2022 Russian annexation.[<https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Lead_section#First_sentence>] Participant 10 recalled:
"They claim ... there are no reliable sources saying [Donetsk] is annexed because Russian sources on that cannot be reliable. Which to some extent makes sense. But I mean, at the end of the day, right, we are there to provide information. We are not there to fight on technicalities. And that sometimes people forget."
Tactic: Creating articles that do not meet Wikipedia's standards Wikipedia uses the idea of notability—topics with sizable coverage by reliable sources—to determine what warrants an article on the online encyclopedia.[<https://en.wikipedia.org/wiki/Wikipedia:Notability>] Additionally, Wikipedia is not intended to act as a newspaper or a public relations agency.[<https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not>] Participants recounted how Russia-aligned and Ukraine-aligned accounts created articles that did not meet Wikipedia's standards.
Example of Russia-aligned low-quality articles Participant 09 and Participant 13 described a Russia-aligned editor who engaged in a pattern of creating low-quality articles for small battles and skirmishes in the Russo-Ukrainian War. The articles would contain extraneous details, employ the Russian spelling of geographical places, and often mis-cite sources. When others sought to delete or merge these articles, the Russia-aligned editor would undo their actions. Participant 09 noted that responding to these low-quality articles, especially checking poor citations, was a time-consuming task. Another Russia-aligned editor created the article "Alley of Angels" describing a war monument in Russia-occupied Donetsk. The article was only three sentences long, provided no citations, and claimed that the Ukrainian military had killed over 500 children. After being approved for deletion in October 2022, the page was soon recreated in November 2022.[<https://en.wikipedia.org/wiki/Talk:Alley_of_Angels>] At a UN Security Council briefing in December 2022, Russian Ambassador Vasily Nebenzya used the article's deletion to claim that the West is concealing the truth of the war in Ukraine <cit.>. Participant 09 recalled, "After he made that statement, there was an influx of like IP accounts to the talk page of the deleted article, saying, `Why is there no article? Nothing will ever be good enough for you.'" This article was ultimately removed in February 2023.[<https://en.wikipedia.org/w/index.php?title=Alley_of_Angels&redirect=no>]
Example of Ukraine-aligned low-quality articles Ukraine-aligned editors, meanwhile, attempted to document each moment of the war. In the RIU article, Ukraine-aligned editors would provide daily updates on the unfolding invasion. These updates grew so numerous that editors decided to split them out into separate timeline articles, with six such articles created by December 2023. Participant 07 remembered:
"So every single time a Russian tank was knocked out, it'd be put in the article ... Every time the Russians took a village, it'd be put in the article."
Several participants considered such content to not meet Wikipedia's standard of notability, given its similarity to live updates from a news outlet and use of emotional tone. Participant 06 noted that with the iterative cycles of editing, the articles would attain a more sober tone: "In the fullness of time, all of that trivia ... all of that emotion gets whittled out."
§.§ Wikipedia editors did not consider the Russo-Ukrainian War disruption to be part of a state actor information operation.
In answer to RQ1a, the 13 Wikipedia editors we interviewed did not perceive there to be evidence of a state-backed information operation from either Russia or Ukraine. Participants overall were hesitant to label the disruptive editors as part of a state-backed information campaign. This hesitancy extended to the Russia-aligned accounts, despite reports of Russian state media narratives circulating on social media as well as Russia's track record of waging information campaigns against Ukraine since the 2014 invasion of Crimea <cit.>. Participants did not perceive the Russia-aligned accounts to be coordinated enough, as seen in other contested topic areas on Wikipedia, e.g., Israel-Palestine, Nagorno-Karabakh, or India-Pakistan. Participant 04 described the editing activity on the RIU article as such:
"It didn't really feel coordinated ... If they were part of some kind of government, they didn't seem to be well coordinated or run ... like, if we're talking about an organized disinformation campaign, I would say at best it would be a disorganized information campaign."
§.§.§ Disruptive activity in Russo-Ukrainian War articles did not resemble other activity considered to be likely state-backed coordination on Wikipedia.
Participant 06 recalled working on articles related to the Nagorno-Karabakh conflict between Armenia and Azerbaijan and described crude coordination from the Azeri side. The most clear signals of coordination included recruiting potential editors on Facebook, copy-pasting wording from government press releases, and uploading high-quality photos of the President of Azerbaijan which presumably could only have been taken by an official press pool. Additional signals of coordination involved an “unlimited supply of editors on the Azeri side” who did not mind being banned or blocked, “a whole scale Azerification of names, places, everything” on even the most obscure of articles, and “cloying praising” of the Azerbaijani government. In comparing Azeri-aligned activity with Russia-aligned activity, Participant 06 noted the Russia-aligned editors did not engage in "hero worship" of the Russian government. Additionally, while Russia- and Ukraine-aligned editors engaged in edit warring over the spelling of geographic place names as Azeri- and Armenian-aligned editors did, Participant 06 considered only the Azeri-aligned disruptive editing to be “relatively transparent, bad coordination of state actors.” Participant 06 added:
"I'm 100% convinced that there were state actors involved in the Artsakh [Nagorno-Karabakh] campaign. I'm not so convinced on Russia."
Participant 10 suspected Russia-aligned editors active in 2014-2016 of being paid, given their editing exclusively during working hours, poor English skills, and appearance of having a “clear agenda” from which they did not deviate. Paid editing on Wikipedia must be disclosed. Otherwise, the offending accounts can be indefinitely blocked. Wikipedia maintains a list of paid editing companies known for violating this Wikipedia policy.[<https://en.wikipedia.org/wiki/Wikipedia:List_of_paid_editing_companies>] The 2014-2016 Russia-aligned editors would consistently go from article to article to make an identical edit, such as changing Donetsk from a Ukrainian city to a Russian city. When these editors were confronted about their disruptive behavior, they would not engage in talk page discussions to defend their edits, nor would they protest if topic banned or blocked. Another account would simply appear and begin making similar edits. Participant 10 contrasted this with Ukraine-aligned editors who would protest an account block and attempt to repeal it:
"But yeah, [Ukraine-aligned editors in 2014-2016] were very annoying. [Russia-aligned editors in 2014-2016] were not annoying. I mean, they were just doing their job. You block, you revert, they don't come back ... You see that one side is editing because they ... feel that they should edit. Another side is editing because they're just paid to edit."
By contrast, Participant 10 did not consider the disruptive editing occurring on the Russo-Ukrainian war articles following the 2022 invasion to be from paid editors on either the Russia-aligned or Ukraine-aligned side.
§.§.§ Wikipedia editors suspect that many of the Russia-aligned accounts could be individuals who consume state media as opposed to state-paid trolls.
Many of the participants that we interviewed acknowledged that Wikipedia is an attractive target for information manipulation by governments, companies, and other actors. Often cited was Wikipedia's high placement in Google search results. Participant 08 noted:
"If you're looking at it from the point of view of a search engine optimization, you really can't do better than Wikipedia. Because if you want to control a narrative, Wikipedia articles are usually the first or the second, you know, thing returned."
While several participants surmised that the Russian or Ukrainian governments could have been involved in the disruption surrounding the war-related articles, none said that they had any direct evidence of such involvement. Participants, instead, tended to assess the disruptive Russia-aligned editors as more likely to be private individuals with nationalistic, pro-Russia views rather than paid, state-backed trolls. Participants considered discerning the intent of a disruptive editor to be difficult, often surmising that Russia-aligned editors might be individuals who consume primarily Russian state media.
* P01: "I am inclined to think that this is just individual, independent people who have a pro-Russian position and who genuinely doubt certain things that are said to happen on the war, which is very clear proof of the effect of Russian disinformation."
* P02: "I would say it's very hard to distinguish between actors who are probably being financed by the Russian state to insert misinformation, and people who are maybe just watching Russia Today and being a bit brainwashed."
* P06: "I think it may have been just, you know, ultra nationalism, but private, ultra nationalism."
* P07: "Whereas a lot of Russian users, obviously because they've been fed the state media, do definitely see themselves as being the victims of Western propaganda."
* P08: "The people that are arguing for Russia ... I think some of it is simply that they consume different media. You know, they don't believe the Western media."
* P09: "I wouldn't say that they're like a Russian bot, that like they are working for the Internet Research Agency. But I would say this is bad faith. I would say that they know they're not in line with policy…But to me this just seems like a rage that [Wikipedia] is supposedly biased against Russia and knowingly breaking the rules of the website to try to remedy that."
Wikipedia's behavioral guideline "Assume good faith" encourages editors to assume others are acting in good faith to build an encyclopedia, even if their editing is disruptive.[<https://en.wikipedia.org/wiki/Wikipedia:Assume_good_faith>] Participants frequently referred to this norm of assuming good faith when describing the disruptive editing they had encountered. Participant 04 noted that on talk pages of Russo-Ukrainian War articles, disruptive editors often criticized the article without proposing any improvements. Participant 04 considered that these editors were either unaware of Wikipedia's policies or that they did not "really care how the encyclopedia works." Other participants similarly described disruptive editors as new and inexperienced.
For disruptive editors who continue disrupting even after being warned, the Wikipedia community has developed a lexicon to describe various problematic behaviors. One of these terms is "point of view pushing" or "POV pushing," referring to neutral point of view—one of Wikipedia's core content policies.[<https://en.wikipedia.org/wiki/Wikipedia:NPOV_dispute#POV_pushing>] Another term is "single purpose account," which describes an account whose editing is concentrated on a small set of articles and appears to promote an agenda.[<https://en.wikipedia.org/wiki/Wikipedia:Single-purpose_account>] There are also terms to describe coordinated editing across multiple accounts: sockpuppet, where one user attempts to edit under various names, and meatpuppet, where multiple users are recruited for disruptive activities.[<https://en.wikipedia.org/wiki/Wikipedia:Sockpuppetry>]
For the editors we interviewed, ultimately, it did not matter whether a disruptive editor was suspected to be paid by a state actor or not, as the disciplinary actions taken were the same. With terms like single purpose account and sockpuppet, Wikipedians appear to be focused on drawing out behavior that obstructs the building of the encyclopedia, rather than determining whether the problematic behavior is coming from opinionated individuals or paid state actors. Participant 07 noted, "So whether it be state organized or privately organized. It's the same thing. It's meatpuppetry." Participant 09 expressed a similar sentiment:
"[It] doesn't really matter if they're a state actor or not. Because the results are basically the same as if they're a lone actor."
§.§.§ State-backed or not, disruptive activity is time-consuming for Wikipedia editors to address.
Regardless of whether a state actor was involved or not, disruptive activity—whether from new IP users or experienced wiki-lawyering editors—created additional work for Wikipedia editors seeking to maintain a reliable encyclopedia during an ongoing war. On the talk pages, unconstructive comments and personal attacks needed to be archived, and offending users required disciplinary action—typically a warning at first and then later a ban or block of some kind. Participant 07 recalled responding to disruptive activity on the RIU article's talk page:
"We were just being bombarded with essentially unactionable requests, which was just wasting our time, because you have to read them. Because if you don't read them, you can't know they are actionable. And that takes time. It's basically time wasting."
For low-quality articles, citations needed to be double checked or replaced entirely. Participant 09 recounted working on these low-quality articles:
"It was making basically a complete nightmare for people who wanted to like edit it. Because to fix things, I would have to go through to every claim, check the source. And oftentimes it just wouldn't say what [the Russia-aligned editor] was saying."
When recalling their efforts at countering disruption, participants often expressed feelings of burnout and frustration. Participant 06 commented, "Sometimes you have the energy to combat them. Sometimes you don't." Participant 08 added, "I'm kind of willing to spend some time to do that, but I'm not willing to get kicked in the teeth for it." Participant 10 said, "I have only so much time, I can spend it elsewhere." On a more positive note, Participant 11 explained that extended confirmed page protections can assuage editor burnout: "The editors sort of who are here in good faith are more likely to stay and are less likely to just burn out knowing that there's just this general level of protection."
Several participants also noted that the experienced Russia-aligned editors would attempt to retaliate against editors who called out their policy-violating conduct. Participants recalled how they were reported to administrator noticeboards, being called an `agenda-pusher' by the very agenda pushers they were trying to stop. Participant 03 described the Russia-aligned EC editors: "They're very clever. They know what they're doing. They're familiar with Wikipedia policy. They know how to, if they get in trouble, play victim."
§.§ English-language Wikipedia had policies in place to protect against disruptive activity, and editors employed some of the strictest protections.
In answer to RQ2, throughout our interviews, participants' responses were replete with references to myriad Wikipedia policies, guidelines, and informational essays. Our participants—unpaid, volunteer editors—appeared to understand the policies front-to-back and how to apply them. Some of the most intensive editing restrictions were employed to maintain information integrity on the Russo-Ukrainian War articles. Our participants, however, noted that peripheral articles still suffered from disruptive activity, given editors' limited attention and time. Our participants were split in their perception of how Wikipedia responded to the war-related disruption overall.
§.§.§ Wikipedia editors relied on page protections, talk page moderation, and reliable sources to maintain Russo-Ukrainian War articles.
Among the policies available to Wikipedia editors to counter disruptive activity, some of the most intensive editing restrictions were employed. These restrictions involved both limiting who could edit articles related to the Russo-Ukrainian War and moderating talk page discussions.
Limiting who can edit articles: In response to disruptive editing, articles related to the Russo-Ukrainian War received extended confirmed (EC) page protection as well as a contentious topic designation. Under EC protections, only EC editors (over 500 edits and over 30 days on Wikipedia) can edit, while non-EC editors can request edits on the talk page. As a contentious topic, Wikipedia administrators—a community-elected position—are empowered to place editing restrictions on disruptive accounts, without having to open a case at an administrative noticeboard. Participant 11 explained what happens when a topic is designated as contentious:
"The rules are sort of heightened. It's not necessarily that there any new rules, it's just that enforcement happens faster, and maybe more liberally, right? So we're cautioning editors to sort of like keep it within the lines more. Because you might get blocked or topic banned faster than you would in any other topic area."
The editing restrictions in the topic area of the Russo-Ukrainian War increased over the course of the evolving conflict. The topic area of Eastern Europe has been designated as contentious since 2007, under the Wikipedia acronym WP:CT/EE.[<https://en.wikipedia.org/wiki/Wikipedia:Contentious_topics/Balkans_or_Eastern_Europe>] When the article "Russian invasion of Ukraine" was created on February 24, 2022, it was already eligible for protection as a contentious topic under Eastern Europe. Within 12 hours of the article's creation, a request for EC page protection was submitted and quickly approved. As the conflict progressed, other articles about the war were similarly protected: de facto contentious under the Eastern Europe topic area and EC protected on an ad-hoc basis.
In October 2022, restrictions were increased further after Wikipedia editors enacted community authorized sanctions. Under General Sanctions/Russo-Ukrainian War, all war-related articles are EC protected, non-EC editors cannot create new articles, and they cannot participate in internal project discussions on talk pages or noticeboards. Known by the acronym WP:GS/RUSUKR, these community-enacted general sanctions are among some of the most strict protections on Wikipedia.[<https://en.wikipedia.org/wiki/Wikipedia:General_sanctions/Russo-Ukrainian_War>] Having all articles within a topic area be EC protected is only seen in four other areas on English-language Wikipedia: Israel-Palestine, Armenia-Azerbaijan, Kurds and Kurdistan, and antisemitism in Poland.
The ability to limit who can edit articles has existed for several years, a result of Wikipedia editors dealing with contentious topics previously. Six levels of page restrictions exist: pending changes, semi, extended confirmed, template, full, and interface.[<https://en.wikipedia.org/wiki/Wikipedia:Protection_policy#Comparison_table>] Extended confirmed (EC) level was the most heavily employed in the Russo-Ukrainian topic area. Several participants noted that EC page protections evolved from, in part, dealing with the contentious topic of Israel-Palestine.[<https://en.wikipedia.org/wiki/Wikipedia:Contentious_topics/Arab-Israeli_conflict>] For the Russo-Ukrainian War articles, the EC protections were widely described as successful by participants. Participant 09 said:
“A lot of this stuff related to the war is so locked down with extended confirmed protection that I- I'm struggling to think of examples of [disruption]. I guess sometimes you'd maybe see stuff on the talk page of like IP users showing up and saying like, `This is biased. This is like so far off from truth. You people will never admit Ukraine lost' and that kind of thing.”
Moderating talk page discussions: With page protections shielding articles, non-EC editors would often attempt to continue their disruption on the talk pages. Participants described various measures that were taken to keep talk page discussions constructive. At the least extreme, editors marked talk page discussions as closed and placed a short explanation describing which Wikipedia policies or guidelines were violated. Going further, editors were able to collapse talk discussions so that their visibility was reduced, a practice known as `hatting' on Wikipedia.[<https://en.wikipedia.org/wiki/Wikipedia:Closing_discussions#HATTING>]
Under the community sanctions WP:GS/RUSUKR, non-EC editors are not allowed to engage in talk page discussions related to content disputes. One of Wikipedia's processes for dispute resolution is a Request for Comment (RfC), where editors can discuss content decisions.[<https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment>] On articles related to the Russo-Ukrainian war, non-EC editors attempted to disrupt these processes. Participant 05 described having to place a banner within the RfC discussion, notifying that comments from non-EC editors will be removed. Participant 05 recalled:
"The banner hasn't always been present, but it's been acted upon. And then I've been putting banners in. Shortly after, we started becoming aware of how we could use this to manage all the, all the inexperienced and opinionated editors that were basically drive by."
Of the talk page moderation measures, one of the most extreme would be to fully EC-protect the talk page, not allowing non-EC editors to engage at all on the talk page. Participants noted that such an extreme measure is only enacted for short periods of time. To our knowledge, full EC-protections for talk pages have not been enacted on articles related to the Russo-Ukrainian War. Participant 11 explained, "Most of the time a talk page is not EC-protected, and individual editors and admins will just revert or delete participation by people who are not permitted because of the EC restriction." One participant noted they had only seen full EC talk page protections in the Israel-Palestine topic area.
Perennial reliable sources: A central part of writing Wikipedia articles involves citing high-quality, reliable sources – encapsulated in the core content policy Verifiability.[<https://en.wikipedia.org/wiki/Wikipedia:Verifiability>] Wikipedia maintains a list of perennial reliable sources, which summarizes the community's consensus on sources whose reliability has been repeatedly debated.[<https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources>] Russian state media outlets such as RT, Sputnik, and TASS had been deemed ‘generally unreliable’ by the Wikipedia community prior to the February 2022 invasion. Select Russian state media, such as TASS, can be used only to attribute statements from the Russian government. Many participants referenced this virtual ban on Russian state media—as codified in the perennial sources list—as an effective measure that prevented Russian state-aligned narratives from appearing in war-related articles. Participant 07 commented on how Wikipedians ascertain the overall reliability of sources:
"All sources are biased. Biased because of the people who own them. Biased because of the people who write for them. We have to accept that. Therefore we go by ‘how often this does this source tell outrageous lies?’ … this is one reason why we don't use Russian sources, because we know full well they tell outrageous lies."
To illustrate this, Participant 07 noted how the Russian government had outlawed the use of terms "invasion" or "war” to describe the fighting in Ukraine, and as such Russian state media would be unfit for inclusion in a Wikipedia article <cit.>.
§.§.§ Disruptive activity from new, unregistered users was typically simpler to address than disruptive activity from experienced users.
Participants recounted how both Russia-aligned and Ukraine-aligned accounts demonstrated varying levels of experience with Wikipedia: some accounts were new, unregistered accounts, while other accounts were more experienced and even had attained extended confirmed (EC) status. While unregistered users were mostly impeded by Wikipedia's page protections, the more experienced, EC editors were able to change article content, and thus able to engage in protracted editing conflicts.
Often referred to as IP users, as their edits are attributed to an IP address rather than a username, unregistered users misused the talk page to criticize the article and bad-mouth other editors. Participant 07 noticed IP users making the same point across multiple articles: "Across multiple pages there would be patterns where like, for example, the stuff about it being a special military operation to de-nazify Ukraine. That occurred over multiple pages, not just on the invasion of Ukraine page." Given that many Russo-Ukrainian War articles had been protected so that only more experienced editors could edit, the unregistered users were largely unsuccessful in changing article content. Relegated to the talk pages, IP users would launch disruptive comments, but these were quickly archived or even deleted. Participant 02 commented: "Often it would be people just coming and ranting, `I hate this article. It's rubbish. You're just awfully biased.' And it's not going to make any difference at all."
By contrast, experienced editors who engaged in disruptive activity were more difficult to counter. These editors exhibited an understanding of Wikipedia's policies and had reached extended confirmed (EC) status by having an account older than 30 days and having made over 500 edits. The experienced, EC editors were able to edit the protected Russo-Ukrainian War articles, rather than spam the talk pages as the unregistered users did. Disruptive EC editors created biased and low-quality content, using wiki-lawyering in their talk page arguments to keep their work from being reverted. Several participants noted how EC editors would create misleading citations in an attempt to support their biased content. Participant 08 recalled protracted disputes over citations used to demonstrate whether the Azov Brigade was neo-Nazi affiliated or not. Participant 08 noted:
"Now pro-tip, usually, if there's six references [for a single sentence], that means there has been a dispute, and it may be wrong."
§.§.§ Wikipedia editors consider the main invasion article to be of fairly high quality, but note that more peripheral articles can suffer from disruptive editing.
Participants tended to be in agreement that the main invasion article—Russian invasion of Ukraine—was fairly neutral and free of outright bias given its high visibility. Participants explained how an article viewed by many people also attracts a large number of editors—many of whom consider editing high-volume articles to be particularly impactful work on Wikipedia. Participant 06 observed, "It's a function of eyes. People have it on their watch list. And so it's very difficult to get away with casual fly-by vandalism on what is the main page." Participant 02 commented:
"I would say the attempts to insert misinformation, I think, are very hard to make work on a very highly trafficked article. Unless it's from someone who really understands Wikipedia very well. And by its nature, Wikipedia is quite a hard thing to learn."
Periphery articles related to the conflict, however, were often described as more vulnerable to disruptive and biased editing, given that fewer editors have these articles on their radar. Participant 10 explained, "We are talking about maybe dozens of articles altogether, which are on many watch lists. And we are talking about hundreds of 1000s of articles which are in the topic area, which are not on any watch list." Participant 10 added that while he watches some of the more peripheral articles related to Ukraine, he cannot watch them all, and that small disruptive edits—such as edit warring over the Russian or Ukrainian spelling of geographical places—will likely occur without anyone noticing. Participant 06 echoed:
"By the time you've gone down the 10th level of a category tree ... you will find evidence of gaming, of partisan, of bias ... There are just literally 1000s of [articles] out there. It's very difficult to police them all."
Previous work has described this main versus periphery dichotomy. Hickman et al. <cit.> interviewed Wikipedia editors who maintain articles related to the Kashmir region in the English-, Hindi-, and Urdu-language editions. Greenstein & Zhu <cit.> compared the quality of Wikipedia articles to Encyclopedia Britannica articles. Both papers found that the more attention a Wikipedia article received—i.e. number of active editors and revision count—the more neutral the article was. Both papers likened this phenomenon to the software development axiom Linus's law: "given enough eyeballs, all bugs are shallow." Our findings concur with this previous work that Linus's law can apply not only to software bugs, but also to content quality.[<https://en.wikipedia.org/wiki/Linus%27s_law>]
§.§.§ Wikipedia editors are split in their perception of how well Wikipedia responded overall.
Participants differed in their assessments of how Wikipedia responded to the disruptive editing from Russia-aligned and Ukraine-aligned accounts. Six of the 13 editors we spoke with considered Wikipedia to overall have done a decent job in maintaining information integrity. These editors considered the extended-confirmed page protections to have stymied the bulk of disruptive editing. Participant 02 noted, "I wouldn't say it's cast iron protection. But I would say that out and out disinformation has a really hard time getting through that." Three editors were more lukewarm in their assessment, often noting that while page protections were helpful, the articles still faced other issues: disagreement over what sources were considered reliable and the time-sink of responding to disruptive behavior, i.e. enforcing bans and cleaning up low-quality articles.
Three editors had a mostly negative assessment of Wikipedia's response to disruptive editing in the Russo-Ukrainian topic area. These editors often described the main versus periphery article phenomenon, noting that while the main RIU article was in decent shape, many other periphery articles struggled with information integrity, such as articles about ongoing battles. These participants also considered the experienced EC editors who engaged in wiki-lawyering to be a considerable issue for the online encyclopedia. Participant 06 observed, "No, really, Wikipedia was not set up ... was not able to cope with the outfall from that war. It got too big, too quick, and it quickly emerged that their entire structure of Wikipedia guidelines, policies, was open to gamification." Participant 03 commented, "There's no requirement for Wikipedia editors to actually be neutral. There's just a neutral point of view policy, but that's for contents, not behavior ... I've become pretty jaded over Wikipedia lately." Finally, one participant demurred on their assessment: as an admin who had approved requests for page protections, they had not developed an opinion on how the topic area was faring overall.
§ DISCUSSION
§.§ Where did the trolls go?
Participants were hesitant to classify the disruptive editing they encountered as part of a state-backed information campaign from either Russia or Ukraine. Russia's efforts at waging information operations against Ukraine via social media, TV, and print media are well-documented <cit.>. Investigative journalism identified the Internet Research Agency, an internet troll farm located in Saint Petersburg, as the operators behind many Russian state information operations <cit.>. Following the 2022 invasion, scholarship has uncovered how Russian state media narratives have circulated on social media platforms, including Twitter, Facebook, Reddit, Telegram, and VKontakte <cit.>. So why did we not find clear evidence of a Russia-aligned information operation on Wikipedia? There are several possibilities.
One possibility is that the Russian government did not attempt any manipulation on English-language Wikipedia in the 2022-2023 time frame, having deprioritized the online encyclopedia in favor of other online targets. Scholarship has noted the extensive use of the encrypted messaging platform Telegram to promote Russia-aligned narratives around the war <cit.>. For example, the Telegram channel "War on Fakes" was created on February 23, 2022—one day before the Russian invasion of Ukraine—and dismissed negative coverage of the Russian military as being faked, co-opting techniques from legitimate fact-checking sites. Investigative reporting has linked this Telegram channel to a journalist affiliated with Russian state media <cit.>. Beyond Telegram, reporting from BBC has exposed a Russian information operation on TikTok consisting of at least 800 accounts, which sought to discredit Ukrainian officials <cit.>. As mentioned in our findings, Participant 10 considered there to be paid Russia-aligned editors on Wikipedia in 2014-2016 but not in 2022-2023. As such, it is possible that Wikipedia is not presently a priority information target for the Russian government, as resources are being spent on trending social media platforms, such as Telegram and TikTok.
A second possibility is that the Russian government attempted to launch an information campaign on Wikipedia in the early days of the invasion, but later discontinued their efforts. As mentioned earlier, the RIU article was extended confirmed (EC) protected within 12 hours of its creation. Other Russo-Ukrainian War articles were also quickly EC-protected. As such, non-EC editors were left the ability to request edits on article talk pages. When these accounts did request an edit, they often provided links to Russian state media, which the Wikipedia community does not consider to be a reliable source. Moreover, participants emphasized that since the main war-related articles received such high levels of attention, disruption rarely went unnoticed. Wikipedia's barriers to entry for new editors, established reliable source policies, and heightened vigilance for war-related articles could have stymied an information campaign.
A third possibility is that the Russian government did orchestrate some part of the Russia-aligned disruptive editing described in our interviews, but purposefully designed the campaign to look sporadic and uncoordinated to avoid detection. Participant 01 observed that the Russia-aligned editors had varying writing styles and levels of English proficiency. Participant 06 described how in the contentious topic area of Armenia-Azerbaijan, the activity of disruptive editors evolved over time, from more crude to more subtle tactics as editors became more familiar with Wikipedia. It is possible that the Russian state government's tactics have evolved since the 2014-2016 paid editing activity described by Participant 10. A sign of this evolution might be the presence of experienced Russia-aligned editors who had achieved extended confirmed (EC) status. Participant 03 recalled encountering around 8-10 such accounts, while other participants mentioned around 2-3. Though the Russia-aligned EC editors appear to be small in number, participants concurred that responding to their disruption took considerable effort.
Relatedly, our participants could have been too generous in their "assume good faith" mindset—as the Wikipedia behavioral guideline encourages. While many participants considered the Russia-aligned editors to likely be nationalistic individuals, the English-language RIU article has been banned in Russia since May 2022.[<https://en.wikipedia.org/wiki/List_of_Wikipedia_pages_banned_in_Russia>] As such, when a talk page was spammed by Russia-aligned IP users, a good faith mindset assumes that these accounts were either Russian citizens violating their government's Wikipedia ban or individuals abroad sympathetic to Russia, rather than state-backed accounts given clearance to access a banned website.
§.§ Connections to prior work on information integrity and Wikipedia
A central finding of our paper is: while the Russo-Ukrainian War articles faced disruption, Wikipedia had policies and processes in place to respond and mitigate it. This finding aligns with and extends existing literature on Wikipedia and information integrity. McDowell & Vetter <cit.> describe how Wikipedia's policies enable the encyclopedia to assuage problematic content:
“Wikipedia’s battle against fake news, misinformation, and disinformation is waged within and through community-mediated practices, and policies put into place in the encyclopedia to verify and validate information, to ensure accuracy, neutrality, and to guard against bias and misinformation.”
In our findings, page protections were central to our participants' response to disruption. Editors have to be in good standing to edit, especially on sensitive topics like the Russia-Ukraine war. Hill & Shaw <cit.> describe page protections as a type of "hidden wikiwork" — an unobtrusive feature which enables the encyclopedia to resolve disputes. Ajmani et al. <cit.> conceptualize page protections as a "frictions-mechanism" to safeguard information quality on the encyclopedia. McDowell & Vetter <cit.> similarly note the importance of user access levels to deter vandalism and low quality editing. Both Wikipedians and researchers caveat, however, that page protections should be used sparingly and for limited amounts of time <cit.>.[<https://en.wikipedia.org/wiki/Wikipedia:Protection_policy>] Over-restricting access to a page can strain the number of editors available to maintain those pages, potentially leading to reductions in information quality.
Reliable sources were also frequently mentioned by our participants. McDowell & Vetter <cit.> discuss the importance of reliable sources in maintaining Wikipedia's information integrity. They aptly note that while the policy of Verifiability is fairly short — "articles must be based on reliable, independent, published sources with a reputation for fact-checking and accuracy" <cit.> — this policy page is supplemented by dozens of supporting pages, which provide guidelines on how to adjudicate whether a source is reliable. One of these supporting pages is the Perennial Reliable Sources list — which our participants described as important to responding to disruption on the Russia-Ukraine War articles. Steinsson <cit.> discusses how this "sourcing hierarchy" was developed over time, as Wikipedia editors sought to delineate fringe views from mainstream ones. In addition, prior work has shown the presence of numerous and high-quality sources is important to how Wikipedia readers assess an article's credibility <cit.>.
Beyond page protections and reliable sources, our participants mentioned other Wikipedia features which have been studied previously. Participants noted the use of automated Wikipedia tools to respond to disruption on the Russo-Ukrainian War articles, such as reporting of potential vandals, which has been discussed in previous CSCW work <cit.>. Participants also described differing information integrity outcomes, with main articles receiving more attention and thus being of higher-quality than peripheral articles. This main versus periphery dichotomy has been described in at least two previous studies <cit.>. Finally, participants also frequently cited the importance of the neutral point of view policy (NPOV) in guiding content decisions on contested articles. Steinsson <cit.> argues that the NPOV policy has enabled Wikipedia to become an increasingly reliable information source since its creation in 2001.
§.§ Wikipedia is not social media. Perhaps social media could be more like Wikipedia?
Across the interviews, participants considered Wikipedia to be more successful than social media at preventing the spread of false or misleading information during evolving geopolitical events, including the Russo-Ukrainian War. Several participants attributed this success to Wikipedia's higher barriers to entry, noting how learning to edit Wikipedia required time and patience. In addition to high barriers to entry, Participant 12 emphasized the role of the Wikipedia community in resolving content and conduct disputes: "Our processes come from the community and they're operated by the community. At Twitter, Facebook, or wherever, you know, that's all done largely top down ... And they don't, they're not really under any obligation to explain their decisions all that well." As our participants acknowledged, Wikipedia is an attractive target for information operations, but unlike social media, Wikipedia appears to be adept at preventing false or misleading information from entering its articles.
Among social media platforms, content moderation strategies range from centralized to decentralized. Wikipedia is a combination of these approaches, where moderation is crowd-sourced but based on a universal set of rules. For social media platforms that employ a centralized approach, Trust and Safety teams review content for violating company-defined rules (e.g. X, formerly Twitter). This approach can run into issues of being overly opaque and devoid of community input. For social media platforms that employ a decentralized approach, content is moderated by the community, often with community-defined rules (e.g. Reddit, Discord). This approach can lead to differing moderation decisions across one platform, in addition to smaller sub-communities being overwhelmed by the demands of moderation. A recent interview study found that Reddit moderators wanted more platform-level guidance when addressing ambiguous cases of COVID-19 misinformation <cit.>. Wikipedia offers a middle-ground approach, where there is a universal set of rules that are enforced in a crowd-sourced manner by community members. While there is no one-size-fits-all approach to content moderation, social media platforms could look to Wikipedia for how to increase both the transparency and the community engagement of its moderation processes.
While our interview protocol did not explicitly include design-oriented questions, our qualitative analysis suggests several design implications from Wikipedia that might be of use to social media platforms. Wikipedia benefits from experienced `old-timer' editors who pass on social norms to incoming users, such as civility and assume good faith <cit.>. One idea would be for a social media platform to invite long-term users to help create content and conduct policies for the community, or to elevate them to these roles via some mechanism. Social media sites Reddit and Discord already employ community-based content moderation. In addition to community creation of policies, community enforcement might also be beneficial for social media sites, in the spirit of Linus's law—with enough eyes, most problematic content should be flagged. To thank social media users for their work in helping maintain online spaces, social media platforms could learn from Wikipedia's Barnstars: tokens of appreciation that editors can send to other editors for their contributions.[<https://en.wikipedia.org/wiki/Wikipedia:Barnstars>] In addition, social platforms could look to the success that Wikipedia has had in debating and curating lists of reliable and unreliable sources, a process which could be replicated in other social platforms. At minimum, platforms could import the work that Wikipedians have done, and label or algorithmically alter content from low-reliability sources.
§.§ Limitations of studying only English-language Wikipedia
The focus of our study on the English-language edition of Wikipedia, one of over 340 language editions,[<https://meta.wikimedia.org/wiki/List_of_Wikipedias>] presents several limitations. First, the English-language edition of Wikipedia is arguably one of the most well-resourced editions, which might make it singularly resilient to disruptive editing. Previous work has proposed that active editors, community governance, and diversity in demographics all contribute to making Wikipedia language editions resilient to knowledge integrity risks <cit.>. Kharazian et al. <cit.> investigated how differing community governance structures can lead to different outcomes: the Croatian language edition of Wikipedia was captured by far-right editors motivated by a nationalistic agenda, while the Serbian and Serbo-Croation language editions remained resilient to governance capture.
Second, while Wikipedia's policies and processes are similar across language editions, they are not identical <cit.>. Page protections — an aspect of Wikipedia central to maintaining English-language articles related to the Russo-Ukrainian War — exist in other language editions with mild variations. For example, the German-language edition has four types of page protections, as opposed to six types in the English-language edition, and users report misconduct of other users, rather than request a page protection outright <cit.>. Talk pages — another aspect of Wikipedia discussed extensively by our participants — are a forum where edits are first discussed before the article is updated, at least on English-language Wikipedia. Bipat et al. <cit.> found that the Spanish-language and French-language editions do not use the talk page to discuss potential article edits. Given these varying policies and processes, there is potential for heterogeneity in the responses to Russia-Ukraine War articles across the different language editions.
Third, the Russian-language edition of Wikipedia has faced attempts from the Russian government to censor its articles — warranting further study. On March 1, 2022, the Russian agency for the Supervision of Communications, Information Technology and Mass Media (Roskomnadzor) asked the Wikimedia Foundation to remove the Russian-language article "Russian invasion of Ukraine." <cit.> In the succeeding months of the war, Roskomnadzor continued to demand Wikipedia articles be removed with threats of fines and began placing articles on a list of forbidden sites.[<https://en.wikipedia.org/wiki/List_of_Wikipedia_pages_banned_in_Russia>] In June 2023, Ruwiki was launched, a government-sanctioned fork of Russian Wikipedia, which copied over 1.9 million existing articles while removing any mention of the invasion of Ukraine <cit.>. Such attempts at censorship could explain why our participants did not perceive there to be any state-backed information campaign: perhaps the Russian government was more focused on censoring Wikipedia than influencing it.
Despite these limitations, it is important to note the utility of studying English-language Wikipedia for non-English language contexts. Our participants noted how editors from all over the world, many of whom do not speak English as their first language, focus their time and energy on the English-language edition given its high visibility and reach. For several of our participants, this was indeed the case: English was not their first language, yet they contributed extensively to English-language articles on the Russo-Ukrainian War. As such, we emphasize that our study's findings are not restricted to only native English-language editors or readers on Wikipedia.
§.§ Future work
We believe this work could be informative for future, large-scale observational studies of Wikipedia. Across interviews, participants frequently mentioned other topic areas, such as Israel-Palestine and Armenia-Azerbaijan, and suggested potential for a comparative study of the handling of contentious topics on Wikipedia. A comparative study of contentious topics might further probe the extent of page protections' effectiveness: are there perhaps instances when an information operation has been successfully waged even with these protections in place? Future work could also investigate how the pressures of an authoritarian regime impact the governance of a Wikipedia edition. Relatedly, a comparison of the Russian-language edition of Wikipedia and its government-sanctioned fork Ruwiki could be revealing: what content has been wiped from Ruwiki; do Ruwiki editors collaborate like editors in the original Russian-language edition? Another approach would be to explore the coverage of the Russo-Ukrainian war in other language Wikipedias. As our participants noted, if Wikipedia is indeed a reflection of what the mainstream sources say, then perhaps Hindi Wikipedia[<https://hi.wikipedia.org/wiki/>] or Italian Wikipedia,[<https://it.wikipedia.org/wiki/>] editions associated with countries that have closer diplomatic relationships with Russia, might present the Russo-Ukrainian War differently than English Wikipedia.
§ CONCLUSION
Across 13 interviews with expert Wikipedia editors, we surfaced challenges faced by articles related to the Russo-Ukrainian War on the English-language edition. Our participants did not perceive there to be clear evidence of a state-backed information campaign, a finding that stands in contrast to scholarship showing evidence of Russia-aligned information operations targeting social media platforms. Whether or not a state actor was present, participants reported high-levels of disruptive editing from both Russia-aligned and Ukraine-aligned accounts, which created time-intensive maintenance work for editors. The English-language edition of Wikipedia overall appeared prepared to address the disruption in the wake of the Russia-Ukraine War, relying upon existing policies and processes honed in other contentious topic areas. We are optimistic this paper will support future work to further elucidate Wikipedia’s resilience in the face of information manipulation online and explore potential lessons for other internet platforms.
§ INTERVIEW PROTOCOL
§.§ Involvement with Wikipedia
* How did you start editing for Wikipedia?
* How did you become involved in the editing articles related to the Russian invasion of Ukraine?
* Is this your main topic area, or are you more involved in other topic areas on Wikipedia?
§.§ Experiences editing in the Russo-Ukrainian War topic area
* Walk me through what it has been like working on the articles related to the war.
* Can you think of a case where there was problematic or disruptive behavior on an article?
* What was your process or strategy to determine whether the behavior was disruptive?
* Can you think of a case where there was problematic or disruptive behavior on a talk page?
* What was your process or strategy to determine whether the behavior was disruptive?
* How do you decide whether you'll respond to the disruption?
* If so, what does a response look like?
* What types of tools do you have at your disposal in order to respond?
* How much does the intent of the account matter in terms of the what action is taken?
* What do you make of the disruptive activity you have seen? Do you consider it to be small in scope, or have you come across things that look more like larger, coordinated attacks?
* How do you make an assessment as to whether the disruption was coordinated?
* If coordinated, how would you describe the goal(s)/aim(s) of the coordination?
§.§ Wikipedia's response to disruption in the Russo-Ukrainian War topic area
* How do you think Wikipedia’s processes, policies, and norms have held up for maintaining articles about an on-going war?
* Can you think of a case that worked well and a case that did not work well?
* For the cases that didn't work, in what ways did they fail?
* To my understanding, only extended confirmed editors can edit articles in this topic area, due to WP:GS/RUSUKR. Do you think the general sanctions were effective or not effective for responding to disruptive activity?
* In the archived talk pages, I noticed that many discussions centered around Wikipedia policies. Can you think of a time when a policy was very relevant to resolving a dispute?
* Which policies were most effective for responding to disruptive activity?
* Were any policies consistently debated? Were any policies consistently violated?
* What do you consider to be a reliable source? How do you make this judgement?
* How does your experience with the Russian invasion of Ukraine compare with other topics areas on Wikipedia?
|
http://arxiv.org/abs/2409.03037v1 | 20240904191642 | Rectifiability of the singular set and uniqueness of tangent cones for semicalibrated currents | [
"Paul Minter",
"Davide Parise",
"Anna Skorobogatova",
"Luca Spolaor"
] | math.AP | [
"math.AP"
] |
§ ABSTRACT
We prove that the singular set of an m-dimensional integral current T in ℝ^{n+m}, semicalibrated by a C^{2,κ_0} m-form ω, is countably (m-2)-rectifiable. Furthermore, we show that there is a unique tangent cone at ℋ^{m-2}-a.e. point in the interior singular set of T. Our proof adapts techniques that were recently developed in <cit.> for area-minimizing currents to this setting.
§ INTRODUCTION
In this article, we study the structure of interior singularities of semicalibrated integral currents in ℝ^{m+n}. Let us recall the basic definitions.
Let m, n ≥ 2 be positive integers. A semicalibration in ℝ^{m+n} is a C^1-regular m-form ω such that ‖ω‖_c ≤ 1, where ‖·‖_c denotes the comass norm on Λ^m(ℝ^{m+n}). An m-dimensional integral current T in ℝ^{m+n} (denoted T ∈ 𝐈_m(ℝ^{m+n})) is semicalibrated by ω if ω(T⃗) = 1 at ‖T‖-a.e. point, where T⃗ = dT/d‖T‖ denotes the polar of the canonical vector measure associated to T (also denoted by T, abusing notation) and ‖T‖ denotes the canonical mass measure associated to T.
Note that we may assume that the ambient space is Euclidean, equipped with the Euclidean metric, in place of a sufficiently regular Riemannian manifold (as is often assumed when studying the regularity properties of area-minimizing currents). Indeed, this is because the presence of an ambient submanifold of ^m+n in which T is supported may be instead incorporated into the semicalibration; see <cit.>.
We say that p ∈ spt(T) ∖ spt(∂T) is an interior regular point if there exists a neighborhood of p in which T is, up to multiplicity, a smooth embedded submanifold of ℝ^{m+n}. We denote the interior regular set by Reg(T), and we refer to its complement in spt(T) ∖ spt(∂T) (which is a relatively closed set) as the interior singular set, denoted by Sing(T).
The regularity of area-minimizing currents and more specifically calibrated currents (where the semicalibrating m-form ω is closed) has been studied extensively <cit.>. Semicalibrated currents form a natural class of almost area-minimizing currents for which the underlying differential constraint has more flexibility with respect to deformations than that for calibrated currents. Typical examples of these objects are given by almost complex cycles in almost complex manifolds. Semicalibrated currents exhibit much stronger regularity properties than general almost-minimizing currents (see <cit.>), and have thus far been shown to share the same interior regularity as area-minimizing integral currents. Indeed, in the series of works <cit.> by De Lellis, Spadaro and the fourth author, it was shown that interior singularities of two dimensional semicalibrated currents are isolated, much like those of two dimensional area-minimizing integral currents. It was further shown by the fourth author in <cit.> that the interior singular set of an m-dimensional semicalibrated current has Hausdorff dimension at most m-2, which is consistent with Almgren's celebrated dimension estimate on the interior singular set of area-minimizing integral currents. In the case of special Legendrian cycles, i.e. when the ambient space is S^5 ⊂ℂ^3 and the semicalibration ω has a specific form inherited from the complex structure, it was already shown by Bellettini and Rivière that the singular set consisted only of isolated singularities <cit.>.
The aim of this article is to further improve on these and establish a structural result for the interior singular set of semicalibrated currents, analogous to that obtained in the works <cit.>. More precisely, our main result is the following.
Let T be an m-dimensional integral current in ℝ^{m+n}, semicalibrated by a C^{2,κ_0} m-form ω for some κ_0 ∈ (0,1). Then Sing(T) is countably (m-2)-rectifiable and there is a unique tangent cone to T at ℋ^{m-2}-a.e. point in Sing(T).
This result is interesting both from a geometric and an analytic point of view. On the geometric side, calibrated submanifolds have been central objects of study in several areas of differential geometry and mathematical physics since the seminal work of Harvey and Lawson <cit.>. Two primary examples are holomorphic subvarieties and special Lagrangians in Calabi-Yau manifolds, which play an important role in string theory (especially regarding mirror symmetry, cf. <cit.>), but they also emerge naturally in gauge theory (see <cit.>). Semicalibrations are a natural generalization of calibrations, removing the condition dω = 0 on the calibrating form which is rather rigid and in particular very unstable under deformations. In fact semicalibrations were considered already in <cit.> (cf. Section 6 therein) and around the same time they became rather popular in string theory when several authors directed their attention to non-Calabi-Yau manifolds (the subject is nowadays known as “flux compactification”, cf. <cit.>): in that context the natural notion to consider is indeed a special class of semicalibrating forms (see for instance the works <cit.>, where these are called quasi calibrations). The fine structure of the singular set in the 2-dimensional case has found applications to the Castelnuovo's bound and the Gopakumar–Vafa finiteness conjecture in the recent works <cit.>.
From an analytic point of view, it exhibits a striking difference with the setting of area-minimizing currents regarding notions of frequency function. In the work <cit.>, Krummel and Wickramasekera introduced an intrinsic version of Almgren's frequency function for an area-minimizing current, known as planar frequency. Under suitable decay hypotheses, they were able to show that the planar frequency in the area-minimizing setting is almost monotone, which then played a pivotal role in their analysis of interior singularities. However, in the semicalibrated setting one does not expect almost monotonicity of the planar frequency function under the same hypotheses as in <cit.>, and indeed in Part <ref> we provide a simple counterexample demonstrating this. Intuitively, the reason for this is that the semicalibration condition is more flexible. In particular, the graph of any C^1,α single-valued function is a semicalibrated current (although with a semicalibrating form less regular than the one in Definition <ref>) and, at such a level of generality, these currents are not expected to exhibit unique continuation properties, and consequently an almost monotone planar frequency function. We have been unable to adapt the approach of Krummel and Wickramasekera to the present setting, which is ultimately why we follow the ideas in <cit.>. Whether or not one can prove Theorem <ref> utilizing the ideas in <cit.>, and in particular finding a suitable semicalibrated `planar frequency', is an interesting question.
Finally we remark that Theorem <ref> is optimal in light of recent work of Liu <cit.>.
§.§ Structure of the article and comparison to <cit.>
In Part <ref>, we recall the singularity degree as introduced in <cit.>, and verify that its properties remain valid in the semicalibrated setting. Part <ref> is then dedicated to treating flat singular points of singularity degree strictly larger than 1, for which we may exploit the rectifiable Reifenberg methods of Naber & Valtorta, similarly to <cit.>. In Part <ref>, we then treat points of singularity degree 1 and the lower strata (the latter just for the uniqueness of tangent cones), following <cit.>. Finally, in Part <ref> we present the example that demonstrates the failure of almost-monotonicity for the intrinsic planar frequency as introduced in <cit.>, and draw some comparisons with the area-minimizing setting of <cit.>.
Although throughout this article we mostly follow the methods of the works <cit.> of the first and third authors with Camillo De Lellis, there are a number of important differences:
* Due to the presence of the semicalibration, the corresponding error term in the first variation of T must be taken into account for all variational estimates. In particular, this creates an additional term in Almgren's frequency function in this setting (see Section <ref>), which must be taken care of when establishing the BV estimate Theorem <ref>. The existence of such variational errors was already taken into consideration in the works <cit.>.
* When taking coarse blow-ups (see Section <ref>), we observe that we may assume that the term ‖dω‖²_{C^0} r^{2-2δ_3} is infinitesimal relative to the tilt excess 𝐄(T,𝐁_r) for δ_3 ∈ (0,δ_2); note that this subquadratic scaling is stronger than having the same assumption with the natural quadratic scaling of ‖dω‖²_{C^0}. The latter would be the analogue of the corresponding assumption in <cit.>, but we require this stronger assumption for Part <ref> (see Case 2 in the proof of Lemma <ref>), and we verify that it indeed holds.
* We modify the original construction in <cit.> of the intervals of flattening adapted to a given geometric sequence of radii in Part <ref>, instead providing one that avoids requiring the separate treatment of the case of a single center manifold and infinitely many. This modified procedure will further be useful in the forthcoming work <cit.>.
* When T is merely semicalibrated, extra care needs to be taken when applying the harmonic approximation. Indeed, note that in order to apply <cit.>, we require the stronger hypothesis dω^2_C^0≤_23(T,_1) in place of ^2 ≤(T,_1)^1/2 + 2δ in the area-minimizing case (this difference was already present in <cit.>). In particular, this affects the two regimes in the case analysis within the proof of Lemma <ref>. In order to maintain the validity of Case 1 therein, we must require that dω^2_C^0≤_23(T,_1), in place of ^3 ≤(T,_1). This in turn affects the treatment of Case 2 therein; see the second bullet point above.
* In order to obtain quadratic errors in dω_C^0 in all estimates exploiting the first variation of T in Part <ref>, we must employ an analogous absorption trick to that pointed out in <cit.> for area-minimizing currents. This makes arguments in Section <ref> more delicate.
§.§ Acknowledgments
The authors would like to thank Camillo De Lellis for useful discussions. This research was conducted during the period P.M. was a Clay Research Fellow. D.P. acknowledges the support of the AMS-Simons Travel Grant, furthermore part of this research was performed while D.P. was visiting the Mathematical Sciences Research Institute (MSRI), now becoming the Simons Laufer Mathematical Sciences Institute (SLMath), which is supported by the National Science Foundation (Grant No. DMS-1928930). L.S. acknowledges the support of the NSF Career Grant DMS 2044954.
§ PRELIMINARIES AND NOTATION
Let us first introduce some basic notation. C, C_0, C_1, … will denote constants which depend only on m, n, Q, unless otherwise specified. For x ∈ spt(T), the currents T_{x,r} will denote the rescalings (ι_{x,r})_♯ T, where ι_{x,r}(y) := (y-x)/r and ♯ denotes the pushforward. We will typically denote (oriented) m-dimensional subspaces of ℝ^{m+n} (often simply referred to as planes) by π, ϖ. For x ∈ spt(T), 𝐁_r(x) denotes the open (m+n)-dimensional Euclidean ball of radius r centered at x in ℝ^{m+n}, while for an m-dimensional plane π ⊂ ℝ^{m+n} passing through x, B_r(x,π) denotes the open m-dimensional disk 𝐁_r(x) ∩ π. 𝐂_r(x,π) denotes the (m+n)-dimensional cylinder B_r(x,π) × π^⊥ of radius r centered at x. We let 𝐩_π: ℝ^{m+n} → π denote the orthogonal projection onto π, while 𝐩_{π}^⊥ denotes the orthogonal projection onto π^⊥. The plane π is omitted if clear from the context; if the center x is omitted, then it is assumed to be the origin. ω_m denotes the m-dimensional Hausdorff measure of the m-dimensional unit disk B_1(π). The Hausdorff distance between two subsets A and B of ℝ^{m+n} will be denoted by dist_ℋ(A,B). Θ(T,x) denotes the m-dimensional Hausdorff density of T at x ∈ spt(T). For Q ∈ ℕ, 𝒜_Q(ℝ^n) denotes the metric space of Q-tuples of vectors in ℝ^n, equipped with the L^2-Wasserstein distance (see e.g. <cit.>). Given a map f = ∑_{i=1}^Q ⟦f_i⟧ taking values in 𝒜_Q(ℝ^n), we use the notation η∘f to denote the ℝ^n-valued function (1/Q)∑_{i=1}^Q f_i.
As for area-minimizing currents, we primarily focus our attention on the flat singular points of T, namely, those at which there exists a flat tangent cone Qπ_0 for some m-dimensional (oriented) plane π_0. By localizing around a singular point and rescaling, we may without loss of generality work under the following underlying assumption throughout.
m ≥ 3, n ≥ 2 are integers. T is an m-dimensional integral current in 𝐁_{7√m} with (∂T) ⌞ 𝐁_{7√m} = 0. There exists a C^{2,κ_0} semicalibration ω on ℝ^{m+n} such that T is semicalibrated by ω in 𝐁_{7√m}, with
‖dω‖_{C^{1,κ_0}(𝐁_{7√m})} ≤ ε,
where ε ≤ 1 is a small positive constant which will be specified later.
Recall that if T satisfies Assumption <ref>, then in particular T is Ω-minimal as in <cit.> for some Ω > 0, namely
𝐌(T) ≤𝐌(T + ∂ S) + Ω𝐌(S),
for every S ∈ 𝐈_{m+1}(ℝ^{m+n}) with compact support, and in particular one can take Ω = ‖dω‖_{C^0}.
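For the reader's convenience, we sketch the standard computation behind this almost-minimality property. It uses only Stokes' theorem for currents together with the two defining properties of the semicalibration (comass at most 1, and ω(T⃗)=1 at ‖T‖-a.e. point); we record it as an informal check rather than as a statement taken from the references:
\[
\mathbf{M}(T)=T(\omega)=(T+\partial S)(\omega)-S(d\omega)
\le \|\omega\|_c\,\mathbf{M}(T+\partial S)+\|d\omega\|_{C^0}\,\mathbf{M}(S)
\le \mathbf{M}(T+\partial S)+\|d\omega\|_{C^0}\,\mathbf{M}(S),
\]
where the first equality uses ω(T⃗)=1 ‖T‖-a.e., the second uses ∂S(ω)=S(dω), and the final bounds use ‖ω‖_c ≤ 1 and the definition of mass (up to the choice of norm on forms). This is precisely (<ref>) with Ω = ‖dω‖_{C^0}.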
δ T(X) = T(dω X),
where X ∈ C_c^∞(ℝ^m+n∖(∂ T); ^m+n), and where δ T denotes the first variation of T:
δ T(X) = ∫_T⃗(q) X(q) d ‖ T ‖(q) ,
where T⃗(q) is the oriented (approximate) tangent plane to T at q.
Recall that the tilt excess (T,_r(x,π),ϖ) relative to an m-dimensional plane ϖ in a cylinder _r(x,π) is defined by
(T,_r(x,π),ϖ) := 1/2ω_m r^m∫__r(x,π) |T⃗ - ϖ⃗|^2 dT.
The (optimal) tilt excess in _r(x,π) is in turn defined by
(T,_r(x,π)) := inf_m-planes ϖ(T,_r(x,π),ϖ).
The quantities (T,_r(x),ϖ) and (T,_r(x)) are defined analogously.
§.§ Intervals of flattening and compactness procedure
As in <cit.>, we introduce a countable collection of disjoint intervals of radii (s_j,t_j] ⊂ (0,1], for j∈∪{0} and t_0 =1, referred to as intervals of flattening, such that for _3 > 0 fixed as in <cit.> we have
(T, _6√(m)r) ≤_3^2, (T_0,t_j, _r) ≤ C m_0,j r^2-2δ_2 ∀ r∈(s_jt_j,3],
where
m_0,j:=max{(T,_6√(m)t_j), ^2 t_j^2-2δ_2},
and δ_2 is fixed as in <cit.>. Observe that this definition of _0,j comes from the observation that if T is dω_C^0-minimal in _7√(m), then T_0,t_j is t_jdω_C^0-minimal in _7√(m), together with (<ref>), the estimates in <cit.> and the observation that
ι_0,t_j(dω)_C^0(_6√(m)) = dω∘ι_0,t_j_C^0(_6√(m)) = dω_C^0(_6√(m)t_j)≤ C dω_C^0(_6√(m)) t_j^2.
We will therefore henceforth work under the following assumption, allowing us to indeed iteratively produce the above sequence of intervals.
T and ω are as in Assumption <ref>. The origin is a flat singular point of T and Θ(T,0) = Q∈_≥ 2. The parameter is chosen small enough to ensure that m_0,0≤_3^2.
Following the procedure in <cit.> (see also <cit.>) with this amended choice of m_0,j, we use the center manifold construction in <cit.> to produce a sequence _j of center manifolds for the rescalings T_0,t_j with corresponding normal approximations N_j:_j →_Q(T_j^⊥), whose multigraphs agree with T_0,t_j in _3∖_s_j/t_j over an appropriately large proportion of _j∩(_3∖_s_j/t_j). By a rotation of coordinates, we may assume that the m-dimensional planes π_j over which we parameterize _j are identically equal to the same fixed plane π_0 ≡^m×{0}⊂^m+n. We refer the reader to <cit.> or <cit.> for the basic properties of the intervals of flattening. Given a center manifold and a point x∈, we will let _r(x) denote the geodesic ball _r(x)∩ of radius r in . It will always be clear from context which particular center manifold we are using for such a ball.
Given a flat singular point of T with density Q∈ (denoted by x∈_Q(T)), we will use the terminology blow-up sequence of radii around x to refer to a sequence of scales r_k↓ 0 such that T_x,r_k_6√(m) Qπ for some m-dimensional plane π. If x=0, we will simply call this a blow-up sequence of radii, with no reference to the center. Observe that for any blow-up sequence of radii r_k, for each k sufficiently large there exists a unique choice of index j(k) such that r_k ∈ (s_j(k),t_j(k)].
Under the validity of Assumption <ref>, given a blow-up sequence of radii r_k, we will henceforth adopt the notation
* T_k for the rescaled currents T_0,t_j(k)_6√(m);
* _k and N_k rescpectively for the center manifolds _j(k) and the normal approximations N_j(k);
* _k for the map parameterizing the center manifold _k over B_3(π_0) (see <cit.>);
* for the orthogonal projection map to (see <cit.>).
In addition, let s=s̅_kt_j(k)∈(3 r_k/t_j(k), 3r_k/t_j(k)] be the scale at which the reverse Sobolev inequality <cit.> (see also <cit.>) holds for r=r_k/t_j(k). Then let r̅_k = 2s̅_k3t_j(k)∈(r_kt_j(k), 2r_k/t_j(k)], and in turn define the corresponding additionally rescaled objects
T̅_k = (T_k)_0,r̅_k = (ι_0,r̅_k t_j(k))_♯ T _6√(m)r̅_k^-1, _k = ι_0,r̅_k(_k),
together with the maps _k(x) := (x,_k(r̅_k x)) parameterizing the graphs of the rescaled center manifolds and the rescaled normal approximations N̅_k:_k→^m+n defined by
N̅_k(x) := N_k(r̅_k x)/r̅_k.
Consequently we let u_k: B_3 ≡ B_3(π_0)→_Q(^m+n) be defined by
u_k := N̅_k∘𝐞_k/N̅_k _L^2(_3/2),
where 𝐞_k denotes the exponential map from B_3⊂π_0≅ T_r̅_k^-1Φ_k(0)_k to _k. In light of the reverse Sobolev inequality <cit.> implies that the sequence u_k is uniformly bounded in W^1,2(B_3/2). Then, by <cit.> (see also <cit.>), up to extracting a subsequence, there exists a Dir-minimizer u∈ W^1,2(B_3/2(π_0);_Q(π_0^⊥)) such that
* η∘ u =0;
* u_L^2(B_3/2)=1;
* u_k → u strongly in L^2∩ W^1,2_ (B_3/2).
Recall that for Dir-minimizers u: Ω⊂^m →_Q(^m) on an open domain Ω, we may consider a regularized variant of Almgren's frequency function, defined by
I_u(x,r) := rD_u(x,r)/H_u(x,r), r∈ (0,(x,∂Ω))
where
H_u(x,r) = -∫|u(y)|^2/|y-x|ϕ'(|y-x|/r) dy, D_u(x,r) = ∫ |Du(y)|^2 ϕ(|y-x|/r) dy,
and ϕ:[0,∞)→[0,1] is a monotone Lipschitz function that vanishes for all t sufficiently large and is identically equal to 1 for all t sufficiently small. Similarly to the classical frequency, which formally corresponds to taking ϕ=1_[0,1], r↦ I_u(x,r) is monotone non-decreasing for each x∈Ω, and takes a constant value α if and only if u is radially α-homogeneous about x (see e.g. <cit.>). In particular, the limit
I_u(x,0) := lim_r↓ 0 I_u(x,r)
exists and in fact is independent of the choice of ϕ. We will henceforth fix the following convenient choice of ϕ:
ϕ (t) =
{[ 1 ,; 2-2t ,; 0 . ].
When x=0, we omit the dependency on x for I, H and D.
Now let us define the natural regularized frequency associated to the graphical approximations for a semicalibrated current T satisfying Assumption <ref>. Given a center manifold ≡_j and a corresponding normal approximation N:→_Q(^m+n), we define the regularized frequency _N of N at a given center x∈ and scale r>0 by
_N(x,r) := rΓ_N(x,r)/_N(x,r),
where Γ_N(x,r) = _N(x,r) + _N(x,r) with
_N(x,r) := -∫_|N(y)|^2/d(y,x) |∇_y d(y,x)|^2ϕ'(d(y,x)/r) d^m(y),
_N(x,r) = ∫ |DN(y)|^2 ϕ(d(y,x)/r) d^m(y),
_N(x,r) = ∑_i=1^Q ∑_l=1^m (-1)^l+1∫⟨ D_ξ_l N_i(y) ∧ξ̂_l(y) ∧ N_i(y), dω∘ι_0,t_j(y)⟩ϕ(d(y,x)/r) d^m(y).
Here, ξ̂_l = ξ_1∧⋯∧ξ_{l-1}∧ξ_{l+1}∧⋯∧ξ_m for an orthonormal frame {ξ_j}_{j=1}^m of Tℳ and d is the geodesic distance on the center manifold. We will often write ∇ d(x,y) to denote the derivative ∇_y d(x,y). If x=0, we will omit the dependency on the center. Recall that the presence of the additional term 𝐋_N in the frequency (in contrast to that for area-minimizing integral currents) is due to the term T(dω ⌞ X) in the first variation δT(X) for T; see <cit.> for more details.
Note in the above the presence of the scaling ι_{0,t_j}. This is due to the error in the first variation being for T_{0,t_j}, and not for T. This is consistent with the quadratic scaling we expect for the ‖dω‖_{C^0} term in our definition of m_{0,j}.
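The following crude bound, included here as a sketch of our own (the constant C = C(m,Q) is a dimensional constant and the estimate is not taken verbatim from the references), indicates why 𝐋_N should be regarded as a lower-order perturbation of 𝐃_N. Each simple (m+1)-vector D_{ξ_l}N_i ∧ ξ̂_l ∧ N_i has mass at most |DN_i||N_i|, there are mQ terms in the sum, and the comass of dω∘ι_{0,t_j} is controlled by ‖dω‖_{C^0}; hence, by Cauchy–Schwarz,
\[
|\mathbf{L}_N(x,r)|\;\le\; C(m,Q)\,\|d\omega\|_{C^0}\int_{\mathcal{M}} |N|\,|DN|\,\phi\!\Big(\frac{d(y,x)}{r}\Big)\,d\mathcal{H}^m(y)
\;\le\; C(m,Q)\,\|d\omega\|_{C^0}\,\Big(\int_{\mathcal{M}} |N|^2\phi\,d\mathcal{H}^m\Big)^{\!1/2}\mathbf{D}_N(x,r)^{1/2}.
\]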
PART:
Singularity degree of flat singular points
§ MAIN RESULTS
Following <cit.>, we define a fine blow-up u to be any Dir-minimizer obtained through the compactness procedure in Section <ref> along a blow-up sequence of radii r_k, and we let
ℱ(T,0) := {I_u(0) : u is a fine blow-up along some sequence r_k ↓ 0}
denote the set of frequency values of T at 0. We further recall the notion of singularity degree introduced in <cit.>:
The singularity degree of T at 0 is defined as
I(T,0) := inf ℱ(T,0).
Of course, one may analogously define the set of frequency values and the singularity degree at any other point x∈(T) by instead considering fine blow-ups taken around the center point x in place of 0, and thus all of the results in this part clearly hold for any x∈_Q(T) in place of 0.
Let us now state the main result of this part, which concerns the main properties of the singularity degree.
Suppose that T satisfies Assumption <ref>. Then
(i) I(T,0) ≥ 1 and ℱ(T,0) = {I(T,0)};
(ii) All fine blow-ups are radially homogeneous with degree I(T,0);
(iii) if s_{j_0}=0 for some j_0 ∈ ℕ then lim_{r↓ 0} 𝐈_{N_{j_0}}(r) = I(T,0);
(iv) if, conversely, there are infinitely many intervals of flattening, the functions 𝐈_{N_j} converge uniformly to the constant function I(T,0) when I(T,0)>1, while when I(T,0)=1, lim_{k→∞} 𝐈_{N_{j(k)}}(r_k/t_{j(k)}) = I(T,0) = 1 for every blow-up sequence of radii r_k;
(v) if I(T,0)>1 then the rescalings T_{0,r} converge polynomially fast to a unique flat tangent cone Q⟦π⟧ as r ↓ 0;
(vi) if additionally I(T,0) > 2-δ_2 then s_{j_0} = 0 for some j_0 ∈ ℕ;
(vii) if I(T,0) < 2-δ_2 then there are infinitely many intervals of flattening and inf_j s_j/t_j > 0.
In Part 2, we will then follow the arguments in <cit.> (based on the seminal work <cit.>) to prove the following.
Let T and ω be as in Theorem <ref>. Then the set {x ∈ Sing(T) : I(T,x) > 1} is countably (m-2)-rectifiable.
Recall that by the work <cit.> of Naber & Valtorta, for each k=0,…, m, the k-th stratum 𝒮^{(k)}(T), defined to be the set of all points x ∈ spt(T) ∖ spt(∂T) such that for any tangent cone S at x we have
dim({y : (τ_y)_♯ S = S}) ≤ k,
is countably k-rectifiable. Here, τ_y(p) := p+y denotes the map that translates by y.
In Part 3 we then complete the proof of Theorem <ref> by showing the following (cf. <cit.>).
Let T and ω be as in Theorem <ref>. Then the set {x ∈ Sing(T) : I(T,x) = 1} is ℋ^{m-2}-null. Moreover, the tangent cone is unique at ℋ^{m-2}-a.e. point in 𝒮^{(m-2)}(T).
Combining these results then gives Theorem <ref>.
The starting point for studying the properties of Almgren's frequency function with respect to varying normal approximations is the following result, which provides uniform upper and lower frequency bounds over all the intervals of flattening around a given flat singular point.
Suppose that T satisfies Assumption <ref>. Then there exist constants c_0 = c_0(m,n,Q,T) > 0, C=C(m,n,Q,T)>0 and α=α(m,n,Q)>0, such that
𝐈_{N_j}(r) ≥ c_0 ∀ r ∈ (s_j/t_j, 3],
𝐈_{N_j}(a) ≤ e^{C b^α} 𝐈_{N_j}(b) for all a ≤ b ∈ (s_j/t_j, 3].
Moreover,
0 < inf_j inf_{r ∈ (s_j/t_j, 3]} 𝐈_{N_j}(r) ≤ sup_j sup_{r ∈ (s_j/t_j, 3]} 𝐈_{N_j}(r) < +∞.
The uniform upper bound of Theorem <ref> was established in <cit.> (see also <cit.>). For the uniform lower bound, we refer the reader to <cit.>. Although this is only proven therein in the case where T is area-minimizing, one may easily observe that the proof in fact works in exactly the same way when T is merely semicalibrated, since all of the preliminary results required (e.g. <cit.> and the estimates <cit.>) hold here also. In particular, observe that including the term 𝐋_N in the frequency ensures that the variational error terms for the frequency are of exactly the same form as those in the area-minimizing case.
Given Theorem <ref>, observe that the arguments in <cit.>, rewritten for semicalibrated currents (namely, replacing all results from <cit.> with their counterparts from <cit.>), yield a local upper Minkowski dimension estimate of m-2 as obtained in <cit.> for area-minimizing integral currents.
In order to derive the conclusions of Theorem <ref>, we wish to sharpen the uniform bounds of Theorem <ref> to a quantitative control on the radial variations of the frequency function, across uninterrupted strings of intervals of flattening. With this in mind, we recall the following definition of the universal frequency from <cit.>.
Suppose that T is as in Assumption <ref> and let {(s_k,t_k]}_{k=j_0}^J be a sequence of intervals of flattening with coinciding endpoints (i.e. s_k = t_{k+1} for k=j_0,…,J-1), with corresponding center manifolds ℳ_k and normal approximations N_k. For r ∈ (s_J, t_{j_0}], let
𝐈(r) := 𝐈_{N_k}(r/t_k) 1_{(s_k,t_k]}(r),
𝐇(r) := 𝐇_{N_k}(r/t_k) 1_{(s_k,t_k]}(r),
𝐃(r) := 𝐃_{N_k}(r/t_k) 1_{(s_k,t_k]}(r),
𝐋(r) := 𝐋_{N_k}(r/t_k) 1_{(s_k,t_k]}(r).
We refer to 𝐈 as the universal frequency function, whenever it is well-defined.
We have the following frequency BV estimate on the universal frequency function, which is not only a crucial tool for the proof of Theorem <ref>, but will also be useful in its own right in establishing the rectifiability of the points with singularity degree strictly larger than 1 in Part 2.
There exist γ_4 = γ_4(m,n,Q) > 0 and C = C(m,n,Q) > 0 such that the following holds. Let {(s_k,t_k]}_{k=j_0}^J be a sequence of intervals of flattening with coinciding endpoints. Then log(𝐈 + 1) ∈ BV((s_J, t_{j_0}]) with the quantitative estimate
|[d log(𝐈+1)/dr]_-|((s_J, t_{j_0}]) ≤ C ∑_{k=j_0}^{J} m_{0,k}^{γ_4}.
In addition, if (a,b] ⊂ (s_k,t_k] for some interval of flattening (s_k,t_k], we have
|[d log(𝐈+1)/dr]_-|((a,b]) ≤ C (b/t_k)^{γ_4} m_{0,k}^{γ_4}.
§ COARSE BLOW-UPS
It will be convenient to consider an alternative type of blow-up to a fine blow-up, avoiding reparameterization to center manifolds; we follow the setup of <cit.>. Consider a blow-up sequence of radii r_k and the associated sequence T_{0,r_k} of rescaled currents. We may assume without loss of generality that T_{0,r_k} ⇀ Q⟦π_0⟧ in 𝐁_4.
Compared to the set up of <cit.>, we are taking M = 1/2, which is sufficient for our purposes here.
Notice that for r̅_k := r_k/t_k, in light of the stopping condition <cit.> for the intervals of flattening, we have _L⊂_2r̅_k for any Whitney cube L∈^(j(k)) with L∩B̅_r̅_k(π_0)≠∅ (see <cit.>). Let ϖ_k denote a sequence of planes such that (T_0,r_k,_4r̅_k,ϖ_k) = (T_0,r_k,_4r̅_k). Observe that for k sufficiently large, the height bound <cit.> guarantees that
(T_0,r_k, _2,ϖ_k) ≤(T_0,r_k, _4) =: E_k → 0.
In particular, ϖ_k →π_0 (locally in Hausdorff distance). By replacing T_0,r_k by its pushforward under a rotation mapping ϖ_k to a plane parallel to π_0, which is converging to the identity, we may therefore assume that ϖ_k = π_0.
We may then ensure that for all k sufficiently large we have E_k < ε_1, where ε_1 > 0 is the threshold of <cit.>, which in turn yields a sequence of Lipschitz approximations
f_k : B_{1/2}(π_0) → 𝒜_Q(π_0^⊥)
for T_0,r_k. Define the normalizations
f̅_k := f_k/E_k^1/2.
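Let us briefly record why this normalization is the natural one; this is only a sketch, relying on the standard estimate from the Lipschitz approximation cited above, namely that the Dirichlet energy of f_k is controlled by the excess, Dir(f_k; B_1/2(π_0)) ≤ C E_k with C = C(m,n,Q). With that bound at hand,
∫_B_1/2(π_0) |Df̅_k|^2 = E_k^-1∫_B_1/2(π_0) |Df_k|^2 ≤ C,
so the Dirichlet energies of the normalized maps are uniformly bounded; combined with the corresponding L^2 control (which we do not re-derive here), this is what allows the compactness results invoked below to apply.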
We will work under the additional assumption that
dω_C^1,κ_0^2 r_k^2-2δ_3 = o(E_k) ,
for a fixed choice of parameter δ_3 ∈ (0, δ_2).
Notice that this is a slightly stronger hypothesis than the corresponding assumption <cit.>. The reason for asking for merely almost-quadratic scaling on the left-hand side of (<ref>) will become apparent in Part <ref>; see Remark <ref>.
Note that (<ref>) need not necessarily hold in general, but we will only need to consider cases where it is indeed true.
In light of <cit.>, we may thus conclude that up to extracting a subsequence, there exists a Dir-minimizer f̅: B_1(π_0) →_Q(π_0^⊥) with f̅(0) = Q 0 such that
f̅_k →f̅ in W^1,2_∩ L^2(B_1(π_0)).
Recalling <cit.>, we refer to such a map f̅ as a coarse blow-up of T at 0, and we say that f̅ is non-trivial if it is not identically equal to Q 0. We in turn define the average free part
v(x) := ∑_i=1^Q ⟦ f̅_i(x) - η∘f̅(x) ⟧
for f̅. As usual, one may analogously define a coarse blow-up and its average-free part at another point x∈(T) under the assumption (<ref>).
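Here and in what follows, η∘f̅ denotes (as is standard for Q-valued maps) the average of the sheets of f̅; with this convention, the definition above unravels as
η∘f̅(x) = (1/Q)∑_i=1^Q f̅_i(x),   v(x) = ∑_i=1^Q ⟦ f̅_i(x) - η∘f̅(x) ⟧,
so that η∘ v ≡ 0, and f̅ ≡ Q⟦η∘f̅⟧ precisely when v ≡ Q⟦0⟧.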
Observe that unlike for a fine blow-up, it could be that f̅≡ Qη∘f̅. Indeed, one may construct examples of such behavior from holomorphic curves in ^2 that are of the form {(w,z): w^Q = z^p} for non-integer ratios p/Q larger than 2 (see e.g. <cit.>, and <cit.>).
We have the following frequency lower bound for coarse blow-ups, which follows from a Hardt-Simon type estimate (see <cit.>).
Let T be as in Assumption <ref>. If f̅ is a non-trivial coarse blow-up and v is its average-free part, then I_f̅(0)≥ 1 and if v is not identically zero, then I_v(0)≥ 1.
We refer the reader to <cit.> for the proof of this, with the observation that the only differences in the argument therein are
* as in <cit.>, the fact that f̅ is non-trivial is equivalent to the existence of a radius ρ>0 and a constant c̅>0 such that
lim inf_k→∞(T_0,r_k,_ρ,π_k)/E_k≥c̅.
* in the proof of the preliminary lemma <cit.>, the estimate (9) therein is replaced with (<ref>) here, and the Lipschitz approximation and surrounding estimates are instead taken from <cit.>;
* the estimate (15) therein for a semicalibrated current follows from <cit.> in place of the classical monotonicity formula for mass ratios for stationary integral varifolds (namely, there is the presence of a higher order error term, which vanishes as the inner radius is taken to zero).
In light of Theorem <ref>, the validity of the lower bound on the singularity degree in Theorem <ref> is therefore reduced to the following.
Suppose that T is as in Assumption <ref>. Let r_k ∈ (s_j(k),t_j(k)] be a blow-up sequence of radii with
lim inf_k→∞s_j(k)/r_k >0.
Then (<ref>) holds, and the coarse blow-up f̅ along (a subsequence of) r_k is well-defined. Moreover, for the average-free part v of f̅ and a corresponding fine blow-up u along (a further subsequence of) r_k, we have
v=λ u for some λ >0.
In particular, I_u(0) ≥ 1.
In the case where (<ref>) fails, we might have that v is trivial while u is not, precisely because of the possibility that f̅ = Qη∘f̅ in this case; see the above discussion.
Before discussing the proof of Proposition <ref>, let us point out the following result, which follows a posteriori as a simple consequence of the conclusions of Theorem <ref>, in light of the classification of homogeneous harmonic functions. This corollary will be exploited in Part 3.
Let T be as in Assumption <ref> and suppose that (T,0) < 2-δ_2. Then, assuming the conclusions of Theorem <ref>, any coarse blow-up f̅ at 0 is non-trivial, average-free and (T,0)-homogeneous. Moreover, for each γ > 2((T,0) - 1) we have the lower decay bound
lim inf_r↓ 0(T,_r)/r^γ > 0,
and there exists r_0= r_0(Q,m,n, T)>0 such that
(T,_r) ≥(r/s)^γ(T,_s) ∀ r < s < r_0.
Observe that given the conclusions (ii) and (vii) of Theorem <ref>, the proof of Corollary <ref> is exactly the same as that of <cit.>, when combined again with the aforementioned observation that T_0,r_j is r_jdω_C^0-minimal in _7√(m), which allows one to verify that the property (<ref>) is preserved under rescalings of blow-up sequences (see <cit.>). We thus omit the details here.
§.§ Proof of Proposition <ref>
The proof of Proposition <ref> follows reasoning analogous to that of <cit.>.
First of all, recall the following lemma from <cit.>. Note that it does not rely on any properties of the center manifold other than its regularity, and so it clearly remains unchanged in this setting.
There are constants κ=κ (m,n,Q)>0 and C=C (m,n,Q)>0 with the following property. Consider:
* A Lipschitz map g: ℝ^m ⊃ B_2 →𝒜_Q (ℝ^n) with g_C^0 + Lip (g) ≤κ;
* A C^2 function φ : B_2 →ℝ^n with φ (0) = 0 and Dφ_C^1≤κ;
* The function f (x) = ∑_i φ (x) + g_i (x) and the manifold ℳ := {(x, φ(x))};
* The maps N, F: ℳ∩_3/2→𝒜_Q (ℝ^m+n) given by <cit.>, satisfying F (p) = ∑_i p+ N_i (p), N_i(p)⊥ T_p ℳ, and T_F _5/4 = _f _5/4.
If we denote by g̃ the multi-valued map x↦g̃ (x) = ∑_i (0, g_i (x))∈𝒜_Q (ℝ^m+n), then
𝒢 (N (φ (x)), g̃ (x)) ≤ C D φ_C^0 (g_C^0 + D φ_C^0) ∀ x∈ B_1 .
In addition, we also have the same comparison estimates as in <cit.>:
Suppose that the assumptions of Proposition <ref> are satisfied along a sequence of radii r_k. Then (<ref>) holds, and
(i) for _k = N̅_k_L^2(_3/2) with N̅_k as in (<ref>), we have
0 < lim inf_k→∞_k^2/E_k≤lim sup_k→∞_k^2/E_k < +∞;
(ii) for f_k defined as in (<ref>) and the map _k on B_2(0,π_k) such that
(_k)∩_3/2(0,π_k) = ι_0,r_k/t_j(k)(_k),
we have
∫_B_3/2|_k- η∘ f_k|^2 = o(E_k).
Observe that the argument in the proof of Lemma <ref> remains completely unchanged from that of <cit.>, after replacing the application of the relevant preliminary results from <cit.> with their analogues in <cit.>. In particular, we emphasize the following:
(1) Given the estimate <cit.>, which remains valid herein since the properties of the Whitney decomposition remain unchanged, we conclude that (<ref>) holds for any δ_3∈ (0,δ_2).
(2) The estimates <cit.> are replaced by <cit.> respectively; namely, despite the constructions of the respective center manifolds for area-minimizing integral currents and semicalibrated currents being different, the relevant comparison and derivative estimates are still satisfied.
With Lemma <ref> and Lemma <ref> at hand, the proof of Proposition <ref> is exactly the same as that in <cit.>.
§ IMPROVED FREQUENCY LOWER BOUND
This section is dedicated to the proof of the lower bound (T,0) ≥ 1 in Theorem <ref>(i). This may be equivalently restated as follows.
Suppose that T satisfies Assumption <ref>. Then I_u(0) ≥ 1 for any fine blow-up u.
The proof of Theorem <ref> follows very similar reasoning to <cit.>, now that we have Theorem <ref>, which generalizes <cit.> to the semicalibrated setting. Nevertheless, we repeat the details here.
§.§ Proof of Theorem <ref>
Let r_k ∈ (s_j(k),t_j(k)] be a blow-up sequence of radii which generates a fine blow-up u. Up to extracting a subsequence, we have three cases:
(a) there exists J ∈ such that s_J = 0 and {r_k}⊂ (0,t_J];
(b) #{j(k):k∈} = ∞ and lim_k→∞s_j(k)/r_k = 0;
(c) #{j(k):k∈} = ∞ and lim_k→∞s_j(k)/r_k > 0.
Case (a): Let , N denote respectively the center manifold and normal approximation associated to the interval of flattening (0,t_J], and let us omit dependency on N_J for and related quantities.
In light of the almost-monotonicity (<ref>) of Theorem <ref>, the limit I_0 := lim_r↓ 0(r) exists and lies in [c_0, ∞). Furthermore, the strong W^1,2_∩ L^2-convergence of u_k to u as in Section <ref> implies that I_u(0) = I_u(r) ≡ I_0 for each r∈ (0,3/2). It therefore remains to check that I_0 ≥ 1.
First of all, observe that the stopping criteria for the intervals of flattening (see e.g. <cit.>) guarantee that
(r) ≤ Cr^m+2-2δ_2 ∀ r∈ (0,1].
Together with <cit.>, we additionally obtain
(r) ≤ Cr^m+3-2δ_2 ∀ r∈ (0,1].
On the other hand, observe that since (r) ≥I_0/2 for every r>0 sufficiently small, the estimates <cit.> can be rewritten as
|∂_rlog((r)/r^m-1) - 2(r)/r| ≤Cr^γ_3(r)/r ∀ r∈ (0,r_0],
for some r_0 = r_0(I_0)>0 sufficiently small. Here, γ_3 is as in <cit.>. In particular, given >0, we have
2 I_0 - /r≤∂_rlog((r)/r^m-1) ≤2 I_0 + /r ∀ r∈(0,r_1],
for some r_1=r_1() ∈ (0, r_0]. This in turn yields
lim inf_r↓ 0(r)/r^m-1+ 2I_0 +≥(r_1)/r_1^m-1+2I_0 + > 0,
which then further gives the consequence
lim inf_r↓ 0Γ(r)/r^m-2+ 2I_0 + > 0.
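For the reader's convenience, the last implication can be seen as follows (in the notation above, writing H for the denominator in the definition of the frequency, so that 𝐈(r) = rΓ(r)/H(r), and ε > 0 for the parameter fixed above). Since 𝐈(r) ≥ I_0/2 for all sufficiently small r,
Γ(r) = 𝐈(r) H(r)/r ≥ (I_0/2) H(r)/r,
and therefore Γ(r)/r^m-2+2I_0+ε ≥ (I_0/2) H(r)/r^m-1+2I_0+ε, so the positivity of the liminf for Γ follows from the positivity already established for H.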
Recalling that Γ = (r) + (r) and combining with the decay (<ref>), (<ref>), we must therefore have
I_0 ≥ 2-δ_2,
which in particular implies the desired lower bound of 1, in this case.
Case (b): Let u denote the fine blow-up generated by (a subsequence of) r_k. Notice that
I_u(ρ) = lim_k→∞_N_j(k)(ρ r_k/t_j(k)) ∀ρ∈ (0,1],
in light of the strong W^1,2∩ L^2-convergence described in Section <ref>. In particular, since the stopping conditions for the intervals of flattening guarantee that
(T,_s_j(k)) = (T_0,t_j(k),_s_j(k)/t_j(k)) ≤ C_3^2 (s_j(k)/t_j(k))^2-2δ_2⟶ 0 as k →∞,
we deduce that {s_j(k)} is a blow-up sequence of radii. Applying Proposition <ref>, we conclude that the fine blow-up ũ generated by (a subsequence of) s_j(k) satisfies I_ũ(0) ≥ 1. Combining this with (<ref>) (which additionally holds with u replaced by ũ and r_k replaced by s_j(k)), we conclude that
lim inf_k→∞_N_j(k)(s_j(k)/t_j(k)) ≥ 1.
Combining this with the almost-monotonicity (<ref>) of the frequency, we easily conclude that for δ>0 arbitrary, there exists ρ̅>0 such that
lim inf_k→∞_N_j(k)(ρ r_k/t_j(k)) ≥ 1-δ ∀ρ∈(s_j(k)/t_j(k), ρ̅),
from which the desired conclusion follows immediately. See <cit.> for details.
Case (c): In this case, the hypotheses of Proposition <ref> hold, so applying this proposition, we immediately obtain the desired conclusion.
§ FREQUENCY BV ESTIMATE
This section is dedicated to the proof of Theorem <ref>. To begin with, we state a sharper formulation of the variational identities <cit.>. Let T satisfy Assumption <ref>, and let be a center manifold for T with associated normal approximation N. Given x∈, set
_N(x,r) = -1/r∫_ϕ'(d(x,y)/r)∑_i N_i(y)· DN_i(y)∇ d(x,y) dy ,
_N(x,r) = -1/r^2∫_ϕ'(d(x,y)/r) d(x,y)/|∇ d(x,y)|^2∑_i |DN_i(y) ·∇ d(x,y)|^2 d y ,
Σ_N(x,r) = ∫_ϕ(d(x,y)/r)|N(y)|^2 d y .
There exist γ_4 = γ_4(m,n,Q)>0 and C = C(m,n,Q)>0 such that the following holds. Suppose that T satisfies Assumption <ref>. Let (s,t] be an interval of flattening for T around 0 with associated center manifold and normal approximation N and let _0 be as in (<ref>) for this interval. Then _N(0,·), _N(0,·) are absolutely continuous on (s/t,3] and for almost every r∈ (s/t,3] we have
∂_r _N(0,r) = - ∫_ϕ'(d(y)/r) d(y)/r^2 |DN(y)|^2 dy
∂_r _N(0,r) - m-1/r_N (0,r) = O(m_0) _N (0,r) + 2 _N(0,r),
|Γ_N(0,r) - _N(0,r)| ≤∑_j=1^5 |_j^o| ≤ Cm_0^γ_4_N(0,r)^1+γ_4 + Cm_0Σ_N(0,r),
≤ Cm_0^γ_4_N(0,r)^1+γ_4 + C_0 r^2 _N(0,r),
|∂_r _N(0,r) - (m-2) r^-1_N(0,r)- 2_N(0,r)| ≤ 2 ∑_j=1^5 |_j^i| + C m_0_N(0,r)
≤ Cr^-1m_0^γ_4_N(0,r)^1+γ_4 + Cm_0^γ_4_N(0,r)^γ_4∂_r _N(0,r) +Cm_0 _N(0,r),
|_N(0,r)| ≤ C_0^1/4 r _N(0,r), |∂_r_N(0,r)| ≤ C_0^γ_4 (r^-1∂_r_N(0,r) _N(0,r))^γ_4
where _j^o, _j^i are the variational errors as in <cit.>.
The estimates in Proposition <ref> follow by the same reasoning as their weaker counterparts in <cit.>, together with the following observations:
* the error estimates in <cit.> (more precisely, see <cit.>) can be optimized so as to gain a factor of _0^γ_4 on the right-hand side, in light of the following estimates on the geodesic distance d on each center manifold :
* d(x,y) = |x-y| + O(m_0^1/2 |x-y|^2),
* |∇ d(x,y)| = 1 + O(m_0^1/2d (x,y)),
* ∇^2 (d^2) = g + O(m_0 d), where g is the metric induced on ℳ by the Euclidean ambient metric.
* the estimates <cit.> in fact immediately yield the estimates in (<ref>), in light of (<ref>).
These estimates are a simple consequence of the C^3,κ-estimates for each center manifold; see e.g. <cit.>. We therefore omit the details here.
The estimates of Proposition <ref> in turn give rise to the following almost-monotonicity estimate for the frequency relative to a given center manifold.
There exists C=C(m,n,Q)>0 such that the following holds. Suppose that T satisfies Assumption <ref>. Let (s,t], x, , N and γ_4 be as in Proposition <ref>. Then _N(0,·) is absolutely continuous on (s/t,3] and for almost every r∈ (s/t,3] we have
∂_r log (1 +_N(0,r)) ≥ - C_0^γ_4(1 + _N(0,r)^γ_4/r + _N(0,r)^γ_4-1∂_r_N(0,r))
Now observe that <cit.> can in fact be stated for a general manifold that is the graph of a sufficiently regular function as follows, with the proof remaining completely unchanged.
There exists a dimensional constant C = C(m,n,Q) > 0 such that the following holds. Let =(φ_r) be an m-dimensional C^3 submanifold of ^m+n, where φ_r ∈ C^3(B_r(0,π);π^⊥). Let f: B_r(0,π)→_Q(π^⊥) be a Lipschitz map. Then we have
| ∫__r(0,π). .|_f(z) - ∘𝐩(z)|^2ϕ(|𝐩_π (z)|/r) d_f(z) - ∫_B_r(0,π)(Df, Q Dφ_r)^2 dy |
≤ C∫_B_r(0,π) (|Df|^4 + |Dφ_r|^4) ϕ(|y|/r) dy
+C ∫__r(0,π)| (𝐩(z)) - (φ_r(𝐩_π(z))) | d_f(z).
Thus, we immediately deduce the analogue of <cit.> when T is semicalibrated.
There exists a dimensional constant C = C(m,n,Q) > 0 such that the following holds. Let T satisfy Assumption <ref>. Let (s,t] be an interval of flattening for T around 0 with corresponding center manifold and normal approximation N, let m_0 be as in (<ref>) for (s,t]. Let φ be the parameterizing map for over B_3(π) and let f: B_1(π) →_Q(π^⊥) be a π-approximation for T_0,t in _4(0,π) according to <cit.>. For r̅=s/t, let f_L: B_8r_L(p_L,π_L) →_Q(π_L^⊥) be a π_L-approximation for T_0,t corresponding to a Whitney cube L as in <cit.>. Let π_r̅ be such that (T_0,t,_6√(m)r̅) = (T_0,t,_6√(m)r̅, π_r̅) and let B^L := B_8r_L(p_L,π_L). Let f_r̅:B_r̅(0,π_r̅) →_Q(π_r̅^⊥) be the map reparameterizing gr (f_L) as a graph over π_r̅ and let φ_r̅, φ_L be the maps reparameterizing (φ) as graph over π_r̅, π_L respectively. Then we have
| ∫_B_1(0,π). .(Df, Q Dφ)^2ϕ(|y|) d y - ∫__1 ∩ |DN|^2 ϕ(d(y)) d y |
≤ C∫_B_1(0,π) (|Df|^4 + |Dφ|^4)d y + Cm_0^1+γ_2 + C ∫__1 ∩ (|_|^2|N|^2 + |DN|^4)
+C ∫__1(0,π)| (𝐩(z)) - (φ(𝐩_π(z))) | d_f(z),
and
| ∫_B_r̅(0,π_r̅). .(Df_r̅, Q Dφ_r̅)^2ϕ(|y|/r̅) d y - ∫__r̅∩ |DN|^2 ϕ(d(y)/r̅)d y |
≤ C∫_B_r̅(0,π_r̅) (|Df_r̅|^4 + |Dφ_r̅|^4) d y + C∫_B^L (|Df_L|^4 + |Dφ_L|^4) d y
+ Cm_0^1+γ_2r̅^m+2+γ_2 + C ∫_^L (|_|^2|N|^2 + |DN|^4)
+C ∫__r̅(0,π_r̅)| (𝐩(z)) - (φ(𝐩_π_r̅(z))) | d_f_r̅(z),
where _ denotes the second fundamental form of and γ_2 is as in <cit.>.
In light of the estimates of Corollary <ref>, we further wish to control the difference between an orthogonal projection to a center manifold and the image on that center manifold of an orthogonal projection to a plane over which it is parameterized.
There exists a constant C=C(m,n,Q) > 0 such that the following holds. Suppose that T, , m_0, r̅, f, f_r̅, π, π_r̅, φ_r̅, γ_2 are as in Corollary <ref>. Then we have
∫__r̅(0,π_r̅)| (𝐩(z)) - (φ_r̅(𝐩_π_r̅(z))) | d_f_r̅(z) ≤ Cr̅^m+1m_0^1+γ_2,
∫__1(0,π)| (𝐩(z)) - (φ(𝐩_π(z))) | d_f(z) ≤ Cm_0^1+γ_2.
The proof of Lemma <ref> follows exactly as that of <cit.>, exploiting the estimates of <cit.> and <cit.> in place of their respective analogues in <cit.>.
We further have the following comparison estimate between neighboring center manifolds.
There exists a constant C=C(m,n,Q) > 0 such that the following holds. Suppose that T satisfies Assumption <ref>. Let _k-1, _k be successive center manifolds for T associated to neighboring intervals of flattening (t_k,t_k-1] and (t_k+1, t_k] around 0. Let _k-1,_k denote their respective parameterizing maps and let N_k-1,N_k denote their normal approximations. Assume that (T_0,t_k,_6√(m),π_k) = (T_0,t_k,_6√(m)) for some plane π_k and let φ̃_k-1 be the map reparametrizing (φ_k-1) as a graph over π_k.
Letting φ̃_k := φ̃_k-1(t_k/t_k-1·), we have
∫_B_1 |Dφ_k - Dφ̃_k|^2 ≤ C m_0,k^3/2.
and
∫_B_2 |φ_k - φ̃_k|^2 ≤ C m_0,k .
The proof follows the same reasoning as that of <cit.>. Nevertheless, let us provide an outline here and highlight the differences. First of all, observe that by a rotation of coordinates, we may without loss of generality assume that π_k-1=π_k≡π_0, in which case φ̃_k-1 = φ_k-1.
Let η∈ C_c^∞(B_2(π_0);[0,1]) be a cutoff function satisfying η≡ 1 on B_1. Via an integration by parts, we have
∫_B_1 |Dφ_k - Dφ̃_k|^2 ≤∫_B_2 |Dφ_k - Dφ̃_k|^2 η
= -∫_B_2 (φ_k - φ̃_k)ηΔ (φ_k - φ̃_k) - ∫_B_2∖ B_1 Dη· (φ_k - φ̃_k) D(φ_k - φ̃_k)
≤ C(m_0,k^1/2 + t_k/t_k-1m_0,k-1^1/2) ∫_B_2 |φ_k - φ̃_k|.
Now recall that the construction procedure for the intervals of flattening guarantees that
(t_k/t_k-1)^2-2δ_2m_0,k-1≤ C _0,k.
Thus, (<ref>) follows from (<ref>), and so it suffices to demonstrate the latter. Given a Lipschitz approximation f_k:B_3(π_0)→_Q(π_k^⊥) for T_0,t_k_4(0,π_0) as in <cit.>, which can indeed be considered for _3 sufficiently small since (T_0,t_k,_4(0,π_0)) ≤ C_0,k, observe that it suffices to show
∫_B_2 |_k - η∘ f_k| ≤ C_0,k,
∫_B_2 |_k - η∘ f_k| ≤ C_0,k.
In fact, notice that (<ref>) will follow from exactly the same argument as (<ref>), when combined with (<ref>). Indeed, this is due to the fact that for f̃_k:=f_k-1(t_k/t_k-1·) and f_k-1 as above but for T_0,t_k-1_4(0,π_0), we have _f_k≡_f̃_k≡ T_0,t_k on K×π_0^⊥ for a closed set K⊂ B_2 with
|B_2∖ K| ≤ C_0,k^1+β_0,
where β_0>0 is as in <cit.>.
Now let us demonstrate the validity of (<ref>). By the construction of _k, B_2 is covered by a disjoint union of the contact set Γ⊂π_0 and Whitney cubes ':={L∈: L∩ B_2≠∅}.
Therefore we have
∫_B_2 |_k -η∘ f_k|≤∫_Γ∩ B_2 |_k -η∘ f_k|_(A) + ∑_L∈'∫_L∩ B_3 |_k -η∘ f_k|_(B).
Firstly, we have
|(A)| ≤ C _0,k^1+β_0,
due to (<ref>), together with the fact that Γ⊂ K (see <cit.>) and the estimates <cit.>.
Meanwhile, for (B), we argue as follows. For each L∈', let π_L denote the optimal plane associated to L as in <cit.>, with corresponding π_L-approximation f_L, associated tilted L-interpolating function h_L as in <cit.> and (straight) L-interpolating function g_L. Note that h_L (and hence g_L) are constructed via a different smoothing procedure here, in comparison to that in <cit.> where T is area-minimizing. More precisely, here h_L is constructed by solving a suitable PDE with boundary data η∘ f_L, while in <cit.> it is constructed via convolution of η∘ f_L. Nevertheless, by <cit.>, we still have the key estimates
∫_L |_k - g_L| ≤ C_0,kℓ(L)^m+3+β_2/3,
for β_2>0 as in <cit.>, and
∫_B_2√(m)ℓ(L)(p_L,π_L) |h_L - η∘ f_L| ≤ C _0,kℓ(L)^m+3+β_2,
where p_L is the center of L and ℓ(L) is the side-length of L. Combining these with a reparameterization from π_0 to π_L and the tilting estimate <cit.>, then summing over L∈', the conclusion follows; see <cit.> for the details.
§.§ Proof of Theorem <ref>
With all of the preliminary estimates of this section at hand, we are now in a position to conclude the BV estimate of Theorem <ref>. Let _k:=_N_k, _k:=_N_k, _k:=_N_k, Γ_k and let _k := r^-(m-2)_k, _k := r^-(m-1)_k, _k := r^-(m-2)_k , Γ̅_k:= _k + _k denote their respective scale-invariant quantities. Let us first consider the jumps of at the radii t_k. We have
|(t_k^+) - (t_k^-)| ≤|_k-1(t_k/t_k-1) - _k(1)/_k(1)| + |_k-1(t_k/t_k-1) - _k(1)/_k(1)|
+ |Γ̅_k-1(t_k/t_k-1)| |1/_k-1(t_k/t_k-1) - 1/_k(1)|,
where (t_k^+):= _k-1(t_k/t_k-1) and (t_k^-) := _k(1).
Now in light of (<ref>) and (<ref>), we have
|_k-1(t_k/t_k-1) - _k(1)/_k(1)| ≤ C_0,k^1/4|_k-1(t_k/t_k-1) - _k(1)/_k(1)|,
and
|Γ̅_k-1(t_k/t_k-1)| |1/_k-1(t_k/t_k-1) - 1/_k(1)| ≤ C_0,k^1/4_k-1(t_k/t_k-1) |1/_k-1(t_k/t_k-1) - 1/_k(1)|.
Thus, by the exact same reasoning as that for the estimates <cit.>, we obtain
|(t_k^+) - (t_k^-)| ≤ C _0,k^γ_4(1+ (t_k^+)).
When combined with the elementary identity log w ≤ w-1 for w>0, we obtain
|log(1+ (t_k^+)) - log(1+ (t_k^-))| ≤|(t_k^+) - (t_k^-)|/(1+ (t_k^+))≤ C_0,k^γ_4
On the other hand, recall from Corollary <ref> that |_(s_k,t_k) is absolutely continuous and
∂_r log(1+(r)) ≥ -C/t_k_0,k^γ_4(1+ (r/t_k)^-1_k(r/t_k)^γ_4 + _k(r/t_k)^γ_4-1∂_r (r/t_k))_=: ν_k(r) ∀ r∈(s_k,t_k).
Thus, we may introduce a suitable function Ω as in <cit.>, whose distributional derivative is the measure
C∑_k=j_0^J _0,k^γ_4(δ_t_k + ν_k(r)1_(s_k,t_k)^1),
so that in addition log( + 1) +Ω is monotone non-decreasing. Since
|∂_r Ω|((s_J,t_j_0]) ≤ C∑_k=j_0^J _0,k^γ_4,
and |[∂_rlog(+1)]_-|((s_J,t_j_0]) ≤ |∂_r Ω|((s_J,t_j_0]), the proof is complete.
§ PROOF OF THEOREM <REF>
Now that we have demonstrated that Theorem <ref> holds, we are in a position to complete the proof of the remaining conclusions (ii)-(vii) of Theorem <ref>. The starting point is the following tilt excess decay result.
For any I_0> 1, there exist constants C=C(I_0,m,n,Q)>0, α=α(I_0,m,n,Q)>0 such that the following holds. Let T satisfy Assumption <ref> and suppose that (T,0)≥ I_0. Then there exists r_0=r_0(I_0,m,n,Q,T) > 0 (depending also on the center point 0) such that
𝐄 (T, 𝐁_r) ≤ C (r/r_0)^αmax{𝐄 (T, 𝐁_r_0), ε̅^2 r_0^2-2δ_2} ∀ r∈ (0,r_0) .
Furthermore, if C is permitted to additionally depend on α, one may choose α to be any positive number smaller than min{2((T,0)-1),2-2δ_2}.
The proof of this, and all its necessary preliminaries, is the same as that of <cit.>.
Observe that a combination of the excess decay in Proposition <ref> and the universal frequency BV estimate of Theorem <ref> immediately implies the conclusions (iii), (v), (vi) of Theorem <ref>, together with (ii), (iv) in the case when (T,0)>1. Indeed, observe that Proposition <ref> guarantees that if (T,0)>1, there exists an index j_0=j_0(r_0) large enough such that t_k+1=s_k for each k≥ j_0. Combining this with Theorem <ref> and the observation that t_k+1/t_k≤ 2^-5, we obtain a uniform BV estimate of the form
|[dlog(+1)/dr]_-|((0,t_j_0]) ≤ C ∑_k=j_0^∞_0,k^γ_4≤ C ∑_k=j_0^∞ 2^-5αγ_4(k-j_0)_0,j_0^γ_4≤ C _0,j_0^γ_4.
for α=α((T,0),m,n,Q)>0 as in Proposition <ref>, where C=C((T,0),m,n,Q). In particular I_0:=lim_r↓ 0(r) exists, and
I_u(ρ) = I_0 ∀ρ∈(0,1],
for every fine blow-up u. Note that in particular, if s_j_0=0 for some j_0∈, Corollary <ref> alone provides the desired conclusion that I_0=lim_r↓ 0_N_j_0(r).
It remains to verify the conclusions (vi), (vii) of Theorem <ref>, as well as the conclusions (ii) and (iv) when (T,0)=1. These all follow by the same reasoning as <cit.>, with the use of the Lipschitz approximation <cit.> in place of <cit.> where needed, so we do not include the details here.
PART:
Rectifiability of points with singularity degree > 1
§ SUBDIVISION
In this part, we prove Theorem <ref>. We will work under Assumption <ref> throughout. We will be exploiting the results of the preceding part, centered around points x∈_Q(T) (namely, applying the results therein to T_x,1).
It will be useful to produce a countable subdivision of the set _Q(T) as follows. First of all, we may write
_Q(T) ∩_1 = ⋃_K ∈_K,
for
_K := {y∈_Q (T) : (T, y)≥ 1+2^-K}∩𝐁̅_1 .
In light of Proposition <ref>, we will further decompose each piece _K based on the initial scale r_0. Let us rewrite the statement of this proposition applied to points in _K. Note that we may ensure that C=1, up to further decreasing r_0 if necessary (dependent on the exponent α).
Let T be as in Assumption <ref>, let K∈ and let x∈_K. For μ=2^-K-1, there exists r_0=r_0(x,m,n,Q,K)>0 such that
𝐄 (T, 𝐁_r(x)) ≤(r/s)^2μmax{𝐄 (T, 𝐁_s(x)), ε̅^2 s^2-2δ_2} ∀ 0<r< s < r_0 .
We may thus decompose _K as follows.
Let T be as in Assumption <ref> and let _4 be a small positive constant which will be specified later. For every K∈ℕ∖{0} define μ = μ (K) := 2^-K-1. Given K, J∈ℕ, let _K,J (which implicitly also depends on _4) denote the collection of those points x∈_K for which
𝐄 (T, _r (x)) ≤(r/s)^2μ𝐄 (T, _s (x))
∀ 0<r≤ s≤6√(m)/J
𝐄 (T, _6 √(m) J^-1) ≤ε_4^2.
Observe that for each K,J∈, the set _K,J is closed, in light of upper semicontinuity of the singularity degree.
Notice that by rescaling, it suffices to prove the (m-2)-rectifiability of _K,J with K∈ fixed and J=1. More precisely, the remainder of this part will be dedicated to the proof of the following.
There exists _4(m,n,Q)>0 such that the following holds. Let T be as in Assumption <ref>. Then := _K,1 (which, recall, depends on _4) is (m-2)-rectifiable and has the (m-2)-dimensional local Minkowski content bound
|_r()| ≤ Cr^n+2 ∀ r∈ (0,1],
for some C=C(m,n, Q, T,K,γ_4,_4)>0.
The constant C in Theorem <ref> is implicitly also dependent on a uniform bound on Almgren's frequency function _x,k over all k∈, defined relative to center manifolds that will be adapted to a given geometric sequence of scales; see Corollary <ref> below.
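We also recall, for the reader's convenience, the standard consequence of such a content bound (a purely dimensional fact, not specific to the present setting): since the set in Theorem <ref> is contained in ^m+n and its r-neighborhood has volume at most Cr^n+2 = Cr^(m+n)-(m-2), one has, up to a dimensional constant,
ℋ^m-2() ≤ C(m,n) liminf_r↓ 0 r^-(n+2) |_r()| < ∞,
so that in particular has locally finite ℋ^m-2-measure and upper Minkowski dimension at most m-2.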
§ ADAPTED INTERVALS OF FLATTENING, UNIVERSAL FREQUENCY, RADIAL VARIATIONS
In order to prove Theorem <ref>, we follow a strategy closely analogous to that in <cit.>, relying on the celebrated rectifiable Reifenberg techniques of Naber & Valtorta <cit.>. We begin by decomposing the interval of scales (0,1] around each point x∈ into countably many sub-intervals whose endpoints are given by a fixed geometric sequence, construct a center manifold for each of them, and then use it to compute a corresponding frequency function. In <cit.>, since we treat separately the points for which there are finitely many intervals of flattening, these sub-intervals are comparable in length to the intervals of flattening. Here, however, we treat the points with finitely many intervals of flattening together with those points that have infinitely many intervals of flattening. We may do this by “artificially" stopping and restarting the center manifold procedure around each point x∈_Q(T) with (T,x)> 2-δ_2, and simply setting the new center manifold to be the rescaling of the existing one, if there is no need to change the center manifold at a given endpoint of the chosen geometric sequence of scales. We will then define a corresponding universal frequency function as in Definition <ref>, but relative to this fixed geometric sequence of scales, in place of the original intervals of flattening. The frequency variation estimates and quantitative BV estimate in the preceding part, together with the excess decay of Proposition <ref>, will be key to providing us with the necessary quantitative bounds on the points in .
§.§ Center manifolds
Let us begin by adapting the intervals of flattening from Section <ref> around each point in , to a given fixed geometric sequence of scales.
Fix a constant γ∈ (0,1/2], whose choice will be specified later, depending only on m, n, and Q.
Consider a point x∈ with corresponding intervals of flattening {(t_k+1,t_k]}_k≥ 0 that have associated center manifolds _x,k and normal approximations N_x,k, together with a geometric blow-up sequence of scales {γ^j}_j≥ 0.
For j=0, let _x,0 =_x,0 and Ñ_x,0 = N_x,0. For j=1, if γ lies in the same interval of flattening as t_0=1, let
_x,1 := ι_0,γ(_x,0), Ñ_x,1(x) := N_x,0(γ x)/γ.
Otherwise, let _x,1 be the center manifold associated to T_x,γ_6√(m), with corresponding normal approximation N_x,1.
For each j≥ 2, define _x,j inductively as follows. If γ^j lies in the same interval of flattening as γ^j-1, let
_x,j := ι_0,γ(_x,j-1), Ñ_x,j(x) := N_x,j-1(γ x)/γ.
Otherwise, let _x,j be the center manifold associated to T_x,γ^j_6√(m), with corresponding normal approximation N_x,j.
It follows from Definition <ref> that around any x∈, we may replace the procedure in Section <ref> with the intervals (γ^k+1,γ^k] in place of (s_k,t_k], and with m_0,k therein instead defined by
m_x,k = 𝐄 (T_x, γ^k, _6√(m)) =
𝐄 (T, _6√(m)γ^k (x)) .
Observe that in particular, if x∈ originally has finitely many intervals of flattening with (0,t_j_0] being the final interval, it will nevertheless have infinitely many adapted intervals of flattening, but for all k sufficiently large, _x,k and Ñ_x,k arise as rescalings of _x,j_0 and N_x,j_0 respectively.
Abusing notation, let us henceforth simply write _x,k for the center manifold constructed above (dropping the tildes), with its corresponding normal approximation N_x,k.
We will henceforth denote by d the geodesic distance on the center manifold ℳ_x,k, which is in fact dependent on x and k. However, since this dependence is not important and it will always be clear from context which center manifold we are taking the geodesic distance on, we will omit it. Let π_x,k denote the plane used to construct the graphical parametrization φ_x,k of the center manifold _x,k and let 𝒲^x,k denote the collection of Whitney cubes associated to _x,k as in <cit.>. Note that the center manifold ℳ_x,k does not necessarily contain the origin 0 = ι_x, γ^-k (x). However we use the point p_x,k:= (0, φ_x,k (0)) ∈π_x,k×π_x,k^⊥ as a proxy for it.
Fix η∈ (0,1/2], to be determined later. We observe the following simple consequence of our adapted intervals of flattening and associated center manifolds.
Let γ, η>0 be two fixed constants and let c_s = 1/64√(m). Upon choosing N_0 in <cit.> sufficiently large and adjusting the constants C_e, C_h and ε_2 in <cit.> accordingly, we can ensure that for every w∈ℳ_x,k and every r∈ [ηγ, 3], any L∈𝒲^x,k with L∩ B_r (𝐩_π_x,k (w), π_x,k)≠∅ satisfies ℓ (L) ≤ c_s r. Moreover, we have the following dichotomy. Either
(a) there is a positive constant c̅_s= c̅_s(K,m,n,Q,η) ∈ (0, c_s] such that B_γ (0, π_x,k) intersects a cube L∈𝒲^x,k with ℓ (L) ≥c̅_s γ, which violates the excess condition (EX) of <cit.>;
(b) _x,k+1= ι_0,γ(_x,k).
The proof follows immediately from the construction.
We have the following estimate on the universal frequency function adapted to {γ^k}_k (cf. Theorem <ref>).
There exists (m,n,Q)∈]0,] such that for any _4∈ (0,], there exists C=C(m,n,Q,γ_4, K,γ) such that the following holds for every x∈:
|[dlog (1 + (x,·))/dr]_- |([0, 1]) ≤ C∑_k m_x,k^γ_4≤ C_x,1^γ_4
Observe that in light of Proposition <ref>, the estimate in Proposition <ref> follows by the same argument as that in Theorem <ref>. Indeed, for any k such that γ^k lies in the same original interval of flattening as γ^k-1, one may estimate the jump
|log (1 + (x,(γ^k)^+)) - log (1+ (x, (γ^k)^-))|
due to Proposition <ref>(a) (see <cit.>). Meanwhile, if instead _x,k+1= ι_0,γ(_x,k) holds, the adapted universal frequency function is absolutely continuous on (γ^k+1,γ^k-1], and we instead simply use the variation estimate of Corollary <ref>.
§.§ Universal frequency function
For each center manifold ℳ_x,k, we define the frequency function for the associated normal approximation, as in Part <ref>. We define analogous quantities to those therein, namely
_x,k(w,r) := ∫__x,k |D N_x,k(z)|^2 ϕ(d(w,z)/r) dz;
_x,k(w,r) := - ∫__x,k|∇ d(w,z)|^2/d(w,z) |N_x,k (z)|^2ϕ'(d(w,z)/r) dz;
_x,k(w,r) := ∑_i=1^Q ∑_l=1^m (-1)^l+1∫__x,k⟨ D_ξ_l (N_x,k)_i(z) ∧ξ̂_l(z) ∧ (N_x,k)_i(z), dω(z) ⟩ϕ(d(w,z)/r) d^m(z);
Γ_x,k(w,r) := _x,k(w,r)+ _x,k(w,r);
_x,k(w,r) := r Γ_x,k(w,r)/_x,k(w,r) .
We further define the quantities
_x,k(w,r) := -1/r∫__x,kϕ'(d(w,z)/r)∑_i (N_x,k)_i(z)· D(N_x,k)_i(z)∇ d(w,z) dz;
_x,k(w,r) := -1/r^2∫__x,kϕ'(d(w,z)/r) d(w,z)/|∇ d(w,z)|^2∑_i |D( N_x,k)_i(z) ·∇ d(w,z)|^2 dz;
Σ_x,k(w,r) := ∫__x,kϕ(d(w,z)/r)|N_x,k(z)|^2 dz.
We are now in a position to introduce the universal frequency function adapted to the geometric sequence {γ^k}_k, analogously to that defined in the preceding part.
For r ∈ (γ^k+1, γ^k] and x∈, define
(x, r) := _x,k(p_x,k,r/γ^k),
(x,r) := _x,k(p_x,k, r/γ^k),
(x,r) := _x,k(p_x,k, r/γ^k),
(x,r) := _x,k(p_x,k, r/γ^k).
§.§ Radial frequency variations
As an immediate consequence of the total variation estimate and the fact that is closed, we infer the existence of a uniform upper bound for the frequency 𝐈 (x,r) over all x∈. We also infer the existence of the limit 𝐈 (x,0) = lim_r↓ 0𝐈 (x,r). We can then argue as in Part <ref> to show that 𝐈 (x,0) = (x,0) ≥ 1+ 2^-K. In turn, upon choosing ε̃ sufficiently small we infer the following.
For as in Proposition <ref> and any _4∈ ]0,], there exists C=C(m,n,Q,γ_4,K,γ,) such that the following holds:
1+ 2^-K-1≤𝐈 (x,r) ≤ C ∀ x∈, ∀ r ∈ ]0, 1] .
By a simple contradiction and compactness argument, we obtain the same consequence as that in Corollary <ref> for points sufficiently close to at the appropriate scales.
There exists ^*∈(0,] such that for any _4∈ (0,^*] and any x∈, there exists C_0=C_0(γ,η,m,n,Q,^*,K)>0, such that the following holds for every w∈ℳ_x,k and every r ∈ (ηγ, 4]:
C_0^-1≤𝐈_x,k (w,r) ≤ C_0 .
Let us now record the following simplified variational estimates, which may be easily deduced from those in Corollary <ref>, Proposition <ref>, but for the intervals of flattening adapted to {γ^k}_k.
Let be as in Proposition <ref>. Suppose that T, _4, x, _x,k and N_x,k are as in Corollary <ref>. Then there exist constants C dependent on K, γ, η and but not on x, k, such that the following estimates hold for every w ∈_x,k∩_1 and any ρ, r ∈ (ηγ, 4].
C^-1≤_x,k (w,r) ≤ C
C^-1r _x,k(w,r) ≤ C^-1 r Γ_x,k (w,r) ≤_x,k (w,r) ≤ C r Γ_x,k (w,r) ≤ C r _x,k(w,r)
Σ_x,k (w,r) ≤ C r^2 _x,k (w,r)
_x,k (w,r) ≤ C _x,k (w,r)
_x,k(w,ρ)/ρ^m-1 = _x,k(w,r)/r^m-1exp(-C∫_ρ^r _x,k(w,s) ds/s - O (m_x,k) (r-ρ))
_x,k (w, r) ≤ C _x,k (w, r/4)
_x,k (w,r) ≤ C r^m+3 - 2δ_2
_x,k (w,r) ≤ C r^-1_x,k (w,r)
|∂_r _x,k (w,r)| ≤ C r^-1_x,k (w,r)
|∂_r _x,k (w,r)| ≤ C _x,k (w,r) .
In particular:
|Γ_x,k (w,r) - _x,k (w,r)| ≤ C m_x,k^γ_4 r^γ_4_x,k (w,r)
|∂_r _x,k (w,r) - m-2/r_x,k (w,r) - 2 _x,k (w,r)| ≤ C m_x,k^γ_4 r^γ_4-1_x,k (w,r)
|∂_r_x,k(w,r) - m-1/r_x,k(w,r) - 2_x,k(w,r)| ≤ C_x,k_x,k(w,r)
|_x,k(w,r)| ≤ C_x,k^γ_4 r_x,k(w,r)
|∂_r_x,k(w,r)| ≤ C_x,k^γ_4(r^-1∂_r_x,k(w,r)_x,k(w,r))^1/2
∂_r _x,k (w,r) ≥ - C m_x,k^γ_4 r^γ_4-1 .
Here, γ_4 is as in Part <ref>.
Observe that (<ref>) is the consequence of Corollary <ref>, while the remaining estimates are an easy consequence of Corollary <ref>, Proposition <ref>, combined with the construction of the sequence of adapted center manifolds and associated normal approximations. We refer the reader to <cit.> for a more in-depth explanation.
§ SPATIAL FREQUENCY VARIATIONS
A key aspect of the proof of Theorem <ref> is a quantitative control on how much a given normal approximation N = N_x,k deviates from being homogeneous on average between two scales, in terms of the frequency pinching. The latter is defined in the following way.
Let T and be as in Theorem <ref>, let x∈ and let _x,k and N_x,k be as in Section <ref>. Consider w ∈_x,k∩_1 and a corresponding point y = x + γ^k w. Let ρ, r>0 be two radii satisfying
ηγ^k+1≤ρ≤ r < 4γ^k .
We define the frequency pinching W_ρ^r(x,k,y) around y between the scales ρ and r by
W_ρ^r(x,k,y) :=|_x,k(w,γ^-k r ) - _x,k(w,γ^-kρ)|.
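Two elementary remarks about this quantity will be useful to keep in mind. First, directly from the definition and the triangle inequality, the pinching is sub-additive in the scales: for ηγ^k+1 ≤ ρ ≤ s ≤ r < 4γ^k,
W_ρ^r(x,k,y) ≤ W_ρ^s(x,k,y) + W_s^r(x,k,y).
Second, small pinching between two scales is the quantitative signature of approximate radial homogeneity of N_x,k around w in the corresponding annulus; the following proposition makes this precise.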
We have the following comparison to homogeneity for the normal approximations N_x,k.
Assume T and are as in Theorem <ref>. Let x∈ and k ∈. Then there exists C = C (m,n,Q,K,γ, η) such that, for any w ∈ℳ_x,k∩_1 and any radii r,ρ>0 satisfying
4 ηγ^k+1≤ρ≤ r < 2γ^k ,
the following holds. Let y= x + γ^k w and let _ρ/4^2r(w) :=(_2r/γ^k(w)∖_ρ/4γ^k(w))∩_x,k. Then
∫__ρ/4^2r(w)∑_i |D(N_x,k)_i(z)d(w,z)∇ d(w,z)/|∇ d(w,z)| - _x,k(w,d (w,z)) (N_x,k)_i(z)|∇ d(w,z)||^2 d z/d(w,z)
≤ C _x,k(w,2rγ^k) (W_ρ/8^4r(x,k,y) + m_x,k^γ_4(r/γ^k)^γ_4)log(16r/ρ).
The argument is analogous to that of <cit.>, but taking into account the additional term _x,k in the frequency. We include the details here for the purpose of clarity.
At the risk of abusing notation, we will omit dependency on x, w and k for all quantities, for simplicity. For instance, we simply write (s) for the quantity _x,k(w,γ^-k s), and W^4r_ρ/8 for the pinching W^4r_ρ/8(x,k,y).
Invoking the estimates of Lemma <ref>,
we obtain
W^4r_ρ/4(y) ≥∫_ρ/4^4r∂_s (s) ds = ∫_ρ/4^4rΓ(s) + s∂_sΓ(s) - (s)Γ(s)∂_s(s)/(s) ds
≥ 2∫_ρ/4^4rs(s)-(s)/(s) ds - C^γ_4∫_ρ/4^4rs^γ_4(s)/(s) + s^1+γ_4(s)^2/(s)^2 ds
≥ 2∫_ρ/4^4rs(s)-(s)/(s) ds - C^γ_4((4r)^γ_4 - (ρ/4)^γ_4).
Now notice that we may write
∫_ρ/4^4r s(s)-(s)/(s) ds
= ∫_ρ/4^4r1/s(s)∫_ -ϕ'(d(w,z)/s)1/d(w,z)(∑_j |DN_j d(w,z)∇ d(w,z)/|∇ d(w,z)||^2.
. - 2(s)∑_j N_j ·(DN_j d(w,z)∇ d(w,z)) + (s)^2|N(z)|^2|∇ d(w,z)|^2) d z d s
= ∫_ρ/4^4r1/s(s)∫ -ϕ'(d(w,z)/s)ξ (w,z,s)/d(w,z) d z d s,
where
ξ(w,z,s) = ∑_j | DN_j d(w,z)∇ d(w,z)/|∇ d(w,z)| - (s) N_j(z)|∇ d(w,z)||^2 .
Thus,
W_ρ/4^4r(y) ≥ 2∫_ρ/4^4r1/s(s)∫__s/2^s(w)ξ(w,z,s)/d(w,z) d z d s - Cm^γ_4(r^γ_4-ρ^γ_4).
Let
ζ(w,z) := ∑_j | D N_j (z)d(w,z)∇ d(w,z)/|∇ d(w,z)| - (d (w,z)) N_j(z)|∇ d(w,z)| |^2.
We then have
ζ(w,z) ≤ 2ξ(w,z,s) + 2|(s) - (d(w,z))|^2| N(z)|^2 ≤ 2ξ(w,z,s) + C W_d (w,z)^s(y) | N(z)|^2.
Let us now control W_d (w,z)^s(y) by W_ρ/8^4r(y). In light of the quantitative almost-monotonicity (<ref>) for , for any radii ηγ^k+1 < s < t ≤γ^k we have
(s) ≤(t) + Cm^γ_4 (t^γ_4- s^γ_4).
For s∈ [ρ4,4r] this therefore yields
W_d (w,z)^s(y) ≤ W_ρ/8^4r(y) +Cm^γ_4 ((4r)^γ_4 -(ρ/8)^γ_4).
Now observe that the estimate (<ref>) gives
∫__ρ/4^2r(w)∫_d(w,z)^2d(w,z)1/s^2(s)ζ(w,z) d s d z ≥1/(2r)∫__ρ/4^2r(w)ζ(w,z)∫_d(w,z)^2d(w,z)1/s^2 d s d z
≥1/2(2r)∫__ρ/4^2r(w)ζ(w,z)/d(w,z) d z.
Combining this with the preceding estimates, we arrive at
W^4r_ρ/4(y) + log(16r/ρ)W_ρ/8^4r(y)
≥ C∫_ρ/4^4r1/s(s)∫__s/2^s(w)ζ(w,z)/d(w,z) d z d s - Cm^γ_4r^γ_4log(16r/ρ) - Cm^γ_4r^γ_4
≥ C ∫__ρ/4^2r(w)∫_d(w,z)^2d(w,z)1/s^2(s)ζ(w,z) d s d z - Cm^γ_4r^γ_4log(16r/ρ) - Cm^γ_4r^γ_4
≥C/(2r)∫__ρ/4^2r(w)ζ(w,z)/d(w,z) d z - Cm^γ_4r^γ_4log(16r/ρ) - Cm^γ_4r^γ_4
≥C/(2r)∫__ρ/4^2r(w)ζ(w,z)/d(w,z) d z - Cm^γ_4r^γ_4log(16r/ρ) - Cm^γ_4r^γ_4.
Rearranging and again making use of (<ref>), this yields the claimed estimate.
We will further require the following spatial variation estimate for the frequency, with control in terms of frequency pinching.
Let T be as in Theorem <ref>, let x∈ and k ∈. Let x_1,x_2 ∈_1 ∩_x,k, y_i = x + γ^k x_i and let d(x_1,x_2) ≤γ^-k r/8, where r is such that
8η γ^k+1 < r ≤γ^k .
Then there exists C= C (m,n,Q,γ, η) > 0 such that for any z_1,z_2 ∈ [x_1,x_2], we have
|_x,k(z_1,rγ^k) - _x,k(z_2,rγ^k)|
≤ C [(W_r/8^4r(x,k,y_1))^1/2 + (W_r/8^4r(x,k,y_2))^1/2 + m_x,k^γ_4/2(rγ^k)^γ_4/2]γ^kd(z_1,z_2)/r .
To prove Lemma <ref>, we need the following spatial variational identities for _x,k, _x,k and _x,k.
Suppose that T is as in Theorem <ref>, let x∈ and let k ∈. Let v be a continuous vector field on _x,k. For any w∈_x,k∩_1 and any ηγ≤ r ≤ 2, letting ν_w(z):= ∇ d(w,z), we have
∂_v _x,k(w,r) = -2/r∫__x,kϕ'(d(w,z)/r) ∑_i ⟨∂_ν_w (N_x,k)_i(z), ∂_v (N_x,k)_i(z)⟩ d^m(z)
+ O(m_x,k^γ_4)r^γ_4 -1_x,k(w,r),
∂_v _x,k(w,r) = - 2 ∑_i ∫__x,k|∇ d(w,z)|^2/d(w,z)ϕ'(d (w,z)/r)⟨∂_v (N_x,k)_i(z), (N_x,k)_i(z) ⟩ d^m(z),
|∂_v _x,k(w,r)| ≤ C_x,k^1/2v_C^0(_x,k(w,r)∂_r _x,k(w,r))^1/2
The proof of (<ref>) and (<ref>) can be found in <cit.>. To see the validity of (<ref>), omitting dependency on x,k for simplicity, we simply write
∂_v (w,r) = ∑_i=1^Q ∑_l=1^m (-1)^l+1∫_⟨ D_ξ_l N_i(z) ∧ξ̂_l(z) ∧ N_i, dω(z) ⟩ϕ'(d(w,z)/r) ∇ d(w,z)/r· v(z) d^m(z).
Combining with an application of Cauchy-Schwarz, this in turn yields the estimate
|∂_v (w,r)| ≤ Cdω_C^0(_γ^k(x))v_C^0((w,r)∂_r (w,r))^1/2.
Recalling (<ref>), the conclusion follows immediately.
The majority of the proof follows in the same way as that of <cit.> (cf. <cit.>), but due to the additional error terms present when T is semicalibrated, we repeat the full argument here.
We will as usual omit dependency on x,k for all objects. Let x_1,x_2 be as in the statement of the lemma and let w lie in the geodesic segment [x_1,x_2]⊂. Given a continuous vector field v on and ρ∈ (8ηγ, 1], we have
∂_v (w,ρ) = ρ(∂_v (w,ρ) + ∂_v (w,ρ))/(w,ρ) - (w,ρ)∂_v(w,ρ)/(w,ρ).
Let μ_w be the measure on with density
dμ_w(z) = -|∇ d(w,z)|/d(w,z)ϕ'(d(w,z)/r)d^m(z).
Now let
η_w(z) := d(w,z)∇ d(w,z)/|∇ d(w,z)| = d(w,z)/|∇ d(w,z)|ν_w(z),
and choose v to be the vector field
v(z) = d(x_1,x_2)∇ d(x_1,z)/|∇ d(x_1,z)|.
Applying Lemma <ref> with this choice of v and exploiting the estimates of Lemma <ref>, for each ρ = rγ^-k∈ (8ηγ,1] we have
∂_v (w,ρ) = 2/(w,ρ)∫_∑_i ⟨∂_η_w N_i, ∂_v N_i ⟩ dμ_w -2(w,ρ)/(w,ρ)∫_ |∇ d(w,·)| ∑_i ⟨∂_v N_i, N_i ⟩ dμ_w
+ C^1/2ρ^2 (w,ρ)^-1/2(∂_ρ(w,ρ))^1/2 + C^γ_4ρ^γ_4.
Now observe that v parameterizes the geodesic line segment [x_1,x_2]⊂. Thus,
∂_v N_i(z) = d(x_1,z) ∇ d(x_1,z)/|∇ d(x_1,z)| DN_i(z) - d(x_2,z) ∇ d(x_2,z)/|∇ d(x_2,z)| DN_i(z)
=∂_η_x_1 N_i(z) - ∂_η_x_2 N_i(z)
= (∂_η_x_1 N_i(z) - (x_1, d (x_1, z))N_i(z))_:= _1,i - (∂_η_x_2 N_i(z) - (x_2, d (x_2, z))N_i(z))_:= _2,i
+ (x_1,d (x_1, z)) - (x_2,d (x_2, z))_:= _3N_i(z).
Combining this with the above calculation and once again the estimates in Lemma <ref>, we therefore obtain
∂_v (w,ρ) = 2/(w,ρ)∫_∑_i ⟨∂_η_w N_i, _1,i - _2,i⟩ d μ_w - 2(w,ρ)/(w,ρ)∫_ |∇ d(w,·)| ∑_i ⟨_1,i - _2,i, N_i ⟩ dμ_w
+ 2/(w,ρ)∫__3∑_i ⟨∂_η_w N_i, N_i ⟩ - 2(w,ρ)/(w,ρ)∫_ |∇ d(w,·)| _3∑_i |N_i|^2 dμ_w
+ C ^1/2ρ^2(w,ρ)^-1/2(∂_ρ(w,ρ))^1/2 + Cm^γ_4ρ^γ_4
= 2/(w,ρ)∫_∑_i ⟨∂_η_w N_i, _1,i - _2,i⟩ d μ_w - 2(w,ρ)/(w,ρ)∫_ |∇ d(w,·)| ∑_i ⟨ N_i, _1,i - _2,i⟩ dμ_w
+ 2_3/(w,ρ)(∫_∑_i ⟨∂_η_w N_i, N_i ⟩ dμ_w - ρ(w,ρ))
+ Cm^γ_4ρ^γ_4(w,ρ).
where in the last inequality, we have used that ∂_ρ(ρ) ≤ r^-1(ρ) and (<ref>). Recalling that we aim to control the spatial frequency variation in terms of frequency pinching at the endpoints x_1 and x_2, let us rewrite _3 in the following form:
_3 ≤ |((x_1, d (x_1,z)) -(x_1,ρ))| + |((x_1,ρ) - (x_2,ρ))| + |((x_2,ρ) - (x_2, d(x_2, z)))|
= W^γ^k d (x_1, z)_r(y_1) + W_γ^k d (x_2, z)^r(y_2) + |(x_1,ρ) - (x_2,ρ)|.
Combining this with the Cauchy-Schwarz inequality and the estimates in Lemma <ref>, we have
∂_v (w,ρ) ≤ C[∫∑_i(|_1,i|^2 + |_2,i|^2) d μ_w]^1/2(1/(w,ρ)[∫∑_i |∂_η_w N_i|^2 dμ_w]^1/2 +(w,ρ)/(w,ρ)^1/2)
+ Cm^γ_4ρ^1+γ_4|(x_1,ρ) - (x_2,ρ)|/(w,ρ)((w,ρ) + |(w,ρ)|)
+ Cm^γ_4ρ^1+γ_4W^γ^k d (x_1, z)_r (y_1) + W_γ^k d (x_2, z)^r(y_2)/(w,ρ)((w,ρ) + |(w,ρ)|)
+ Cm^γ_4ρ^γ_4
≤ C[∫∑_i(|_1,i|^2 + |_2,i|^2) d μ_w]^1/2(1/(w,ρ)[∫∑_i |∂_η_w N_i|^2 dμ_w]^1/2 +(w,ρ)/(w,ρ)^1/2)
+ Cm^γ_4ρ^γ_4(|(x_1,ρ) - (x_2,ρ)| + W^γ^k d (x_1, z)_r (y_1) + W_γ^k d (x_2, z)^r(y_2))
+ Cm^γ_4ρ^γ_4
Applying Proposition <ref>, for ℓ = 1,2 we further have
∫∑_i|_ℓ,i|^2 d μ_w = - ∫∑_i|_ℓ,i|^2(z) |∇ d(w,z)|/d(w,z)ϕ'(d(w,z)/ρ) d^m(z)
≤ C(x_ℓ,2ρ) (W_r/8^4r(y_ℓ)+m^γ_4ρ^γ_4).
Together with the doubling estimate (<ref>) (which applies since d(x_ℓ,w)≤ρ) and the uniform upper frequency bound (<ref>), we thus obtain the estimate
∂_v (w,ρ) ≤ C[(W_r/8^4r(y_1)+m^γ_4ρ^γ_4)^1/2 + (W_r/8^4r(y_2)+m^γ_4ρ^γ_4)^1/2] + Cm^γ_4ρ^γ_4.
≤ C [W_r/8^4r(y_1)^1/2 + W_r/8^4r(y_2)^1/2] + Cm^γ_4/2ρ^γ_4/2.
Integrating this inequality over the geodesic segment [z_1,z_2]⊂, the proof is complete.
§ QUANTITATIVE SPINE SPLITTING
Following the notation of <cit.>, for a finite set of points X={x_0,…, x_k} we let V(X) denote the affine subspace given by
V(X) := x_0 + ({x_1-x_0,…,x_k-x_0}).
We recall the following quantitative notions of linear independence and spanning, first introduced in <cit.>.
We say that a set X = {x_0, x_1,…,x_k}⊂_r(w) is ρ r-linearly independent if
d(x_i, V ({x_0, …, x_i-1})) ≥ρ r for every i = 1,…,k.
We say that a set F ⊂_r(w) ρ r-spans a k-dimensional affine subspace V if there is a ρ r-linearly independent set of points X= {x_i}_i=0^k ⊂ F such that V= V (X).
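To fix ideas, here is a simple illustration of these definitions (with e_1,…,e_m-2 denoting orthonormal vectors in ^m+n and ρ ∈ (0,1]): the set
X = {w, w + ρ r e_1, …, w + ρ r e_m-2} ⊂ _r(w)
is ρr-linearly independent, since each point w + ρ r e_i lies at distance exactly ρ r from the affine span of the preceding ones; consequently X (and any set F ⊂ _r(w) containing it) ρr-spans the (m-2)-dimensional plane w + span{e_1,…,e_m-2}.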
We have the following two quantitative splitting results (cf. <cit.>).
Suppose that T is as in Theorem <ref> and let ρ,ρ̅∈ (0,1], ρ̃∈ (η,1] be given radii. There exists ^*=^*(m,n,Q,γ,K,ρ,ρ̃,ρ̅)>0 such that for _4≤^*, the following holds. Suppose that for some x∈ and r∈ [γ^k+1,γ^k], there exists a collection of points X={x_i}_i=0^m-2⊂_r(x)∩ satisfying the properties
* X is ρ r-linearly independent;
* the nearest points z_i to x_i such that γ^-k(z_i-x) ∈_x,k satisfy
W^2r_ρ̃r(x,j(k),z_i) < ^*.
Then ∩ (_r∖_ρ̅r(V(X)))=∅.
Suppose that T is as in Theorem <ref>, and let ρ,ρ̅∈ (0,1], ρ̃∈ (η,1] be given radii. For any δ>0, there exists ^†=^†(m,n,Q,γ,K,ρ,ρ̃,ρ̅,δ)∈ (0, ^*] such that for _4≤^†, the following holds. Suppose that for some x∈ and r∈ [γ^k+1,γ^k], there exists a collection of points X={x_i}_i=0^m-2⊂_r(x)∩ satisfying the properties
* X is ρ r-linearly independent;
* the nearest points z_i to x_i such that γ^-k(z_i-x) ∈_x,k satisfy
W^2r_ρ̃r(x,j(k),z_i) < ^†.
Then for each ζ_1,ζ_2∈_r(x)∩_^† r(V(X)) and each pair of radii r_1,r_2∈ [ρ̅,1], letting w_j denote the nearest point to γ^-k(ζ_j-x) that belongs to _x,k, the following estimate holds:
|_x,k(w_1,r_1) - _x,k(w_2,r_2)| ≤δ.
Given the compactness argument in Section <ref>, the proofs of Lemma <ref> and Lemma <ref> follow in exactly the same way as those of <cit.>, respectively. We therefore omit the arguments here.
§ JONES' Β_2 CONTROL AND RECTIFIABILITY
In this section, we combine all of the previous estimates of the preceding sections in this part, in order to gain control on Jones' β_2 coefficients associated to the measure ^m-2, providing a quantitative L^2-flatness control on the flat density Q singularities of T with degree strictly larger than 1.
We begin by recalling the following definition.
Given a Radon measure μ on ^m+n, we define the (m-2)-dimensional Jones' β_2 coefficient of μ as
β_2,μ^m-2(x,r) := inf_affine (m-2)-planes L[r^-(m-2)∫__r(x)((y,L)/r)^2 dμ(y)]^1/2.
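For computational purposes, we recall the following standard linear-algebra characterization (stated only as a guide; it is not used verbatim below). Assume μ(_r(x))>0, let x̄ := μ(_r(x))^-1∫__r(x) y dμ(y) denote the barycenter, and let λ_1 ≥ ⋯ ≥ λ_m+n be the eigenvalues of the symmetric bilinear form
b(v,w) := ∫__r(x) ((y-x̄)· v)((y-x̄)· w) dμ(y), v,w ∈ ^m+n.
Then an optimal plane L in the infimum above passes through x̄ and is spanned by eigenvectors associated to λ_1,…,λ_m-2, and
[β_2,μ^m-2(x,r)]^2 = r^-m∑_j=m-1^m+nλ_j.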
We have the following key estimate on β_2,μ^m-2 for a measure μ supported on .
There exist α_0 = α_0(m,n,Q) > 0, η̂= η̂(m) ∈ (0, 1/8), =(m,n,Q,K)∈ (0,^†], C=C(m,n,Q,K) > 0 with the following property. Suppose that _4∈ (0,], η∈ (0,η̂] and let T and be as in Theorem <ref>. Suppose that μ is a finite non-negative Radon measure with (μ) ⊂ and let x_0 ∈. Then for all r ∈ (8ηγ^k+1 ,γ^k] we have
[β_2,μ^m-2(x_0, r/8)]^2 ≤ Cr^-(m-2)∫__r/8 (x_0) W^4r_r/8(x_0, k,𝐩_x_0, k (x)) dμ (x)
+ C m_x_0,k^α_0 r^-(m-2-α_0)μ(_r/8(x_0)).
With the estimates of Proposition <ref>, Lemma <ref> and Lemma <ref> at hand, the proof of Proposition <ref> follows by exactly the same reasoning as that of <cit.> (cf. <cit.>). We thus simply refer the reader to the argument therein.
§.§ Proof of Theorem <ref>
The proof of rectifiability and the content bound (<ref>) follows via the same procedure as that in <cit.>, crucially making use of Proposition <ref>, the quantitative splitting results of Section <ref> and the BV estimate of Proposition <ref>. Note that in order to establish the rectifiability alone, one may make use of <cit.> in place of the rectifiable Reifenberg arguments of Naber-Valtorta, but this does not allow one to obtain the Minkowski content bound (<ref>). We do not include the details here.
PART:
Points with singularity degree 1
In this part we conclude the proof of the main result of this work, Theorem <ref>, by showing rectifiability of the remaining part of 𝔉_Q(T), as well as the ℋ^m - 2-uniqueness of tangent cones. Namely, we prove Theorem <ref>. We follow the same outline as in <cit.>; a key preliminary result is a decay theorem for the excess to (m-2)-invariant cones formed from superpositions of planes, whenever T is much closer to such a cone than any single plane, under the assumption of no density gaps for T near the spines of such cones. Before coming to the statement of this theorem, let us first recall some notation introduced in <cit.>. We begin by defining the cones of interest. As done in the other parts, we will merely point out the differences with loc. cit, and explain the changes needed.
Let Q≥ 2 be a fixed integer. We denote by 𝒞 (Q) those subsets of ℝ^m+n which are unions of 1 ≤ N≤ Q m-dimensional planes (affine subspaces) π_1, …, π_N
for which π_i ∩π_j is the same (m-2)-dimensional plane V for every pair of indices (i,j) with i<j.
We will use the notation 𝒫 for the subset of those elements of 𝒞 (Q) which consist of a single plane; namely, with N=1. For 𝐒∈𝒞 (Q)∖𝒫, the (m-2)-dimensional plane V described in (i) above is referred to as the spine of 𝐒 and will often be denoted by V (𝐒).
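A guiding example to keep in mind (consistent with the holomorphic examples recalled earlier): for m=n=2, the holomorphic variety
{(z,w) ∈ ℂ^2 : w^2 = z^2} = {w = z} ∪ {w = -z}
is the union of two complex lines, i.e. of two 2-dimensional planes in ^4, which meet only at the origin; it therefore belongs to 𝒞(Q)∖𝒫 for every Q ≥ 2, with spine V = {0} of dimension m-2 = 0.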
Let us now recall the conical L^2 height excess between T and elements in (Q).
Given a ball _r(q) ⊂^m+n and a cone 𝐒∈𝒞 (Q), the one-sided conical L^2 height excess of T relative to in _r(q), denoted (T, 𝐒, _r(q)), is defined by
(T, 𝐒, _r(q)) := 1/r^m+2∫__r (q)^2 (p, 𝐒) dT(p).
At the risk of abusing notation, we further define the corresponding reverse one-sided excess as
(𝐒, T, _r (q)) := 1/r^m+2∫__r (q)∩𝐒∖_ar (V (𝐒))^2 (x, spt (T)) dℋ^m (x) ,
where a=a(Q,m) is a geometric constant, to be determined later (see the discussion preceding Remark <ref>). The two-sided conical L^2 height excess is then defined by
𝔼 (T, 𝐒, _r (q)) :=
(T, 𝐒, _r (q)) + (𝐒, T, _r (q)) .
We finally recall the notion of planar L^2 height excess, which is given by
^p (T, _r (q)) = min_π∈𝒫 (q) (T, π, _r (q)) .
We may now state our key excess decay theorem. This is based on the excess decay theorem <cit.>, but in the latter, there is a built-in multiplicity one assumption, ruling out branch point singularities a priori. Such a decay theorem was more recently proven in <cit.> (see Section 13 therein and also <cit.> and <cit.>) in a higher multiplicity setting for stable minimal hypersurfaces, but in codimension 1, where one has a sheeting theorem. On the other hand, our version of this theorem is both in a higher multiplicity and higher codimension setting, thus requiring new techniques as in <cit.> that overcome the lack of sheeting.
Throughout this part, we will often work with error terms involving the quantity dω_C^0, where ω is as in Assumption <ref>. Thus, for the purpose of convenience, we will henceforth use the notation
Ω := dω_C^0(_6√(m)).
Let δ_3 = δ_2/2, for the positive parameter δ_2 fixed as in <cit.> (cf. Parts <ref> and <ref>). For every Q,m,n, and ς>0, there are positive constants ε_0 = ε_0(Q,m,n, ς) ≤1/2, r_0 = r_0(Q,m,n, ς) ≤1/2 and C = C(Q,m,n)>0 with the following property. Assume that
(i) T and ω are as in Assumption <ref>;
(ii) T (_1) ≤ (Q+1/2) ω_m;
(iii) There is 𝐒∈𝒞 (Q)∖ such that
𝔼 (T, 𝐒, _1) ≤ε_0^2 𝐄^p (T, _1)
and
_ε_0 (ξ) ∩{p: Θ (T,p)≥ Q}≠∅ ∀ξ∈ V (𝐒)∩_1/2 ;
(iv) Ω^2-2δ_3≤ε_0^2 𝔼 (T, 𝐒', _1) for any 𝐒'∈𝒞 (Q).
Then there is a 𝐒'∈𝒞 (Q) ∖𝒫 such that
(a) 𝔼 (T, 𝐒', _r_0) ≤ς𝔼 (T, 𝐒, _1)
(b) 𝔼 (T, 𝐒', _r_0)/𝐄^p (T, _r_0)≤ 2 ς𝔼 (T, 𝐒, _1)/𝐄^p (T, _1)
(c) ^2 (^'∩_1,∩_1) ≤ C 𝔼 (T, 𝐒, _1)
(d) ^2 (V (𝐒) ∩_1, V (𝐒')∩_1) ≤ C 𝔼(T,𝐒,_1)/𝐄^p(T,_1) .
With Theorem <ref> at hand, the conclusion of Theorem <ref> follows by combining it with a covering procedure analogous to the one in <cit.>; see <cit.> for the details.
§.§ Outline of proof of Theorem <ref>
The proof of Theorem <ref> follows the same outline as that of <cit.>. We first establish an L^2-L^∞ height bound and tilt excess estimate, analogous to <cit.>. However, all instances of in the error terms are replaced by Ω and Ω^1-δ_3 in the height bound and tilt excess estimate respectively. This will be done in Section <ref>.
In Section <ref> we then use the height bound to verify that the graphical parameterization results of <cit.> relative to balanced cones in (Q) (see Definition <ref>) still hold true when T is semicalibrated, again with replaced by Ω^1-δ_3 in the errors. In Section <ref> we provide the analogues of the cone balancing results of <cit.>, which are required in order to guarantee the hypotheses on the cones ∈(Q) in order to build the graphical parameterizations of the preceding section. In Section <ref> we then verify that the Simon estimates at the spine <cit.> remain valid; the key difference is again the fact that all appearances of in the errors become Ω^1-δ_3. In Section <ref>, we conclude with a final blow-up procedure, analogous to that in <cit.>.
§ L^2-L^∞ HEIGHT BOUND
In this section we establish Allard-type tilt-excess and L^∞ estimates relative to disjoint collections of parallel planes, analogous to those in <cit.>, but for the class of semicalibrated currents. For the remainder of this section, we make the following additional assumption.
Q, m ≥ 3, n ≥ 2 are fixed positive integers. T and ω are as in Assumption <ref>. For some oriented m-dimensional plane π_0 ≡^m×{0}⊂^m+n passing through the origin and some positive integer Q, we have
(𝐩_π_0)_♯ T_2 = Q B_2 ,
and T(_2) ≤ (Q+1/2)ω_m 2^m.
The main result of this section is the following (note that a scaling argument gives the corresponding estimates for arbitrary centers and scales).
For every 1≤ r < 2, Q, and N, there is a positive constant C̅ = C̅ (Q,m,n,N,r)>0 with the following property. Suppose that T, ω and π_0 are as in Assumption <ref>, let p_1, … , p_N ∈π_0^⊥ be distinct points, and set π:= ⋃_i p_i+π_0.
Let
E := ∫_𝐂_2^2 (p, π) dT (p) .
Then
(T,_r, π_0)≤C̅ (E + Ω^2)
and, if E≤ 1,
(T) ∩𝐂_r ⊂{p : (p, π)≤C̅ (E^1/2 + Ω^1 - δ_3)} .
A consequence of Theorem <ref> is the following, which replaces <cit.>.
Let N be a positive integer. There is a positive constant δ = δ (Q,m,n, N) with the following properties. Assume that:
(i) T and ω are as in Assumption <ref>, and for some positive r ≤1/4 and q∈(T)∩_1 we have
∙ ∂ T 𝐂_4r (q) = 0;
∙ (𝐩_π_0)_♯ T _4r(q)= Q B_4r (q);
∙ T (𝐂_2r (q)) ≤ω_m (Q+1/2) (2r)^m;
(iii) p_1, …, p_N∈^m+n are distinct points with 𝐩_π_0 (p_i)= q and ϰ:=min{|p_i-p_j|: i<j};
(iv) π_1, …, π_N are oriented planes passing through the origin with
τ := max_i |π_i-π_0|≤δmin{1, r^-1ϰ} ;
(v) Upon setting π = ⋃_i (p_i+ π_i), we have
(rΩ)^2 + (2r)^-m-2∫_𝐂_2r (q)^2 (p, π) dT≤δ^2 min{1, r^-2ϰ^2} .
Then T𝐂_r (q) = ∑_i=1^N T_i where
(a) Each T_i is an integral current with ∂ T_i 𝐂_r (q) = 0;
(b) (q, π) = (q, p_i+π_i) for each q∈ (T_i);
(c) (𝐩_π_0)_♯ T_i = Q_i B_r (q) for some non-negative integer Q_i.
The proof of Corollary <ref> follows verbatim the one of <cit.>, replacing <cit.> with <cit.>.
We now recall the notion of non-oriented tilt-excess, previously introduced in <cit.>. More precisely, given an m-dimensional plane π and a cylinder 𝐂 = 𝐂_r (q, π), recall that the non-oriented tilt excess is given by
𝐄^no (T, 𝐂):= 1/2ω_m r^m∫_𝐂 |𝐩_T - 𝐩_π|^2 dT ,
where T (x) denotes the (approximate) tangent plane to T at x. Note that here, neither plane is oriented for the projections. In particular, we have ^no(T,) ≤ C(T,). In contrast, the reverse inequality is more subtle due to possible cancellation phenomena. Nonetheless, we have the following for semicalibrated currents.
For every 1≤ r<2 there is a constant C̅= C̅(Q,m,n, r) such that, if T, and ω are as in Assumption <ref>, then
𝐄 (T, 𝐂_r) ≤C̅ (𝐄^no (T, 𝐂_2) + Ω^2) .
The proof of this follows that of <cit.> (see also <cit.>), replacing the height bound <cit.> with <cit.> in the case in which the supports of the currents are not equibounded. Furthermore, instead of Almgren's strong Lipschitz approximation for area-minimizing integral currents, we invoke its variant <cit.> for Ω-minimal currents (see Definition 1.1 therein). One can then refine the approximation in the same way to deduce the desired contradiction, and conclude the proof.
In the case N = 1, the conclusions of Theorem <ref> are given by Allard's tilt excess estimate for varifolds with bounded generalized mean curvature (see <cit.>), together with <cit.>. It suffices to verify that the generalized mean curvature of T can be controlled uniformly by Ω, which indeed is the case by the following reasoning. Recall that T satisfies the first variation identity (<ref>) for any test vector field χ∈ C_c^∞(_6√(m);^m+n). Thus, after applying the Riesz Representation Theorem, we infer
- ∫χ·H⃗_T d‖ T ‖ = T(dωχ) = ∫⟨ dωχ, T⃗⟩ d ‖ T ‖≤‖ dω‖_C^0∫|χ| d‖ T ‖.
Taking the supremum in the above, and recalling the definition of L^∞ norm in terms of its dual L^1 norm (both with respect to the local Radon measure T), we deduce that the generalized mean curvature vector H⃗_T of T satisfies the estimate
‖H⃗_T ‖_L^∞(_6√(m),T)≤‖ dω‖_C^0(_6√(m)) = Ω .
Note that the error term in the latter is indeed quadratic in Ω, unlike Theorem 1.5 therein. In addition, note that <cit.> does not require any smallness on the tilt excess.
Theorem <ref> holds when N=1, for any Q.
The remainder of this section is dedicated to the proof of Theorem <ref>. First of all, observe that the results of <cit.>, handling the proof of Theorem <ref> in the case where the planes in π are well-separated, remain valid when T is semicalibrated. Indeed, they are all either reliant on the following variant of <cit.>, or are proven by analogous reasoning to it.
For every 1≤r̅<2 there is a constant σ_2 = σ_2 (Q,m,n,2-r̅)>0 with the following property. Let T and ω be as in Assumption <ref>, suppose that p_1, …, p_N∈π_0^⊥ are distinct points, and let π:=⋃_i p_i+π_0. Assume that E is as in (<ref>), let H:= min{|p_i-p_j|:i ≠ j} and suppose that
E ≤σ_2 and H ≥ 1 .
Then
spt (T) ∩𝐂_r̅⊂{q: (q, π)≤H/4},
and, in particular, all the conclusions of Theorem <ref> hold in _r̅.
We defer the reader to <cit.> for the proof of this, which remains completely unchanged in the setting herein, in light of the almost-monotonicity of mass ratios that holds for all almost-minimizing currents (see, for instance, <cit.>).
As in <cit.>, we then prove the estimates (<ref>) and (<ref>) separately. For the former, we need two approximate estimates on the oriented tilt-excess, namely <cit.> and <cit.>, but rewritten for a semicalibrated current in ^m+n, in which case all instances of are replaced with Ω. More precisely, the former estimate reads as follows.
For every pair of radii 1≤ r<R ≤ 2 there are constants C̅=C̅(Q,m,n,R-r)>0 and γ = γ (Q,m,n)>0 such that the following holds. Let T and ω be as in Assumption <ref> and let π, E, H be as in Lemma <ref>. Then
𝐄 (T, 𝐂_r)≤C̅ (E+ Ω^2) + C̅(E/H^2)^γ𝐄 (T, 𝐂_R) + C̅𝐄 (T, 𝐂_R)^1+γ .
Observe that the majority of the proof of <cit.> is in fact written for currents with bounded generalized mean curvature with H⃗_T _L^∞≤ C, where is the second fundamental form of the ambient Riemannian manifold. Thus, armed instead with the estimate (<ref>) (which allows us to replace ^2 by Ω^2) and replacing the use of Almgren's strong excess estimate with <cit.> we are able to follow the proof verbatim, obtaining the desired estimate (<ref>). Note that, analogously to <cit.>, we may indeed apply <cit.> since we may assume that the tilt excess (T, _r_2) falls below the threshold _21 therein, for r_2=(2R+r)/3 as in the proof of <cit.>. Indeed, the reasoning for this remains unchanged, given Lemma <ref> (in place of <cit.>).
The estimate of Lemma <ref> in turn yields the following bootstrapped estimate, under the assumption that E is sufficiently small relative to the minimal separation of the planes in π, which is the analogue of <cit.>.
For every pair of scales 1≤ r<r_0 < 2, there are constants C̅=C̅(Q,m,n,N,r_0-r,2-r_0)>0 and σ_4=σ_4 (Q,m,n,N,r_0-r,2-r_0)>0 with the following properties.
Let T and ω be as in Assumption <ref> and let π, E, H be as in Lemma <ref>. If in addition we have
E ≤σ_4 min{H^2, 1} ,
then
𝐄 (T, 𝐂_r) ≤C̅ (E + Ω^2) + C̅(E/H^2) 𝐄 (T, 𝐂_r_0) .
Observe that the proof of Proposition <ref> remains unchanged, given Lemma <ref>, Proposition <ref> and the semicalibrated analogues of the results from <cit.> (recall the discussion above regarding the latter).
§.§ Tilt excess estimate
Given Proposition <ref> and the combinatorial lemmas of <cit.>, the tilt excess estimate (<ref>) of Theorem <ref> follows exactly as in <cit.>.
§.§ L^2-L^∞ height bound
Following <cit.>, the proof of the L^∞-bound (<ref>) is proven by induction on N, relying on the validity of the tilt excess estimate (<ref>) that we have just established.
Let N≥ 2 and suppose that (<ref>) holds for any N' ≤ N - 1 and any Q' ≤ Q. Then it holds for N and Q.
The key to the proof of Proposition <ref> is the following adaptation of <cit.> to the semicalibrated setting.
There are constants ρ_0 = ρ_0 (m,n,Q)>0 and C = C(Q,m,n)>0 such that, for every fixed 0<ρ≤ρ_0, there are
σ_5 = σ_5 (Q,m,n,N, ρ) ∈ (0,1] and 0 < β_0 = β_0(Q,m) < 1 such that the following holds.
Assume T, E, and π are as in Theorem <ref> with P= {p_1, …, p_N} and that
E + Ω^2-2δ_3≤σ_5.
Then there is another set of points P':= {q_1, …, q_N'} with N'≤ Q such that:
(A) dist(q_i, P) ≤ C (E+ Ω^2-2δ_3)^1/2 for each i;
(B) If we set π':= ⋃ (q_i+π_0), then
∫_𝐂_2ρ dist^2 (x, π') d‖T‖(x)
≤ρ^m+2β_0 (E+ Ω^2-2δ_3).
Given Lemma <ref>, Proposition <ref>, the combinatorial lemma <cit.> and <cit.>, the proof of Proposition <ref> follows exactly by exactly the same reasoning as that in <cit.>, so we omit this concluding argument here. Observe that the semicalibrated analogue of <cit.> follows by making the same minor modifications to the proof as those in Lemma <ref>; see the discussion there for more details.
The main difference in the proof of Lemma <ref> relative to its counterpart in the area-minimizing setting is the application of <cit.>, which yields a nearby harmonic approximation for a given strong Lipschitz approximation to T as given by <cit.>. To apply the former, instead of checking the hypothesis ≤^1/4 + δ̅ (as is done in the area-minimizing case), we need to make sure that Ω≤_23^1/2, for the geometric constant _23 therein (with η_1 fixed appropriately). We thus need to amend the case analysis within the proof of Lemma <ref> accordingly, and so we provide an outline of the proof here, for the benefit of the reader.
As in <cit.>, we divide the proof in two cases, only here it will be based on the relative sizes of := (T,_1) and Ω^2. Note that now we have the validity of the tilt-excess estimate (<ref>), i.e. ≤ C(E + Ω^2). Thus, for σ_5 small enough (depending on Q,m,n), we may assume that < _21, where _21 is as in <cit.>, allowing us to obtain a map f: B_1/4(0,π_0) →_Q(π_0^⊥) satisfying
(i) Lip (f) ≤ C 𝐄^β≤ C(E+Ω^2)^β;
(ii) There is a closed set K⊂ B_1/4 with ^m(K) ≤1/2^m(B_1/4) such that 𝐆_f (K×ℝ^n)=T (K×ℝ^n) and
T ((B_1/4∖ K)×ℝ^n) ≤ C (𝐄+Ω^2)^1+β≤ C(E+Ω^2)^1+β ;
where C = C(Q,m,n), β=β(Q,m,n) ∈ (0, 1/(2m)) and we have used (<ref>) to obtain the estimates in terms of E. This in turn yields
∫_B_1/4 |Df|^2 ≤ C ∫_K |Df|^2 + C(𝐄+Ω^2)^1+β
≤ C𝐄 + C(E+Ω^2)^1+β≤ C(E + Ω^2) .
Fix ρ_0 ∈ (0,1/8) to be determined at the end of Case 1 below, and fix ρ∈ (0,ρ_0] arbitrarily. Fix η_1, also to be determined at the end of Case 1 (dependent on Q,m,n,ρ). Let ε_23 denote the parameter of <cit.>, applied in 𝐂_1 with this η_1 and the map f above taken to be the E^β-approximation therein. Note that, in particular, ε_23 depends on η_1 and thus on ρ. We may take σ_5 even smaller such that 𝐄 < ε_23 (now additionally depending on ρ).
Case 1: Ω^2 ≤ε_23(ρ)^2 𝐄.
Applying <cit.> as mentioned above, we obtain a Dir-minimizer g:B_1/4(0,π_0)→_Q(π_0^⊥) satisfying
∫_B_1/4(f,g)^2 ≤η_1 ω_m ;
∫_B_1/4 |Dg|^2 ≤ C(E+Ω)^2 .
We then proceed as in <cit.>, letting q_i:=g_i(0), where g_i are the distinct functions in a selection for g (possibly with multiplicities) and propagating the decay coming from the α-Hölder regularity of this Dir-minimizer g to T. This choice of q_i satisfy the desired estimate (A), and we further arrive at the final estimate
∫_𝐂_2ρ dist^2(x,π') d‖T‖(x) ≤ Cσ_5^β(E+Ω^2) + Cη_1(E+Ω^2) + Cρ^m+2α(E+Ω^2) ,
where C=C(Q,m,n). Note that α=α(Q,m) and let β_0 = α/2. Now choose ρ_0 ≤ (3C)^-1/α and η_1 ≤ρ^m+2β_0/(3C). Given this choice of η_1, we may now further take σ_5 ≤(ρ^m+2β_0/(3C))^1/β. This yields the estimate (B), therefore completing the proof in this case. Note that we are now fixing this choice of η_1(ρ), and therefore also fixing ε_23(ρ), throughout this proof.
Case 2: Ω^2 > ε_23(ρ)^2 𝐄
In this case, we use the estimate (<ref>) combined with the assumption, to obtain
∫_B_1/4| Df |^2 ≤ C𝐄 + C(𝐄 + Ω^2)^1+β≤ C(𝐄 + Ω^2+2β) ≤ Cε_23^-2σ_5^2δ_3Ω^2-2δ_3 .
Combining this with the Poincaré inequality for Q-valued functions, and Hölder's inequality, we infer
∫_B_1/4𝒢(f, Y) ≤ C_23^-2σ_5^2δ_3Ω^2-2δ_3 ,
for some point Y = ∑_i Q_i q_i ∈𝒜_Q, where the q_i are distinct. Setting π':= ⋃_i (q_i+π_0) and combining (<ref>) with (ii) and <cit.> (which, as previously remarked, remains valid here, since merely almost-monotonicity of mass ratios in place of monotonicity suffices) we thus have
∫_𝐂_2ρ dist^2 (x, π') d‖T‖(x)
≤ C ε_23^-2σ_5^2δ_3Ω^2-2δ_3 + ‖T‖ ((B_1/4∖ K)×ℝ^n)
≤ Cε_23^-2σ_5^2δ_3Ω^2-2δ_3 ,
where K⊂ B_1/4 is the closed set over which T is graphical, as in <cit.>. In particular, for β_0 fixed as in Case 1, we may choose σ_5≤(ε_23^2ρ^m+2β_0/C)^1/(2δ_3) to obtain
∫_𝐂_2ρ dist^2 (x, π') d‖T‖(x) ≤ρ^m+2β_0Ω^2-2δ_3 .
This proves conclusion (B) of the lemma, in this case. Given (<ref>), the proof of (A) in this regime follows in the same way as in <cit.>.
Note that Lemma <ref> is the reason behind the fact that we have Ω^2-2δ_3 in our error estimates in Theorem <ref>, and thus throughout the majority of Part 3, rather than Ω^2. Indeed, notice that Case 2 in the proof above requires the tilt excess of T to be sufficiently small relative to the relevant power of Ω; this cannot be ensured if such a power is quadratic. However, at the cost of decreasing this power slightly, we obtain the desired conclusion.
§ GRAPHICAL APPROXIMATIONS
Given Theorem <ref>, the graphical approximation results of <cit.> relative to a balanced cone follow immediately, after merely replacing any application of Almgren's strong Lipschitz approximation <cit.> with its semicalibrated variant <cit.>. We provide the main conclusions here, for clarity.
Let us begin by recalling the notions of Morgan angles and M-balanced cones, which are key for the results in this section.
Given two m-dimensional linear subspaces α, β of ℝ^m+n whose intersection has dimension m-2, we consider the two positive eigenvalues λ_1 ≤λ_2 of the quadratic form Q_1: α→ given by Q_1(v) := ^2(v,β). The Morgan angles of the pair α and β are the numbers θ_i (α, β) := arcsin√(λ_i) for i=1,2.
Let M≥ 1, N∈. We say that = α_1∪⋯∪α_N∈(Q) is M-balanced if for every i≠ j, the inequality
θ_2 (α_i, α_j) ≤ M θ_1 (α_i, α_j)
holds for the two Morgan angles of the pair α_i, α_j.
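As a concrete illustration (added here purely for the reader's convenience; the plane choices and the value M = 2.5 below are arbitrary), the following sketch computes the Morgan angles of a pair of 2-planes in ℝ^4 in two ways: via the eigenvalues of the quadratic form Q_1(v) = dist^2(v, β) restricted to α, and via principal angles, with which they coincide for this construction.

import numpy as np
from scipy.linalg import subspace_angles

theta1, theta2 = 0.2, 0.5                        # prescribed angles (radians)
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])            # alpha = span{e1, e2}
B = np.array([[np.cos(theta1), 0.], [0., np.cos(theta2)],
              [np.sin(theta1), 0.], [0., np.sin(theta2)]])        # beta, tilted out of alpha

# eigenvalues of Q_1(v) = dist^2(v, beta) on alpha
P_beta = B @ np.linalg.inv(B.T @ B) @ B.T        # orthogonal projection onto beta
lam = np.sort(np.linalg.eigvalsh(A.T @ (np.eye(4) - P_beta) @ A))
morgan = np.arcsin(np.sqrt(lam))                 # theta_i = arcsin(sqrt(lambda_i))

print(morgan)                                    # ~ [0.2, 0.5]
print(np.sort(subspace_angles(A, B)))            # principal angles agree here
print(morgan[1] <= 2.5 * morgan[0])              # this pair is M-balanced for M = 2.5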
Furthermore, for = α_1∪⋯∪α_N∈(Q), recall the following notation for the minimal separation between the planes in :
σ() := min_1 ≤ i<j ≤ N(α_i∩_1,α_j∩_1) .
We additionally recall the layering subdivision of <cit.>. More precisely, we apply <cit.> with a parameter δ̅, to be fixed in Assumption <ref> below, in place of δ therein. This yields a family of sub-cones 𝐒 = 𝐒_0 ⊋𝐒_1 ⊋⋯⊋𝐒_κ where 𝐒_k consists of the union of the planes α_i with i∈ I (k) for the set of indices I(k) therein. We then distinguish two cases:
(a) if max_i<j∈ I (κ) (α_i ∩_1, α_j ∩_1) < δ̅, we define an additional cone 𝐒_κ+1 consisting of a single plane, given by the smallest index in I (κ) and we set κ̅:= κ+1 and I(κ̅):= {min I(κ)};
(b) otherwise, we select no smaller cone and set κ̅:= κ.
In this section, we work with the following underlying assumption.
Suppose T and ω are as in Assumption <ref> and T (_4) ≤ 4^m (Q+1/2) ω_m. Suppose 𝐒=α_1∪⋯∪α_N ∈𝒞 (Q)∖𝒫 is M-balanced, where M≥ 1 is a given fixed constant.
Firstly, we denote by δ^* the minimum of the parameters δ needed to ensure that the semicalibrated versions (in ambient space ^m+n) of <cit.> are applicable to all the cones 𝐒_k, k∈{0,1,…,κ̅}; note that all the _k are M-balanced by construction and that therefore δ^* = δ^*(m,n,Q,M)>0. Subsequently, we fix a parameter τ = τ(m,n,Q,M)>0 smaller than c δ^* for the small constant c = c(m,n,Q)>0 determined by <cit.> (with n̅ = n and Σ = ^m+n); note that this constant remains completely unchanged in the setting herein.
We then fix the parameter δ̅ smaller than c τ for this same constant c; so δ̅= δ̅(m,n,Q,M)>0. In particular, δ̅≤δ^*. Finally, ε = ε(m,n,Q,δ^*,δ̅,τ)>0 is determined in Proposition <ref> below, and will be smaller than both c δ̅ for the same parameter c above, and the parameter of <cit.>. Note that is implicitly additionally dependent on the two parameters σ,ς of Proposition <ref>, which are fixed arbitrarily. We assume that {Θ (T,·) ≥ Q}∩_ε (0) ≠∅ and suppose that
𝔼 (T, 𝐒, 𝐁_4) + Ω^2-2δ_3≤ε^2 σ (𝐒)^2 ,
where 𝔼(T,,_4) is defined as in Definition <ref>.
§.§ Whitney decomposition
We recall here the main aspects of the Whitney decomposition of <cit.> and the associated notation. We refer the reader therein for more details, including figures illustrating the decomposition and associated regions in the ambient space.
Let =α_1∪⋯∪α_N∈(S). Let L_0 be the closed cube in V=V() with side-length 2/√(m-2) centered at 0 and let R be the rotationally invariant (around V) region given by
R :={p: 𝐩_V (p) ∈ L_0 and 0<|𝐩_V^⊥ (p)|≤ 1} .
We recall here that we are assuming m≥ 3 (cf. Assumption <ref>); note that this is the only reason why such a restriction is necessary.
For every ℓ∈ℕ denote by 𝒢_ℓ the collection of (m-2)-dimensional cubes in the spine V obtained by subdividing L_0 into 2^ℓ (m-2) cubes of side-length 2^1-ℓ/√(m-2), and we let 𝒢 = ⋃_ℓ𝒢_ℓ. We write L for a cube in 𝒢, so L∈𝒢_ℓ for some ℓ∈. When we want to emphasize the dependence of the integer ℓ on L we will write ℓ (L). We use the standard terminology parent, child, ancestor, descendant to describe relations of cubes; see <cit.> for details. For every L∈𝒢_ℓ we let
R (L) := {p: 𝐩_V (p)∈ L 2^-ℓ-1≤ |𝐩_V^⊥ (p)|≤ 2^-ℓ} .
For each L∈𝒢_ℓ we let y_L∈ V be its center and denote by 𝐁(L) the ball 𝐁_2^2-ℓ(L) (y_L) (in ^m+n)
and by 𝐁^h (L) the set 𝐁(L) ∖ B_ρ_*2^-ℓ(L) (V), where ρ_* is as in <cit.>. We identify three mutually disjoint subfamilies of cubes in 𝒢: outer cubes, central cubes and inner cubes, defined precisely in <cit.>. These families of cubes will be denoted by 𝒢^o, 𝒢^c, and 𝒢^in, respectively. By construction, any cube L∈𝒢 is either an outer cube, or a central cube, or an inner cube, or a descendant of an inner cube.
* The outer region, denoted R^o, is the union of R (L) for L varying over elements of 𝒢^o.
* The central region, denoted R^c, is the union of R (L) for L varying over elements of 𝒢^c.
* Finally, the inner region, denoted R^in, is the union of R(L) for L ranging over the elements of 𝒢 which are neither outer nor central cubes.
For every i∈{1, …, N} we further define
R^o_i := ⋃_L∈𝒢^o L_i ≡α_i∩⋃_L∈𝒢^o R (L)
and let Q_i := Q_L_0, i.
We refer the reader to <cit.> for key properties about the Whitney cubes and the conical excess of T associated to each of them; the conclusions remain unchanged herein.
Observe that the results of <cit.> remain valid here also, with all instances of replaced by Ω^1-δ_3 in the estimates, given the conclusions of Theorem <ref>. In particular, note that the choice of a(Q,m) is determined by <cit.>, with the proof and the constant ρ_*(Q,m) remaining unchanged herein; namely, a(Q,m) = ρ_*4. Indeed, the compactness procedure therein still yields a limiting area-minimizing current in this setting, since Ω_k converges to zero along the sequence, and thus we may proceed to exploit the monotonicity of mass ratios in the same way. Since such a compactness will be exploited numerous times in the following sections, we elaborate on it in the following remark, which we will refer back to.
We make a note on a procedure that will be often used in the rest of the article. When proving some statements, for instance the crude splitting lemma building up to Proposition <ref> above and Proposition <ref> below, we argue by contradiction. In particular, we consider a sequence of currents T_k, semicalibrated by forms ω_k, and extract a converging subsequence limiting to a certain T_∞, (a valid procedure under our standing hypothesis, e.g. mass bounds on the T_k's). As written, we have no hope to obtain further information on the limit T_∞. However, in all the statement we will prove, we will also have the corresponding Ω_k converging to zero, whence allowing us to deduce that T_∞ is area-minimizing, and proceed as in the relevant proofs of <cit.>.
The final conclusion of the results in <cit.>, rewritten for a semicalibrated current T is the following, which is the analogue of <cit.>.
Let T, ω and 𝐒=α_1∪⋯∪α_N be as in Assumption <ref>. Then, for every σ, ς>0 there are constants C = C(m,n,Q,δ^*, τ, δ̅)>0 and ε = ε(m,n,Q,δ^*, τ, δ̅, σ, ς)>0 such that the following properties hold.
(i) R∖ B_σ (V) is contained in the outer region R^o.
(ii) There are Lipschitz multi-valued maps u_i : R^o_i →𝒜_Q_i (α_i^⊥) and closed subsets K̅_i (L)⊂ L_i satisfying
T_L,i𝐩_α_i^-1 (K̅_i(L))= 𝐆_u_i𝐩_α_i^-1 (K̅_i (L)) ∀ L∈𝒢^o ,
as well as the estimates <cit.>, and
∫_R_i |Du_i|^2 ≤ C σ^-2𝐄̂ (T, 𝐒, _4) + C Ω^2-2δ_3 ,
for R_i := (R∖ B_σ(V))∩α_i.
(iii) If additionally Ω^2-2δ_3≤ε^2 𝐄̂(T, 𝐒, _4) and we set v_i := 𝐄̂ (T, 𝐒, _4)^-1/2 u_i, then there is a map w_i: R_i →𝒜_Q_i (α_i^⊥) which is Dir-minimizing and such that
d_W^1,2 (v_i, w_i) ≤ς ,
where d_W^1,2 is the W^1,2 distance between Q-valued maps; see for instance <cit.> for a definition.
Note that in order to obtain the conclusion (iii) of Proposition <ref>, we apply <cit.> in place of <cit.>, which is used to establish this conclusion when T is area-minimizing.
§ CONE BALANCING
In this section, we observe that all of the results of <cit.> remain valid in the case when T is semicalibrated, with all instances of replaced with Ω^1- δ_3. This is again due to an application of the height bound from Lemma <ref> when proving Proposition <ref> below assuming Proposition <ref>. For the convenience of the reader, and since it will be useful for the succeeding sections, we provide the statements here. First of all, we recall the minimal separation σ() (see (<ref>)) between the planes within a given cone = α_1∪⋯∪α_N as introduced in the preceding section, as well as the maximal separation
μ() := max_1≤ i<j ≤ N(α_i∩_1,α_j∩_1) .
Assume that T and ω are as in Assumption <ref>, 𝐒 = α_1∪⋯∪α_N ∈(Q). Then, there are constants M = M(Q,m,n)>0 and ε_0 = ε_0(Q,m,n)>0 with the following property. Assume that
Ω^2-2δ_3≤ε_0^2 𝔼 (T, 𝐒, 𝐁_1) ≤ε_0^4 𝐄^p(T,𝐁_1) .
Then there is a subset {i_1, … , i_k}⊂{1, … , N} with k≥ 2 such that, upon setting 𝐒' = α_i_1∪⋯∪α_i_k, the following holds:
(a) 𝐒' is M-balanced;
(b) 𝔼 (T, 𝐒', _1) ≤ M 𝔼 (T, 𝐒, _1);
(c) ^2(∩_1,^'∩_1) ≤ M𝔼(T,,_1);
(d) M^-1^p (T, _1) ≤μ ()^2 =
μ (^')^2≤ M ^p (T, _1).
The proof of Proposition <ref> reduces to the validity of the following proposition, by exactly the same reasoning as that in <cit.>, given that the crude approximation <cit.> remains valid in this setting with replaced by Ω^2-2δ_2.
Assume that
T, ω, and 𝐒 are as in Proposition <ref>. Then there are constants C = C(m,n,N) and ε = ε (m,n,N) with the following property. If we additionally have that
N≥ 2 and Ω^2-2δ_3 + 𝔼 (T, 𝐒, 𝐁_1) ≤ε^2 σ(𝐒)^2 ,
then 𝐒 is C-balanced.
To prove Proposition <ref>, one argues as in <cit.>, namely one first proves a version of the proposition when μ() and σ() are comparable (see <cit.>) and arguing later by induction. The argument follows by contradiction, assuming that the balancing assumption is failing along a sequence of semicalibrated currents T_k with associated semicalibration forms ω_k and cones _k. Note that we have Ω_k^2-2δ_3≤_k^2 σ(_k) with _k ↓ 0. Thus, we still obtain a limiting current T_∞ that is area-minimizing (see Remark <ref> for a similar argument) and supported in some _∞∈(Q)∖ in the non-collapsed case, i.e. when up to subsequence we have lim_k→∞σ(_k) > 0. In light of the main result in <cit.> (see also <cit.>), this contradicts the failure of balancing. Similarly, in the collapsed case lim_k→∞σ(_k) = 0, the compactness argument in <cit.> still yields a limiting Dir-minimizer (the domain of which depends on the case analysis therein), and thus <cit.> may be applied to contradict the failure of balancing.
§ ESTIMATES AT THE SPINE
This section is dedicated to the nonconcentration estimates for T near the spine of , much analogous to those in <cit.>, but appearing first in a multiplicity one setting in the seminal work <cit.> of Simon. Our underlying assumption throughout this section will be the following.
Suppose T and ω are as in Assumption <ref> and T (_4) ≤ 4^m (Q+1/2) ω_m. Suppose 𝐒=α_1∪⋯∪α_N is a cone in 𝒞 (Q)∖𝒫 which is M-balanced, where M≥ 1 is a given fixed constant, and let V denote the spine of 𝐒. For a sufficiently small constant = (Q,m,n,M) smaller than the -threshold in Assumption <ref>, whose choice will be fixed by the statements of Theorem <ref>, Corollary <ref>, and Proposition <ref> below, suppose that
𝔼 (T, 𝐒, 𝐁_4) + Ω^2-2δ_3≤ε^2 σ (𝐒)^2 .
Assume T, ω, and 𝐒 are as in Assumption <ref>, suppose that Θ (T, 0) ≥ Q, and set r= 1/3√(m-2). Then there is a constant C=C(Q,m,n,M)>0 and a choice of ε = ε(Q,m,n,M)>0 in Assumption <ref> sufficiently small such that
∫_𝐁_r|q^⊥|^2/|q|^m+2 d‖T‖(q)
≤ C (Ω^2-2δ_3 + 𝔼(T, 𝐒, 𝐁_4))
∫_𝐁_r |𝐩_V∘𝐩_T⃗^⊥|^2 d‖T‖≤ C (Ω^2-2δ_3 + 𝔼(T, 𝐒, 𝐁_4)) .
This is the analogue of <cit.>. However, note that in order to get the quadratic Ω error improvement in the estimates (<ref>) and (<ref>), we crucially exploit the nature of the error in the first variation of T, and we recall the generalized mean curvature estimate (<ref>). This is very much analogous to the observation made in <cit.> in the case when T is area-minimizing, but since the main term on the right-hand side of (<ref>) and (<ref>) is in terms of the conical excess of T rather than the tilt excess, we provide a sketch proof, for the purpose of clarity.
For any 0 < s < ρ≤ 4, we may test the first variation identity (<ref>) for T with the radial vector field η(| p |τ) p, where τ∈ [s, ρ] and η is a smooth cut-off function that we take to converge to the characteristic function on [0,1], which yields the classical monotonicity formula estimate (see e.g. <cit.>). Now for any 0< r < R <4, let χ∈ C^∞([0, ∞); ℝ) be monotone non-increasing with χ≡ 1 on [0,r] and χ≡ 0 on [R, ∞). Taking s↓ 0, exploiting the hypothesis Θ(T,0) ≥ Q, multiplying by ρ^m and differentiating in ρ, then multiplying by χ (ρ)^2 and integrating over ρ∈[0,R], we obtain the estimate
∫__r|q^⊥|^2/|q|^m+2 dT (q)
≤ C[∫χ^2 (|q|) dT (q) - ∑_i Q_i ∫_α_iχ^2 (|q|) dℋ^m (q)]
+ C ∫__R|Γ (|q|)q^⊥·H⃗_T (q)|/|q|^m dT (q) ,
where C=C(m,r)>0 and Γ is a suitable locally bounded function with Γ_L^∞≤C̅ (R^m-r^m) for C̅ = C̅(m). See <cit.> for a more detailed computation. Combining this with the elementary identity ab ≤δ a^22 + b^22δ for δ>0 (to be determined) and the estimate (<ref>), the last term on the right-hand side can then be estimated as follows:
∫_𝐁_R|Γ(|q|) q^⊥·H⃗_T(q) |1/| q |^m d‖ T ‖(q) ≤δ/2∫_𝐁_R|q^⊥|^2/| q |^m+2 d‖ T ‖(q)
+ C(δ,m) r^m H⃗_T_L^∞^2 ∫_𝐁_R1/| q |^m-2 d‖ T ‖(q)
≤δ/2∫_𝐁_R|q^⊥|^2/| q |^m+2 d‖ T ‖(q) + C(δ,m) Ω^2 .
Inserting this into (<ref>), we obtain
∫__r|q^⊥|^2/|q|^m+2 dT (q)
≤ C[∫χ^2 (|q|) dT (q) - ∑_i Q_i ∫_α_iχ^2 (|q|) dℋ^m (q)]_(I)
+ Cδ∫_𝐁_R|q^⊥|^2/| q |^m+2 d‖ T ‖(q) + C(δ,m) Ω^2 .
Note that this is of the form <cit.> but with ^2 replaced with Ω^2, and the extra term on the right-hand side, which we will eventually absorb into the left-hand side, but due to the domain being larger, this is slightly delicate and needs to be done at the end. The remainder of the proof therein deals with estimating the term (I) above. Observe that the estimate <cit.> may in fact be more precisely written as
∫χ^2(|q|) dT (q) - ∑_i Q_i ∫_α_iχ^2(|q|) d^m(q)
≤ C ∫ div_T⃗ X (q) dT (q) + C ∫__R |x^⊥|^2 dT + ∫χ (|q|) x ·∇_V^⊥χ (|q|) dS(q)
- ∫χ (|q|) 𝐩_T⃗ (x)·∇_V^⊥χ (|q|) dT (q)
,
for the vector field X (q) = χ (|q|)^2 𝐩_V^⊥ (q) and S=(). Thus, in order to follow the arguments of <cit.> verbatim, it merely remains to verify that
∫_T⃗ X (q) dT (q) ≤δ̃∫__R|q^⊥|^2/|q|^m+2 dT(q) + C(δ̃) Ω^2 ,
for δ̃>0 small enough such that, after combining (<ref>) with (<ref>), the first term on the right-hand side of (<ref>) may be reabsorbed into the left-hand side of (<ref>).
To see that (<ref>) holds, we use the first variation of T to write
∫ div_T⃗ X dT = - ∫ X^⊥·H⃗_T dT ,
and argue as in (<ref>) to obtain the estimate
∫ |X^⊥·H⃗_T | dT≤δ̃∫__R|q^⊥|^2/|q|^m+2 dT(q) + C(δ̃) H⃗_T _L^∞^2 ∫__R |q|^m+2 dT(q) ,
from which (<ref>) follows immediately. We may then use the estimates corresponding to those in <cit.>, for which we must use the semicalibrated analogues of the relevant results in <cit.> (with Σ = ^m+n and all occurrences of replaced with Ω^1-δ_3). The majority of such results were omitted in Section <ref> herein for brevity, but nevertheless hold true.
In summary, when combined with (<ref>), we arrive at the final estimate
∫__r|q^⊥|^2/|q|^m+2 dT (q)
≤ C(T,,_4) + C(δ,δ̃,m)Ω^2-2δ_3
+ C(δ +δ̃) ∫_𝐁_R|q^⊥|^2/| q |^m+2 d‖ T ‖(q) .
Invoking, for instance, <cit.>, for δ,δ̃ sufficiently small (depending on Q,m,n), we may indeed absorb the final term on the right-hand side above into the left-hand side, concluding the proof.
In the proof of Theorem <ref>, one can alternatively notice that the following identity holds, for any vector field X∈ C_c^∞(^m+n∖∂ T;^m+n):
⟨ dω X, T⃗⟩ = ⟨ dω, X ∧T⃗⟩ = ⟨ dω, X^⊥∧T⃗⟩,
where the first equality follows by definition of restriction, and where perpendicularity is with respect to the approximate tangent space of T. In particular, this can be used in place of the estimate (<ref>) when establishing (<ref>) and (<ref>) above.
From Theorem <ref>, we further deduce the following, which corresponds to <cit.>.
Assume T, ω, 𝐒 and r are as in Theorem <ref>. Then, there is a choice of ε = ε(Q,m,n,M) in Assumption <ref>, possibly smaller than that in Theorem <ref>, such that for every κ∈ (0,m+2),
∫__r^2 (q, 𝐒)/|q|^m+2-κ dT (q)
≤ C_κ (Ω^2-2δ_3 + (T, 𝐒, _4)) ,
where here C_κ = C_κ(Q,m,n,M,κ).
Corollary <ref> follows immediately from the following lemma, combined with Theorem <ref>.
Let T, ω and be as in Assumption <ref> with 𝐁_1 ⊂Ω and Θ (T, 0)≥ Q. Then we may choose sufficiently small in Assumption <ref> such that for each κ∈ (0,m+2) we have
∫__1 ^2(q, 𝐒)/|q|^m+2-κ dT (q)
≤ C_κ∫__1|q^⊥|^2/|q|^m+2 dT (q)+ C_κ (𝐄̂ (T, 𝐒, _4) + Ω^2) ,
for some constant C_κ=C_κ(Q,m,n,M,κ)>0.
Fix κ∈ (0,m+2). As in the proof of Theorem <ref>, we aim to follow the reasoning of the area-minimizing counterpart <cit.> of this lemma, which we may indeed do, provided that we verify
∫ div_T⃗ X dT≤δ∫__1^2(q,)/|q|^m+2-κ dT(q) + C(δ) Ω^2 ,
for the vector field
X (q) := ^2 (q,𝐒) (max{r, |q|}^-m-2+κ - 1)_+ q ,
supported in _1 (and constant on _r). To see the validity of (<ref>), we simply exploit the first variation identity (<ref>) and an estimate analogous to (<ref>) to obtain
∫ |X^⊥·H⃗_T| dT≤δ∫__1 ^2(q,)/|q|^m+2-κ dT(q) + C(δ) Ω^2 ∫__1^2(q,) |q^⊥|^2 dT(q) .
Finally we have the following shifted estimates around a given point of high multiplicity (cf. <cit.>).
Assume T, ω, and 𝐒 are as in Assumption <ref> and in addition {Θ (T, ·)≥ Q}∩_ε (0) ≠∅. Then there is a radius r=r(Q,m,n) and a choice of ε = ε(Q,m,n,M) in Assumption <ref>, possibly smaller than those in Theorem <ref> and Corollary <ref>, such that for each κ∈ (0,m+2), there are constants C̅_κ=C̅_κ(Q,m,n,M,κ)>0 and C=C(Q,m,n,M) such that the following holds. If q_0∈_r (0) and Θ (T, q_0)≥ Q, then
∫__4r (q_0)^2 (q, q_0 + 𝐒)/|q-q_0|^m+2-κ dT (q) ≤C̅_κ (Ω^2-2δ_3 + (T, 𝐒, _4)) .
|𝐩_α_1^⊥ (q_0)|^2 + μ (𝐒)^2 |𝐩_V^⊥∩α_1 (q_0)|^2 ≤ C (Ω^2-2δ_3 + (T, 𝐒, _4)) .
The proof of Proposition <ref> is entirely analogous to that of <cit.>, again recalling that the variant of <cit.> herein has Ω^1-δ_3 in place of each instance of .
§ FINAL BLOW-UP AND CONCLUSION
We are now in a position to outline how to conclude the validity of Theorem <ref>, given everything in the preceding sections of this part.
We begin with the following weaker conical excess decay result (see <cit.>), whose validity implies that of Theorem <ref>.
Fix Q,m,n as before, and let M≥ 1 be as in Proposition <ref>. Fix also ς_1>0. Then, there are constants ε_1 = ε_1(Q,m,n,ς_1)∈ (0,1/2], r^1_1 = r^1_1(Q,m,n,ς_1)∈ (0,1/2] and r^2_1 = r^2_1(Q,m,n,ς_1)∈ (0,1/2], such that the following holds. Suppose that
(i) T and ω are as in Assumption <ref>;
(ii) ‖T‖(𝐁_1)≤ (Q+1/2)ω_m;
(iii) There is ∈𝒞(Q) which is M-balanced, such that
𝔼(T,,_1)≤_1^2σ()^2
and
__1(ξ)∩{p:Θ(T,p)≥ Q}≠∅ ∀ξ∈ V()∩_1/2 ;
(iv) Ω^2-2δ_3≤_1^2 𝔼 (T, , _1) for every ∈𝒞 (Q).
Then, there is ^'∈𝒞(Q)∖𝒫 such that for some i∈{1,2} we have
𝔼(T,^',_r_1^i) ≤ς_1𝔼(T,,_1) .
To prove that if <ref> holds, then Theorem <ref> holds, one may proceed exactly as in <cit.>. Indeed, the argument remains unchanged, after replacing with Ω^1-δ_3 everywhere.
It thus remains to demonstrate that Theorem <ref> holds. With this in mind, we show decay at one of two possible radii as stated therein by considering two possible cases for the cone ∈(Q); collapsed and non-collapsed.
For every Q, m, n and ς_1>0 there are positive constants ε_c = _c(Q,m,n,ς_1)≤ 1/2 and r_c = r_c(Q,m,n,ς_1) ≤ 1/2 with the following property. Assume that
(i) T and ω are as in Assumption <ref>, and T (_1) ≤ω_m (Q+1/2);
(ii) There is a cone 𝐒∈𝒞 (Q) which is M-balanced (with M as in Proposition <ref>), such that (<ref>) and (<ref>) hold with ε_c in place of ε_1, and in addition
μ (𝐒) ≤ε_c;
(iii) Ω^2-2δ_3≤_c^2𝔼(T,,_1) for every ∈𝒞(Q).
Then, there is another cone 𝐒'∈𝒞 (Q) ∖𝒫 such that
𝔼 (T, 𝐒', _r_c)
≤ς_1 𝔼 (T, 𝐒, _1) .
For every Q, m, n, ε^⋆_c >0 and ς_1>0, there are positive constants ε_nc = _nc(Q,m,n,ε^⋆_c,ς_1)≤ 1/2 and r_nc = r_nc(Q,m,n,ε^⋆_c,ς_1) ≤1/2 with the following property. Assume that
(i) T and ω are as in Assumption <ref> and T (_1) ≤ω_m (Q+1/2);
(ii) There is 𝐒∈𝒞 (Q) which is M-balanced (with M as in Proposition <ref>), such that
(<ref>) and (<ref>) hold with ε_nc in place of ε_1 and in addition
μ (𝐒) ≥ε^⋆_c;
(iii) Ω^2-2δ_3≤^2_nc𝔼(T,,_1) for every ∈𝒞(Q).
Then, there is another cone 𝐒'∈𝒞 (Q) ∖𝒫 such that
𝔼 (T, 𝐒', _r_nc)
≤ς_1 𝔼 (T, 𝐒, _1) .
Since the proofs of Proposition <ref> and Proposition <ref> exploit the results of Sections <ref>-<ref> in the same way as their counterparts in <cit.>, we will merely provide a brief outline of the argument here.
We will argue by contradiction in both the collapsed and the non-collapsed case.
Namely, we suppose that we have a sequence of currents T_k, corresponding semicalibrations ω_k and cones _k∈(Q) satisfying the hypotheses of either Proposition <ref> with _c^(k) = 1k→ 0 or Proposition <ref> for some fixed _c^⋆>0 but _nc^(k) = 1k→ 0, but such that the respective decay conclusions (<ref>), (<ref>) fail for any possible choice of radii r_c,r_nc.
In particular,
(Ω^2-2δ_3/𝔼 (T_k, 𝐒_k, _1) + 𝔼 (T_k, 𝐒_k, _1)/σ (𝐒_k)^2)→ 0 .
For the non-collapsed case, we recall the coherent outer approximations u_i^k of Proposition <ref>. Meanwhile, in the collapsed case, we introduce transverse coherent approximations as in <cit.> as follows. Up to extracting a subsequence, we may write _k=α_1∪⋯∪α_N^k for α_1 and N fixed, and write each plane α_i^k as the graph of a linear map A_i^k:α_1 →α_1^⊥. We may in addition reparameterize u_i^k over α_1 to obtain a map v_i^k : R^o_1 →𝒜_Q_i^k (α_1^⊥) whose graph coincides with that of u_i^k over 𝐩_α_1^-1 (R_1^o), where R^o_1 is a suitable graphicality region, defined rigorously in <cit.>. The transverse coherent approximations are then defined to be the collection of maps
w_i^k := v_i^k⊖ A_i^k: R̃_1^o →_Q_i^k(α_1^⊥) , i=1,…, N ,
where we use the notational shorthand g⊖ f for the multivalued map ∑_i g_i - f. Observe that the estimates of <cit.>, with _k replaced by Ω_k^1-δ_3, are valid for A_i^k, v_i^k and w_i^k. Furthermore, the nonconcentration estimates <cit.> hold, again with Ω_k^1-δ_3 in place of _k in the errors.
We in turn define the normalizations
u̅^k_i := u^k_i/√(𝔼 (T_k, 𝐒_k, _1))
w̅^k_i := w^k_i/√(𝔼 (T_k, 𝐒_k, _1)) ,
on (_r̅∩α^k_i) ∖ B_1/k (V) and (_r̅∩α_1)∖ B_1/k (V) respectively, for r̅ := r4 with r=r(Q,m,n) as in Proposition <ref> (which we may assume is contained in the domains of definition for u_i^k and w_i^k). Arguing as in <cit.>, we may then assume that, up to subsequence, u̅^k_i and w̅^k_i converge strongly in W^1,2 locally away from V and strongly in L^2 on the entirety of B_r to W^1,2 maps u̅_i and w̅_i that are Dir-minimizing on (_r̅∩α^k_i) ∖ V and (_r̅∩α_1)∖ V, and since V has capacity zero, they are in fact Dir-minimizing on (_r̅∩α^k_i) and (_r̅∩α_1) respectively. Moreover, the conclusions of <cit.> are satisfied. Exploiting <cit.> and proceeding as in <cit.> to propagate this decay (up to subtracting an appropriate superposition of linear maps), we arrive at the desired contradiction.
PART:
Failure of monotonicity for semicalibrated intrinsic planar frequency
The aim of this last part is to provide a simple example of a semicalibrated current with good decay properties towards a flat tangent plane, but that does not exhibit an almost monotone planar frequency function as introduced in <cit.>. This therefore illustrates a difference between the present approach to Theorem <ref> when compared to trying to adapt the corresponding one in <cit.>, which we have been unable to adapt to the semicalibrated setting. En route to this, we will point out some differences between the setting herein and the area-minimizing one.
§ PLANAR FREQUENCY FUNCTION
We begin by introducing the analogue of the intrinsic planar frequency function of <cit.> for a semicalibrated current T. Let z ∈ℝ^n + m, let π⊂^m+n be an m-dimensional plane and let ρ_0 > 0.
We will henceforth work under the following assumption.
Let ρ_0>0. Suppose that T is a semicalibrated rectifiable current in 𝐂_ρ_0(z, π) satisfying
∂ T 𝐂_ρ_0(z, π) = 0, and sup_p ∈ T ∩𝐂_ρ_0(z, π)(p, π + z) < ∞.
Let ϕ [0, ∞) → [0,1] be the monotone Lipschitz cutoff function defined in (<ref>). For r ∈ (0, ρ_0], we can define the intrinsic L^2 height of T at scale r around z with respect to the plane π to be
H_T, π, z(r) := 2/r^m-1∫_𝐂_r(z, π) ∖𝐂_r/2(z, π) dist^2(p, π + z) |∇_T⃗ |𝐩_π(p-z)| |^2/|𝐩_π(p-z)| d‖ T ‖(p) .
Note that (<ref>) can be rewritten as
H_T, π, z(r) = - 1/r^m-1∫^2(p, π + z) |∇_T⃗ |𝐩_π(p-z)| |^2/ |𝐩_π(p-z)|ϕ^'(|𝐩_π(p-z)|/r) d‖ T ‖(p) .
Furthermore, we can introduce the intrinsic Dirichlet energy of T at scale r around z with respect to the plane π as
D_T, π, z(r) := 1/2r^m-2∫|𝐩_T(p) - 𝐩_π (p) |^2 ϕ(|𝐩_π(p-z)|/r) d ‖ T ‖(p) .
Note that D_T,π,z(r) may be considered as a regularization of the non-oriented tilt excess of T in the cylinder _r(z,π), cf. (<ref>). Pertinent to our setting, we define the corresponding intrinsic semicalibrated term
L_T,π,z(r):= 1/(2r^m-2)∫⟨ dω⌞𝐩_π^⊥(p-z), T⃗(p)⟩ ϕ(|𝐩_π(p-z)|/r) d ‖ T ‖(p)
and
Γ_T,π,z(r) := D_T,π,z(r) + L_T,π,z(r).
Notice that the additional term ⟨ dω⌞𝐩_π^⊥(p-z), T⃗⟩ above arises from testing the first variation of T with the variation vector field X(p)=ϕ(|𝐩_π(p-z)|/r)𝐩_π^⊥(p-z); see <cit.>. In analogy with <cit.>, whenever H_T, π, z(r) > 0, we may define the intrinsic planar frequency function N_T, π, z(r) of T at z relative to π by
N_T, π, z(r) = Γ_T, π, z(r)/H_T, π, z(r) .
Note that when T is calibrated (and in particular area-minimizing), i.e. when dω≡ 0, the above frequency indeed reduces to the one introduced in <cit.> for area-minimizing currents. In <cit.>, it is shown that the intrinsic planar frequency associated to area-minimizing integral currents is almost-monotone under a suitable decay hypothesis (see Section <ref> below for a more precise statement), thus laying the foundation for a more refined analysis of the singular set of area-minimizers.
The main result of this part is the following.
There exists a smooth, radially symmetric function f: B_1 ⊂π≡ℝ^m ×{0}→ such that the associated semicalibrated current _f has the property
N_T, π, 0(r) → + ∞ as r ↓ 0.
Namely, we provide the construction of an example that not only violates almost-monotonicity of the intrinsic planar frequency function in the semicalibrated setting, but allows for it to diverge to +∞ as the scale goes to zero. Before we proceed with the construction, let us provide a more detailed heuristic explanation, together with a comparison with what happens in the area-minimizing setting.
§.§ Comparison with Proposition <ref>
One may consider the statement of Proposition <ref> as a conclusion that a posteriori, one does not require center manifolds to take blow ups at points x∈_Q(T) with singularity degree (T,x) ∈ [1,2-δ_2) (and indeed, this is the case in the work <cit.> when the planar frequency is in [1,2)). This suggests that one may possibly use a planar frequency function like N_T,π,z (in place of a frequency relative to center manifolds) to analyze such points. This is not inconsistent with the validity of Theorem <ref>. Indeed, in the latter, the smooth submanifold (f) coincides with the center manifold associated to T=_f locally around the origin. On the other hand, our choice of f will have infinite order of vanishing at the origin, which corresponds to blow-up of the planar frequency function there. Note, however, that 0 is not a flat singular point of _f in this case; in fact _f has no singularities. For the same reason, no single-sheeted example will be inconsistent with the validity of Proposition <ref>. However, since it is not clear how to meaningfully restrict the notion of intrinsic planar frequency to such a specific scenario in order to improve its properties, we do not pursue this any further.
§.§ Comparison with <cit.>
The area-minimizing hypothesis is crucially used in <cit.> to control the error terms arising when differentiating H_T, π, z, and D_T, π, z, which in turn produce errors for the radial derivative of N_T,π,z. More precisely, the area-minimizing property of T is exploited therein to infer the following bounds, cf. Lemma 3.9 and Lemma 3.11 of loc. cit.,
| H_T, π, z^'(r) + 2 r^- m∫|∇^⊥ |𝐩_π(p-z)| |^2 |𝐩_π(p-z)| ϕ'(|𝐩_π(p-z)|/r) d‖ T ‖(p) |
≤ C η^2γ r^2 αγ - 1 H_T, π, z(r) ,
where ∇^⊥≡∇_T^⊥, and
| D_T, π, z^'(r) + 2 r^- m∫|_π^⊥(∇_T |𝐩_π(p-z)|) |^2/|∇_T |𝐩_π(p-z)| |^2 |𝐩_π(p-z)| ϕ'(|𝐩_π(p-z)|/r) d‖ T ‖(p) |
≤C/rD_T,π,z(r)^γ((m - 1)D_T, π, z(r) + r D_T, π, z^'(r)),
for some γ=γ(Q,m,n)>0 and C=C(Q,m,n)>0, whenever T as in Assumption <ref> satisfies the mass ratio bounds
Θ(T,z) ≥ Q, T(_7ρ_0/4(z,π)) ≤ (Q+δ) ω_m (7ρ_0/4)^m ,
for some δ=δ(Q,m,n) > 0 and the additional planar decay hypothesis
1/ω_m (7s /4)^m+2∫__7s /4(z,π)^2(p, π +z) dT(p) ≤η^2 (s/ρ_0)^2α ∀ s ∈ [σ_0,ρ_0] ,
for some σ_0∈ (0,ρ_0), some η_0=η_0(Q,m,n)>0 and η∈ (0,η_0].
The estimates (<ref>) and (<ref>), together with the variational identities for H_T,π,z, D_T,π,z, in turn can be used to prove the almost-monotonicity
N_T,π,z(r) ≤ e^Cη^γ (s/ρ_0)^αγ N_T,π,z(s) ∀σ_0 ≤ r < s ≤ρ_0 ,
of the intrinsic planar frequency function in the area-minimizing case, provided that H_T,π,z(τ) > 0 on [r,s]. The first challenge when trying to adapt this argument to the semicalibrated setting is that the terms H_T, π, z^' and D_T, π, z^' will now contain extra errors depending on the semicalibration dω (due the fact that the first variation of a semicalibrated current has a non-vanishing right-hand side). Moreover, one must additionally consider the behavior of L_T, π, z^' when controlling the variational error terms. Complications arise when one tries to bound all of these error terms by powers of the intrinsic Dirichlet energy and L^2 height, as in (<ref>) and (<ref>).
§ PROOF OF THEOREM <REF>
We are now in a position to construct a counterexample to the almost-monotonicity of the intrinsic planar frequency N_T,π,0 associated to a semicalibrated current T=_f associated to the graph of a smooth radially symmetric function f relative to the plane π≡^m×{0}⊂^m+1, as claimed in Theorem <ref>.
To this end, consider f ∈ C^∞(B_1), where B_1 ⊂^m ×{0}⊂^m+1. Consider then the graph of f as a submanifold of ℝ^m + 1:
(f) = {(x, f(x)); x ∈ B_1}.
We claim that (f) is a semicalibrated submanifold of ℝ^m +1. Indeed, consider the unit normal to (f) given by
ν_x = 1/√(1 + |∇ f(x) |^2)(- ∇ f(x), 1),
and define the m-form ω(X_1, …, X_m) = det (X_1, …, X_m, ν) for vectors X_1, …, X_m. It is then straightforward to check that |ω(Z_1, …, Z_m) |≤ 1 for unit vectors Z_1, …, Z_m. Furthermore, we have that ω(Y_1, …, Y_m) = 1, if Y_i are orthonormal vectors belonging to T_(x, f(x))(f), so that the submanifold (f) ⊂^m+1 is semicalibrated. In particular, the current T=_f associated to it (see e.g. <cit.>) is also semicalibrated. From this, it is clear that not every semicalibrated current can have an almost-monotone planar frequency function. We will however provide a natural example to illustrate the blow-up of planar frequency. We recall the quantities D_T, π, 0(r), H_T, π, 0(r), and L_T,π,0(r) introduced in the preceding section for this particular choice of T, with π≡ℝ^m×{0}.
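As a quick numerical sanity check of these two claims (added here for illustration only; the choices m = 2, n = 1 and the sample value of ∇ f are arbitrary), one can verify that |ω| ≤ 1 on orthonormal frames and that |ω| = 1 on an orthonormal basis of the tangent plane of the graph:

import numpy as np

rng = np.random.default_rng(0)
gx, gy = 0.7, -1.3                                            # sample value of grad f at a point
nu = np.array([-gx, -gy, 1.0]) / np.sqrt(1 + gx**2 + gy**2)   # unit normal nu_x

# comass check: |omega(Z1, Z2)| = |det(Z1, Z2, nu)| <= 1 for orthonormal Z1, Z2
vals = []
for _ in range(2000):
    Z, _ = np.linalg.qr(rng.standard_normal((3, 2)))          # random orthonormal pair
    vals.append(abs(np.linalg.det(np.column_stack([Z, nu]))))
print(max(vals) <= 1 + 1e-12)                                 # True

# |omega| = 1 on an orthonormal basis of the tangent plane of graph(f)
T, _ = np.linalg.qr(np.array([[1., 0.], [0., 1.], [gx, gy]]))
print(np.isclose(abs(np.linalg.det(np.column_stack([T, nu]))), 1.0))   # True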
§.§ Intrinsic quantities for a graph
We start by unpacking the definitions of intrinsic Dirichlet energy, L^2 height, and the semicalibrated term in the case where T=_f for f as above; we will define f later. Let us begin with the energy D_T, π, 0(r). The projection matrix associated with π is given by
_π = [ _m × m 0; 0 0 ],
while the projection matrix for the tangent space T_(x, f(x))(f) is
_T_(x, f(x))(f) = _(m + 1) × (m + 1) - ν_x ⊗ν_x
In particular, we can compute
1/2|𝐩_T_(x, f(x))(f) - 𝐩_π|^2 = m - (𝐩_T_(x, f(x))(f) : 𝐩_π) = |∇ f |^2/(1 + |∇ f |^2),
where A:B denotes the Hilbert-Schmidt inner product between matrices A,B, so that
D_T, π, 0(r) = r^2 - m∫ϕ(|x|/r) |∇ f(x) |^2/1 + |∇ f(x) |^2 d ‖ T‖(x,f(x)) .
Letting ϕ converge to the characteristic function of the unit interval from below, and recalling the area formula for a graph, we obtain
D_T, π, 0(r) = r^2 - m∫_B_r|∇ f |^2/√(1 + |∇ f |^2) dℒ^m.
We can now turn to the height H_T, π, 0(r). Write
H_T, π, 0(r) = - r^1 - m∫| f(x)|^2 |∇_T |x| |^2/|x|ϕ'(|x|/r) d ‖ T ‖(x,f(x)),
and, after recalling ∇ r = (x, 0)/|x|, we can compute
|∇_T |x| |^2 = |_T_(x, f(x))(f) (∇ |x|) |^2 = 1 + |∇_θ f |^2/1 + |∇ f |^2,
where ∇_θ denotes the angular part of the gradient. Thus, after letting ϕ converge to the characteristic function of the unit interval, we infer
H_T, π, 0(r) = r^1-m∫_∂ B_r(0)| f |^2 (1 + |∇_θ f |^2)/√(1 + |∇ f |^2) d ℋ^m-1 .
Finally, we rewrite the definition of the semicalibrated term
L_T,π,0(r) = 1/(2r^m-2)∫⟨ dω⌞ (0,y), T⃗⟩ϕ(|x|/r) d‖ T‖(x,y),
where we write ^m+1∋ p=(x,y)∈π×π^⊥. Letting ϕ converge to the characteristic function of the unit interval and again using the area formula, we obtain
L_T,π,0(r) = 1/(2r^m-2)∫_B_r⟨ dω⌞ (0, …, 0, f), T⃗⟩√(1 + |∇ f |^2) dℒ^m ,
where T⃗(x) is the m-vector τ_1 ∧…∧τ_m, for {τ_i}_i a basis of the tangent space T_(x, f(x))(f). Note that here we have also used that in this particular setting, _π^⊥(x,y) = (0, …, 0, f(x)). Note that
τ_i = (e_i + (∇ f · e_i) e_m+1)/√(1 + (∇ f · e_i)^2) ,
for i ∈{1, …, m}, where {e_i}_i is a basis of ℝ^m + 1. The (m +1)-form dω is given by
dω = (-1)^m div( ∇ f/√(1 + |∇ f |^2)) dx_1 ∧ dx_2 ∧…∧ dx_m+1 .
Note that one can alternatively use Cartan's magic formula to arrive at the same final expression for dω. Then,
(0, …, 0, f) ∧T⃗ = (f e_m + 1) ∧τ_1 ∧…∧τ_m = (- 1)^m f/Π_i = 1^m√(1 + (∇ f · e_i)^2) e_1 ∧ e_2 ∧…∧ e_m + 1,
so that
⟨ dω⌞ (0, …, 0, f), T⃗⟩ = div( ∇ f/√(1 + |∇ f |^2)) f/∏_i=1^m√(1 + (∇ f · e_i)^2)
Thus, the semicalibrated term is
L_T,π,0(r) = 1/(2r^m-2)∫_B_r div( ∇ f/√(1 + |∇ f |^2)) f/∏_i=1^m√(1 + (∇ f · e_i)^2)√(1 + |∇ f |^2) dℒ^m .
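Before specialising f, the two pointwise identities used above, namely (1/2)|𝐩_T - 𝐩_π|^2 = |∇ f|^2/(1+|∇ f|^2) and |∇_T⃗|x||^2 = (1+|∇_θ f|^2)/(1+|∇ f|^2), can be verified symbolically. The short sketch below is added for illustration only; it takes m = 2 and treats the values of the partial derivatives of f at a point as free symbols.

import sympy as sp

fx, fy, t = sp.symbols("fx fy t", real=True)     # partials of f and polar angle of the point x
g2 = fx**2 + fy**2
nu = sp.Matrix([-fx, -fy, 1]) / sp.sqrt(1 + g2)  # unit normal to graph(f)
P_T = sp.eye(3) - nu * nu.T                      # projection onto the tangent plane
P_pi = sp.diag(1, 1, 0)                          # projection onto pi = R^2 x {0}

lhs1 = sp.Rational(1, 2) * sum((P_T - P_pi)[i, j]**2 for i in range(3) for j in range(3))
print(sp.simplify(lhs1 - g2 / (1 + g2)))         # 0

e_r = sp.Matrix([sp.cos(t), sp.sin(t), 0])       # grad|x| lifted to R^3
f_r = fx * sp.cos(t) + fy * sp.sin(t)            # radial derivative of f
lhs2 = (P_T * e_r).dot(P_T * e_r)                # |P_T grad|x||^2
rhs2 = (1 + g2 - f_r**2) / (1 + g2)              # (1 + |grad_theta f|^2)/(1 + |grad f|^2)
print(sp.simplify(lhs2 - rhs2))                  # 0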
§.§ Definition of f
We are now in a position to define our radially symmetric function. Let f given by
f(x) =
e^-1/| x |^2 x ≠ 0
0 x=0 .
Note that ∇ f(0) = 0 and that f is indeed radially symmetric, so that introducing polar coordinates we can write (abusing notation) f(r) = e^-1/r^2, for r ≥ 0. By the argument at the beginning of this section, the current T=_f associated to the graph of this function is semicalibrated. Furthermore, note that the hypothesis of <cit.> are satisfied. More precisely, let ρ_0 > 0, and consider T𝐂_7ρ_0/4(0, π) for π=^m×{0}⊂^m+1. In particular, Θ(T, 0) = 1, and the almost-monotonicity of mass ratios (see e.g. <cit.>) guarantees that ‖ T ‖(𝐂_7ρ_0/4(0, π)) ≤ (1 + δ)ω_m (7ρ_0/4)^m for ρ_0 sufficiently small, where δ is the parameter of <cit.>. In addition, thanks to the exponential decay of f towards 0, there exist η > 0, σ_0 ∈ (0, ρ_0), and α∈ (0, 1) such that the decay hypothesis (<ref>) holds about the origin, namely
1/ω_m (7s/4)^m + 2∫_𝐂_7s/4(0, π)^2(p, π) d‖ T ‖(p) ≤η^2 ( s/ρ_0)^2 α,
for all s ∈ [σ_0, ρ_0].
Consider now the planar frequency function N_T,π,0 and use (<ref>) and (<ref>) to write the energy and the height for this particular function:
D_T, π, 0(ρ) = ρ^2 - mω_m - 1∫_0^ρ4e^-2/r^2 r^-6/√(1 + 4e^-2/r^2 r^-6) r^m - 1 dr ,
and
H_T, π, 0(ρ) = ω_m - 1e^-2/ρ^2/√(1 + 4e^-2/ρ^2ρ^-6) .
Note that H_T, π, 0(ρ) > 0 for all ρ > 0, implying that N_T, π, 0 is always well-defined. Thus, for any ρ>0 sufficiently small, the classical planar frequency function can be estimated from below as follows:
D_T, π, 0(ρ)/H_T, π, 0(ρ) = e^2/ρ^2√(1 + 4e^-2/ρ^2ρ^-6)ρ^2 - m∫_0^ρ4e^-2/r^2 r^-6/√(1 + 4e^-2/r^2 r^-6) r^m - 1 dr
= ρ^-1 - m e^1/ρ^2√(e^2/ρ^2ρ^6 + 4)∫_0^ρ4e^-1/r^2 r^-3/√(4 + e^2/r^2 r^6) r^m - 1 dr
≥ C ρ^2 - m e^2/ρ^2∫_0^ρ e^- 2/r^2r^m - 7 dr
≥ C ρ^2 - m e^2/ρ^2 2^m/2 - 4Γ(3 - m/2, 2/ρ^2),
where Γ(s, x) is the incomplete gamma function
Γ(s, x) = ∫_x^∞ t^s - 1 e^- t dt.
Recalling now the asymptotic Γ(s, x) x^1-s e^x→ 1 as x →∞, we deduce that
N_T, π, 0(ρ) ≥C/ρ^2( Γ(3 - m/2, 2/ρ^2) e^2/ρ^2( 2/ρ^2)^1 + m/2 - 3) =: 1/ρ^2η(ρ),
where η(ρ) → 1 as ρ→ 0, which yields
D_T, π, 0(ρ)/H_T, π, 0(ρ)→∞ as ρ→ 0 ,
as desired. We now wish to compute the semicalibrated term in the intrinsic planar frequency, namely L_T,π,0(ρ)/H_T, π, 0(ρ). We record the minimal surface equation for a radial function on ℝ^m:
div( ∇ f/√(1 + |∇ f |^2)) = 1/√(1 + (f')^2)( f''/(1 + (f')^2) + (m-1)/r f') .
Thus, we can estimate
L_T,P,0(ρ)/H_T, P, 0(ρ) = e^1/ρ^2/ρ^m+1√(e^2/ρ^2ρ^6 + 4)∫_0^ρr^m - 1e^- 2/r^2√(1+(f'(r))^2)/Π_i = 1^m√(1 + (∇ f · e_i)^2)[2(m-1)/r^4 + 4 - 6r^2/r^6 + 4 e^-2/r^2] dr
≥C e^1/ρ^2/ρ^m+1√(e^2/ρ^2ρ^6 + 4)∫_0^ρ r^m - 1e^- 2/r^2[2(m-1)/r^4 + 4 - 6r^2/r^6 + 4 e^-2/r^2] dr.
Splitting then the square bracket and analyzing the two integrands separately via the incomplete Gamma function again, one deduces that L_T,π,0(ρ)/H_T, π, 0(ρ) also diverges as ρ→ 0.
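The divergence of the frequency can also be observed numerically. The sketch below is an illustration added here (with the arbitrary choice m = 3; the common factor ω_{m-1} cancels in the ratio): it evaluates D_{T,π,0}(ρ)/H_{T,π,0}(ρ) directly from the formulas above and exhibits the expected growth, roughly like ρ^{-2}.

import numpy as np
from scipy.integrate import quad

m = 3                                            # any m >= 3; omega_{m-1} cancels in the ratio

def fp(r):                                       # f'(r) for f(r) = exp(-1/r^2)
    return 2.0 * np.exp(-1.0 / r**2) / r**3

def D(rho):                                      # intrinsic Dirichlet energy of graph(f)
    integrand = lambda r: fp(r)**2 / np.sqrt(1.0 + fp(r)**2) * r**(m - 1)
    val, _ = quad(integrand, 1e-6, rho)          # the integrand is utterly negligible below 1e-6
    return rho**(2 - m) * val

def H(rho):                                      # intrinsic L^2 height of graph(f)
    return np.exp(-2.0 / rho**2) / np.sqrt(1.0 + 4.0 * np.exp(-2.0 / rho**2) * rho**(-6))

for rho in [0.6, 0.5, 0.4, 0.3, 0.25, 0.2]:
    print(rho, D(rho) / H(rho), rho**2 * D(rho) / H(rho))   # last column roughly constant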
amsalpha
|
http://arxiv.org/abs/2409.03361v1 | 20240905090658 | Gaia/GSP-spec spectroscopic properties of gamma Doradus pulsators | [
"P. de Laverny",
"A. Recio-Blanco",
"C. Aerts",
"P. A. Palicio"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Bd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France
Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001, Leuven, Belgium
Department of Astrophysics, IMAPP, Radboud University Nijmegen, PO Box 9010, 6500 GL Nijmegen, The Netherlands
Max Planck Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
The third Data Release of the ESA Gaia mission has provided
a large sample of new gravity-mode pulsators, among which more than 11,600 are γ Doradus (γ Dor) stars.
The goal of the present work is to present the spectroscopic parameters of these pulsators estimated by the GSP-Spec module that analysed millions of RVS spectra. Such a parametrisation could help to confirm their nature and provide their chemo-physical properties.
The Galactic positions, kinematics, and orbital properties of these new pulsators were examined in order to define a sub-sample belonging to the Milky Way thin disc, in which these young stars should preferentially be found. The stellar luminosities, radii, and astrometric surface gravities were estimated without adopting any priors from uncertain stellar evolution models. These parameters, combined with the effective temperatures, spectroscopic gravities, and metallicities, were then validated by comparison with recent literature studies.
Most γ Dor stars are found to belong to the Galactic thin disc, as expected. It is also found that the derived luminosities, radii, and astrometric surface gravities are of high quality and have values typical of genuine γ Dor pulsators. Moreover, we have shown that the effective temperatures and surface gravities of γ Dor pulsators with high enough S/N spectra or slow to moderate rotation rates are robust. This allowed us to define a sub-sample of genuine slow-rotating γ Dor pulsators. Their effective temperatures are found between ∼6,500 and
∼7,800 K, their surface gravities around 4.2, and their luminosities and stellar radii peak at ∼5 L_⊙ and ∼1.7 R_⊙, respectively. The median metallicity is close to the Solar value, although 0.5 dex more metal-poor and metal-rich γ Dor stars are identified.
The [α/Fe] content is fully consistent with the chemical properties of the Galactic disc.
Gaia/DR3 spectroscopic properties of γ Dor stars therefore confirm the nature of these pulsators and allow one to chemo-physically parametrise a new large sample of such stars.
Moreover, future Gaia data releases should drastically increase the number of γ Dor stars with
good-precision spectroscopically derived parameters.
Gaia/GSP-spec spectroscopic properties of γ Doradus
pulsators
P. de Laverny 1
A. Recio-Blanco1
C. Aerts 2,3,4
P.A. Palicio1
Received ?? ; accepted ??
====================================================================================================================
§ INTRODUCTION
The European Space Agency Gaia mission, with the release of its third catalogue <cit.>, has already revolutionised different fields of astrophysics. In particular, our view of the Milky Way stellar populations
is being upgraded substantially <cit.>.
Regarding stellar physics and variable stars,
<cit.> assessed the fundamental parameters and mode properties of 15,602 newly found Gaia/DR3 gravity-mode
(g-mode hereafter)
pulsator candidates, among the more than 100,000 new pulsators along the main sequence identified from their photometric light-curves
by <cit.>.
These g-mode pulsators are low- and intermediate-mass main-sequence stars (masses between
1.3 and 9 M_⊙) and are ideal laboratories for
asteroseismology, within the broad landscape of such modern studies
<cit.>.
Meanwhile <cit.> extracted light curves assembled with the Transiting Exoplanet Survey Satellite <cit.> for more than 60,000 of the candidate pulsators discovered by <cit.>. They confirmed the pulsational nature for the large majority and found them to be multiperiodic, with about 70% of them even sharing the same dominant frequency in the totally independent DR3 and TESS data. By comparing the astrophysical
properties of the g-mode pulsators with those studied
asteroseismically from Kepler data by
<cit.> and by
<cit.>, <cit.> classified them either as
Slowly Pulsating B (SPB) or γ Doradus (γ Dor) candidate stars.
Our current work is focused on this last category of pulsating F-type dwarfs, covering masses
1.3 M_⊙≲ M ≲ 1.9 M_⊙. In the Hertzsprung-Russell diagram (HRD), they are
found in a rather small main-sequence area
<cit.>,
close to the cool limit of the δ Scuti instability strip <cit.>.
However, we do point out that many of the stars actually turn out to be hybrid pulsators when observed in high-cadence space photometry. Indeed a good fraction of these pulsators exhibit not only high-order g modes, but also acoustic waves known as pressure (or p) modes <cit.>. This makes their instability strip overlap with the one of the δ Scuti stars <cit.>.
One of the new products published with Gaia/DR3 is the set of stellar atmospheric parameters
derived from the analysis of the Gaia/Radial Velocity Spectrometer (RVS) spectra by the DPAC/GSP-Spec module <cit.>.
RVS spectra cover the Ca II IR domain (846–870 nm) and have a resolution around 11,500.
By automatically analysing these spectra, <cit.> parametrised about 5.6 million single low-rotating stars
belonging to the FGKM-spectral type. Hotter stars (>8000 K) or highly-rotating stars (more than ∼30-50 km.s^-1, depending on the stellar type) were disregarded for the /DR3 (this will be updated for the next Data Releases). These limitations come from the adopted reference grids upon which rely the parametrisation algorithms of .
The derived stellar atmospheric parameters are: the effective temperature T_eff, the surface gravity log(g), the global metallicity [M/H], and the enrichment in α-elements with respect to iron [α/Fe]. Moreover, up to 13 individual chemical abundances were also estimated for most of these stars. This analysis led to the first all-sky spectroscopic catalogue and the largest compilation of stellar chemo-physical parameters ever published. Moreover, radial velocities (RVs) of about 33 million stars were published in the DR3 catalogue <cit.>. All these data allow one to study the Galactic kinematics and orbital properties
of this huge number of stars, along
with their atmospheric and chemical characteristics,
keeping in mind that they belong
to various populations in the Milky Way. This multitude of data also facilitates constraining stellar evolution models for
a broad range of masses. In particular, the spectroscopic data give us the opportunity to unravel the properties of different kinds of variable stars, among which the g-mode pulsators
characterised photometrically by <cit.>.
One of the main goals of the present study is to apply
spectroscopic techniques to characterise the g-mode
and hybrid
pulsators identified in <cit.>
and <cit.>.
This perspective adds to their photometrically-deduced properties and may help future asteroseismic modelling of the most promising pulsators.
Some contamination by other or additional
variability could also occur, such as
rotational modulation. This is expected since
<cit.> and <cit.>
could infer only one secure frequency for many of these 15,602 candidates. Moreover, <cit.> found
a fraction of the g-mode
and hybrid
pulsators to reveal Ap/Bp characteristics in addition to their pulsational behaviour.
Therefore, some of them might be Ap/Bp pulsators with anomalous chemical abundances from spots, as previously found from Kepler space photometry <cit.>.
/ spectroscopic parameters could therefore help to confirm the nature of these pulsators and facilitate to build sub-catalogues of g-mode
and hybrid
pulsating stars with respect to their chemical properties.
In addition, the main properties of these stars can be constrained from the spectroscopy and compared with the values deduced from the photometry, such as their effective temperature, surface gravity, luminosities, radius, etc.
Another goal is to explore the metallicity (and, possibly, chemical abundances) of genuine g-mode
and hybrid
pulsators. This is an important input for asteroseismic modelling, once a sufficient number of oscillation frequencies has been identified<cit.>.
Aside from the Kepler sample of pulsators modelled by <cit.>, the confirmed g-mode
and hybrid
pulsators from <cit.> are currently under study
to derive their global parameters
(mass, convective core mass, radius, and evolutionary stage;
Mombarg et al., submitted), as well as
all their significant oscillation mode frequencies from high-cadence
TESS photometry. Future studies will point out whether their TESS data allow us to find period spacing patterns to assess their suitability for asteroseismology, as has been possible for stars in the TESS Continuous Viewing Zones <cit.>.
We note that the present work focuses only on the stars
or hybrid pulsators with dominant g modes
in the catalogue by <cit.>, for several reasons.
First, only ∼12% of the SPB candidates
in that paper have a DR3 radial
velocity () and, when available, the associated uncertainties are quite large,
suggesting possible problems in their RVS spectra.
Moreover, most of the SPB candidates
are too faint to have a high enough S/N to be analysed meaningfully with GSP-Spec
. Finally, we recall that GSP-Spec
was initially constructed to achieve proper parametrisation of slowly-rotating rather cool stars. In particular,
the reference training grid does not contain model spectra for stars
hotter than ∼8000 K and the lines identified in the stellar spectra are assumed not to be too broadened. These restrictions led to the rejection of A-type or hotter stars and/or to lower quality parametrisation for the
fastest
rotators, as indicated by specific quality flags (and even rejection for the extreme cases). This is confirmed by examining the few SPB
stars with available parameters: most of them are indeed flagged as doubtful. We therefore postpone the spectroscopic study of these SPB to the next Data Releases in which the spectral parametrisation of hotter and/or fast rotators, such as many of the and SPB pulsators <cit.>,
will be optimised.
This article is structured as follows.
In Sect. <ref>, we present the sample of pulsators with available spectroscopic data and we study their spatial distribution, kinematics, and orbital properties in the Milky Way. This led to the definition of a sub-sample of stars with very-high probability of being genuine stars belonging to the Galactic thin disc.
Subsequently, we present in Sect. <ref> the physical parameters of the candidates analysed by the module and discuss their properties in Sect. <ref>.
Our conclusions are summarised in Sect. <ref>.
§ THE GAIA/DR3 SPECTROSCOPIC SAMPLE OF γ DOR STARS
Among the 11,636 γ Dor pulsator candidates presented in <cit.>, 4,383 stars (38%) have a published Gaia/DR3 radial velocity <cit.> and only 650 of them are found in the GSP-Spec catalogue.
These numbers can be explained by taking into account that (i) 58% of these pulsating stars are too faint to have a high enough S/N spectrum,
necessary to estimate their RV and/or to be parametrised by GSP-Spec (G ≤ 13.5 mag for this latter case) and (ii) many of them are too hot, preventing their parametrisation (see the limitation caused by the reference grid hot boundary, described in the introduction).
In the following, we will discuss the Galactic properties of these candidate stars, derived from their available distances and RVs, in order to
define a sub-sample of bona-fide γ Dor stars based only on kinematic and dynamical criteria.
§.§ Spatial distribution, kinematics, and Galactic orbits
We derived the spatial (Cartesian coordinates) and kinematic properties of all the 4,383 candidate stars from their coordinates, proper motions and RVs, adopting the
distances of <cit.>. Their Galactic orbital properties (eccentricity) were computed as described in <cit.> using the Solar Galactic constants presented in <cit.>.
We also adopted for the LSR velocity of the Sun V_ LSR=238.5 km/s.
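As an illustration of this step (a minimal sketch, not the exact pipeline used for the paper: the input values, the solar position, the solar peculiar motion, and the sign convention for V_ϕ are assumptions made here for a single fictitious star, while V_LSR = 238.5 km/s is the value quoted above), Galactocentric velocities can be derived with astropy as follows.

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric, CartesianDifferential

V_LSR = 238.5 * u.km / u.s                        # LSR velocity adopted in the text

star = SkyCoord(ra=123.4 * u.deg, dec=-45.6 * u.deg, distance=512.0 * u.pc,
                pm_ra_cosdec=-3.2 * u.mas / u.yr, pm_dec=1.7 * u.mas / u.yr,
                radial_velocity=14.2 * u.km / u.s, frame="icrs")   # fictitious example star

# assumed solar position and peculiar motion, for illustration only
galcen = Galactocentric(galcen_distance=8.25 * u.kpc, z_sun=20.8 * u.pc,
                        galcen_v_sun=CartesianDifferential([11.1, V_LSR.value + 12.24, 7.25] * u.km / u.s))

g = star.transform_to(galcen)
x, y = g.cartesian.x, g.cartesian.y
vx, vy, vz = g.velocity.d_x, g.velocity.d_y, g.velocity.d_z

R = np.hypot(x, y)
v_R = ((x * vx + y * vy) / R).to(u.km / u.s)
v_phi = (-(x * vy - y * vx) / R).to(u.km / u.s)   # sign chosen so that Galactic rotation is positive
v_pec = np.sqrt(v_R**2 + (v_phi - V_LSR)**2 + vz**2)   # Toomre-style velocity relative to the LSR
print(v_phi, v_pec)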
Among all these stars with Galactic data, 560 were parametrised by GSP-Spec. We note that the GSP-Spec parametrisation is available for 90 more stars (see Sect. <ref>) because GSP-Spec parametrised some spectra whose RV was finally not published within Gaia/DR3, hence their Galactic properties were not computed.
The following quality selections were then applied to define a sub-sample of 2,721 stars (405 of them with parameters) with high-quality Galactic parameters. (1) The best astrometric data were selected thanks to the ruwe parameter (ruwe < 1.4) and the identification of the non-spurious
solutions <cit.>.
This filters out 521 stars.
(2) We then rejected 152 stars having a distance uncertainty larger than 10%.
(3) The RV determination of several of these candidates was found to be of poor quality, mostly because of the low S/N of their spectra. We therefore disregarded 1,258 stars having a relative RV error larger than 50%.
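A sketch of these cuts (illustrative only: the input table and its column names are hypothetical, and the astrometric-fidelity flag used to remove spurious solutions is not shown) could read:

import pandas as pd

df = pd.read_csv("gdor_candidates.csv")          # hypothetical input table

good = (
    (df["ruwe"] < 1.4)                                   # (1) astrometric quality
    & (df["dist_err"] / df["dist"] < 0.10)               # (2) distance uncertainty < 10%
    & (df["rv_error"] / df["rv"].abs() < 0.50)           # (3) one possible reading of the 50% RV criterion
)
sample = df[good]
print(len(sample), "stars kept")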
The Galactic location of this high-quality sub-sample of 2,721 candidates with astrometric and information is presented in Fig.<ref>. The colour-code of the middle and right panels represents their rotational velocity in the Galactic plane (V_ϕ, whose typical value for thin disc stars in the Solar vicinity is around ∼240 km/s). In the following, we will adopt for this last quantity the
velocity of the LSR at the Sun's position.
The location of the same stars in a Toomre diagram, colour-coded with their Galactic orbit eccentricity is shown in Fig.<ref>. The regions enclosed by the circular dotted lines
denote those populated by stars with thin disc kinematics, i.e. typical total velocity found within ±40-50 km/s around the LSR value
<cit.>.
From these figures,
it is clear that most stars have thin disc kinematics as expected, although some candidates belong actually to the Galactic thick disc or halo (low V_ϕ-values, high Galactic latitudes, large eccentricities or total velocities).
Among those having a total velocity larger than ±50 km/s compared with the LSR value (i.e. too large velocities for belonging to the thin disc, first dotted line in Fig. <ref>), very few have published atmospheric parameters. The S/N of their spectra is indeed low
(around 25), leading to rather large uncertainties. Moreover, their rotational rate is high, and their metallicity could be too low for thin disc stars; but, more importantly, the associated uncertainties are very large. All of this reveals that their spectra were probably not properly analysed.
In any case, the fact of not belonging kinematically to the thin disc is contradictory to our understanding of the evolutionary stage of this class of variable stars and they will be rejected from the studied sample (see below).
On the contrary, most of the brightest candidates with parameters (i.e. with the highest-S/N spectra) are found closer to the Galactic plane, with kinematics and orbital properties typical of thin disc stars (circular orbits, V_ϕ∼ 215-260 km/s and/or |V_ Tot-V_ LSR| ≲ 25 km/s).
§.§ candidates belonging to the thin disc
stars are known to be late A- to early F-type stars, located on the main-sequence below the
classical
instability
strip.
As already mentioned, they
have masses between ∼1.3 and ∼1.9 M_⊙ <cit.> and have therefore rather young ages. We refer to
<cit.> who deduced the ages of the 37 best characterised stars from asteroseismic modelling of their identified oscillation modes and found ages ranging from ∼0.15 up to 2 Gyr.
Similarly, <cit.> also report asteroseismic ages for 490 that are always smaller than 3.0 Gyr, their mean age being close to 1.5 Gyr with a dispersion of 0.5 Gyr.
On the contrary, thick disc stars are older than ∼8 Gyr <cit.> whereas halo stars are even older. Therefore, most of the Galactic stars are expected to belong to the thin disc of the Milky Way.
One can thus use the above described kinematic and orbital properties of the candidates from <cit.> to select those with the highest probability of being bona-fide pulsators.
This is an entirely complementary and independent selection to the one based on the DR3 or TESS light curves.
We have therefore selected all the above candidates having a high probability to belong to the thin disc, i.e. those having close to circular orbits (eccentricity lower than 0.2)
and |V_ Tot-V_ LSR|<25 km/s. These kinematic criteria[Adding a filter based on the distance from the Galactic Plane did not modify this selection.] led to the selection of
2,245 candidates belonging to the Galactic thin disc (i.e. 83% of the stars with high-quality Galactic parameters), 385 of them having parameters. This sub-sample, called the Thin Disc-sample hereafter, is discussed below. We can therefore conclude that most stars of the initial sample are thin disc members.
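The subsequent thin-disc selection is equally compact; a sketch, assuming the Galactocentric velocity components and orbital eccentricities have already been computed and stored under hypothetical column names, is:

import numpy as np

V_LSR = 238.5  # adopted LSR velocity of the Sun (km/s)

# high_quality is the table from the previous sketch; V_R, V_phi, V_Z in km/s.
v_tot = np.sqrt(high_quality["V_R"]**2 + high_quality["V_phi"]**2 + high_quality["V_Z"]**2)
thin_disc = high_quality[(high_quality["ecc"] < 0.2) & (np.abs(v_tot - V_LSR) < 25.0)]
print(len(thin_disc))  # 2,245 stars in the Thin Disc-sample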
§ PHYSICAL PARAMETERS OF THE PULSATORS
All the 650 pulsators with published parameters have an effective temperature (T_eff) and a surface gravity (log g). In addition, one can also access their global metallicity ([M/H]) and abundances in α-elements with respect to iron ([α/Fe]) for 602 and 595 of them, respectively. We remind that the global metallicity is estimated from all the available atomic lines in the RVS spectra and is a good proxy of [Fe/H]. Similarly, α-element abundances are derived from all the available lines of any α-elements. The [α/Fe] abundance ratio is, however, strongly dominated by the huge infrared calcium lines that are present in the RVS wavelength domain and is thus strongly correlated to [Ca/Fe].
Moreover, because of the complex analysis of these rather hot and, usually,
fast-rotating stars, very few other atomic lines are present in their RVS spectra. Therefore, individual chemical abundances were derived for only very few tens of them. Only 20 stars have an estimate of their and the number of stars with other published chemical abundances is even lower. As a consequence, we will only consider hereafter the abundance ratio for this sample.
We calibrated all the above mentioned atmospheric parameters and abundances as a function of , adopting the prescriptions recommended in <cit.>.
In addition, we remind that, associated to the parameters, there are several quality flags () that have to be considered to assess the quality of this parametrisation.
We also adopted other parameters derived from the analysis of the RVS spectra. For example,
we partially used the /DR3 spectral type provided by the spectraltype_esphs parameters. In addition,
409 of the candidates have a published line-broadening measurement <cit.>, confirming the fast-rotating nature for most of them. We refer to <cit.> for a detailed study of the vbroad properties of the g-mode pulsators.
Even if vbroad was in the end not published for most stars in /DR3,
has published three quality flags <cit.> that depend directly on vbroad (internally delivered within DPAC for the spectra analysis). The possible biases in the parameters that could be induced by rotational line-broadening can therefore be explored by future users thanks to these three vbroadTGM flags.
Contrarily to vbroad,
these flags are available for the whole sample and are used below to complement the rotational broadening information, when necessary. In the following and for convenience, all these line-broadening quantities will be referred to as rotational velocities.
Finally, we remind that the /DR3 spectroscopic parameters were derived by the module by assuming that the rotation rate of the analysed stars are rather low. Therefore, highly rotating stars were rejected: depending on the
stellar type, vbroad limitations are around ∼30-40 km.s^-1, and the parametrisation quality degrades quickly above ∼25 km.s^-1.
Because of this parametrisation limitation, we warn the reader that the candidates parametrised by have thus lower rotational velocities than typical values for these variable stars.
Indeed, the vbroad distribution of our stars has a mean value around ∼25 km.s^-1, associated to a standard deviation equal to ∼10 km.s^-1, and a maximum value of 58 km.s^-1. As a comparison,
these stars are known to rotate at 40-100 km.s^-1 and some can reach up to ∼150 km.s^-1, see for instance <cit.>. This rotational velocity of the stellar atmosphere is related to the internal rotation rate, as discussed for instance by <cit.>. As a consequence, we are conscious that our sample is biased towards pulsators with rather low rotation rates, and should therefore not be fully representative of this specific class of variable stars.
Stellar luminosities and radii:
To complement this spectral parametrisation, we computed the luminosity (L_⋆) and radius (R_⋆)
for each star.
For that purpose, we first estimated the extinction E(B_P-R_P) as the difference between the observed (B_P-R_P) colour and a theoretical one. The latter was calculated from T_eff, log g, and [M/H], inverting the <cit.> relation that predicts stellar colour from atmospheric parameters <cit.>. This procedure did not converge for a few stars, hence these were rejected hereafter.
We then estimated the coefficients k_TGMA = A_G / E(B_P-R_p), A_G being the absorption in the G-band. These coefficients depend on the four atmospheric parameters and have been estimated thanks to the tables provided with the stellar parameters[https://www.cosmos.esa.int/web/gaia/dr3-astrophysical-parameter-inference]. The value of A_G then allows to deduce the absolute magnitude in the G-band
from the DR3 G-magnitude and the <cit.> geometric distances.
We derived the luminosities, adopting the bolometric corrections (BC) from <cit.>. We note that the considered relations to estimate k_TGMA
and BC do not depend on . When available, we adopted the relation of <cit.> to include the α-element content into the global metallicity.
Finally, combining the luminosity with the effective temperature, one directly obtains the stellar radius. The quality of these radii, simply computed from photometry, distances and spectroscopic parameters, is excellent.
We refer to de Laverny et al. (2024, to be submitted) and <cit.> for a detailed comparison with interferometric and/or asteroseismic radii, that confirmed the high-quality of our R_⋆ estimates.
One could then also get the stellar mass (M_⋆) from the surface gravities, but we favoured to fix this quantity thanks to the known typical masses of (see below the discussion on the adopted ). For all these parameters, the uncertainties were estimated by performing 1000 Monte-Carlo
realisations, propagating the uncertainties on each atmospheric parameters (that reflect the spectra), distance
and magnitudes.
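The whole chain from photometry and distance to L_⋆ and R_⋆ can be summarised in a few lines. The sketch below assumes that the intrinsic colour, the k_TGMA coefficient and the bolometric correction are interpolated from the published tables through placeholder functions (they are not actual library calls):

import numpy as np

M_BOL_SUN = 4.74   # reference solar bolometric magnitude
T_SUN = 5772.0     # K

def luminosity_and_radius(G, dist_pc, bp_rp_obs, teff, logg, mh, alphafe):
    # Colour excess from the observed minus the model-predicted intrinsic colour.
    e_bprp = bp_rp_obs - intrinsic_bp_rp(teff, logg, mh)        # placeholder relation
    A_G = k_tgma(teff, logg, mh, alphafe) * e_bprp               # G-band absorption
    M_G = G - 5.0 * np.log10(dist_pc) + 5.0 - A_G                # absolute G magnitude
    M_bol = M_G + bc_g(teff, logg, mh)                           # placeholder BC tables
    L = 10.0 ** (-0.4 * (M_bol - M_BOL_SUN))                     # luminosity (L_sun)
    R = np.sqrt(L) * (T_SUN / teff) ** 2                         # Stefan-Boltzmann (R_sun)
    return L, R

# Uncertainties: call this function on ~1000 Monte-Carlo draws of the inputs
# and take the dispersion of the resulting L and R distributions.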
Effective temperatures:
Since stars are known to be early-F spectral type and in order to define a sub-sample of
high-quality parametrised stars, we first checked their spectral type provided by the /DR3.
Among the 650 candidates parametrised by , 562, 82 and 6 were found by the DPAC/ESP-HS module to belong to the 'F', 'A' or 'B' spectral types, respectively. The also confirmed the too hot temperature with respect to typical stars of a few other stars <cit.>, hence they were filtered out.
Moreover, we also found that ∼6% of the candidates have a much cooler effective temperature, not compatible with an early spectral type. All the spectra of these outliers suffer from large uncertainties and/or very large rotational velocities and/or low S/N that may lead to an erroneous parametrisation. Therefore, all these too cool stars were also rejected. The remaining 598 spectroscopic candidates have a (B_p-R_p)_0 colour (corrected from extinction by us) fully compatible with
their effective temperatures. The median of their (B_p-R_p)_0 colour is 0.5 and the associated dispersion is found to be extremely small (0.04 mag.), confirming their early F-type nature.
These 598 stars will be called F-type sample, hereafter.
Stellar surface gravities: Knowing the stellar luminosity, the effective temperature and adopting a typical mass,
the surface gravity can simply be estimated for almost all the pulsators (called _ Lum, hereafter). Practically, we randomly chose the mass of each star within the mass range 1.3 to 1.9 M_⊙ <cit.>, assuming an uniform distribution. The associated mass uncertainty has been fixed to half of this range (±0.3 M_⊙).
We note that varying the mass over the entire range covered by genuine stars changes our _ Lum estimates by about 0.1 dex and, thus, do not affect our conclusions. The _ Lum uncertainties were computed from 1000 Monte-Carlo realisations, propagating the uncertainties on L_⋆, and M_⋆.
We finally point out that 30 stars (among the 650) have no _ Lum because no extinction was available for them (see above).
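The luminosity-based gravity then follows from log g = log g_⊙ + log(M/M_⊙) + 4 log(T_eff/T_eff,⊙) − log(L_⋆/L_⊙). A minimal sketch, drawing the mass uniformly over the 1.3–1.9 M_⊙ range as described above, is:

import numpy as np

rng = np.random.default_rng()
LOGG_SUN, T_SUN = 4.44, 5772.0

def logg_from_luminosity(L, teff, n_mc=1000):
    masses = rng.uniform(1.3, 1.9, n_mc)   # typical masses of these pulsators (M_sun)
    logg = LOGG_SUN + np.log10(masses) + 4.0 * np.log10(teff / T_SUN) - np.log10(L)
    # (a full treatment would also propagate the L and T_eff uncertainties)
    return logg.mean(), logg.std()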
It was found that most of these log g_ Lum are in very good agreement with the spectroscopic surface gravities for stars with a rather low rotational velocity (vbroad ≲ 15-20 km/s) or for stars with a slightly larger rotational rate but with high-S/N spectra (S/N ≳ 100).
However, larger discrepancies between the two surface gravity estimates are found for the highest rotators and/or stars with low spectra (mean difference of log(g_/g_ Lum)∼0.35 dex, with a standard deviation of 0.4 dex). This results from the DR3 pipeline that is optimised for non-rotating stars and some parameter biases could exist for stars with large rotation rates[This will be addressed for /DR4.]
In the following and in order to avoid surface gravities potentially affected by the stellar rotation,
we adopted these _ Lum and their associated uncertainties. Thanks to this procedure, we have a sub-sample of about 600 pulsators with rather accurate surface gravity (found in the range from ∼3.5 to ∼4.5) and effective temperatures typical of stars, according to <cit.>. We emphasize that all the above derived quantities do not rely on any stellar isochrones, but only on astrometric and photometric data and RVS spectra.
Global metallicities: We simply adopted the calibrated metallicities for all the pulsators having a and a _ Lum, as defined above.
Parameter uncertainties: The RVS spectra of all these pulsators belonging to the
F-type sample
have rather low S/N ratios. The median S/N is 34 and only 9% of the spectra have S/N > 100.
The resulting median uncertainties[Estimated by propagating the RVS spectra flux errors on the parameter determination through Monte-Carlo realisations <cit.>.] on (T_eff, log g, [M/H], L_⋆, R_⋆) are therefore rather high (182 K, 0.13 dex, 0.21 dex, 0.13 L_⊙, 0.11 R_⊙), with a dispersion, although still reasonable, of (62 K, 0.02 dex, 0.07 dex, 0.07 L_⊙, 0.04 R_⊙).
§ PROPERTIES OF THE PULSATORS
The distributions of the main stellar parameters derived above, T_eff, log g (both derived from the spectrum analysis and from the luminosity), [M/H], L_⋆, and R_⋆, are shown in Fig. <ref>.
In the different panels of this figure, we considered the sub-sample of 371 slowly-rotating stars belonging to both the F-type and Thin Disc samples, that is those having effective temperature typical of stars and having typical thin disc kinematics as defined in Sect. <ref>. This sub-sample will be called F-type Thin Disc (FTD) hereafter. It is shown as light-blue histograms in Fig. <ref> and should contain good candidate stars.
For comparison purposes, we have over-plotted in Fig. <ref> the parameter distributions derived by three recent studies that provide parameters for large numbers of stars. First, <cit.> report , and
estimated from the analysis of high-resolution spectra (R ∼ 85,000)
for 77 bona-fide stars with asteroseismic modelling from Kepler observations. None of these stars were found in the F-type sample. Secondly, <cit.> published luminosities and asteroseismic radii of 490 derived from and Kepler observations.
Their parameter distributions are shown as green lines in Fig. <ref>. There is only one star of <cit.> that was parametrised by , but it is not included in the F-type sample because of its too large rotational broadening and associated low-quality flags.
Finally, we also show the , , L_⋆ and R_⋆ distributions of the candidates of <cit.> as another comparison. Some of their stars suffer from rather large uncertainties on their parameters and we therefore filtered out the stars with and errors larger than 100 K and 0.2 dex, respectively, before constructing the black histograms. Moreover, stars with relative uncertainties in L_⋆ and R_⋆ larger than 25% and 50% were also rejected.
Some other stars were also analysed by high-resolution spectroscopy as, for instance, in <cit.>. The spectra collected by these authors have a resolution of 32 000, 85 000, and 25 000/45 000, respectively, but their sample are much smaller than ours. None of our stars are found in these samples but one discusses below the atmospheric parameter properties derived by these studies with respect to the ones (see also the discussion about Fig. <ref>).
Regarding the effective temperatures, it can be seen in Fig. <ref> that
our distribution is nearly compatible with the estimates reported in the literature.
We do report a larger number of cooler stars than <cit.> and <cit.>. We checked that most of these cooler stars had bad quality vbroadTGM flags, meaning that their parametrisation was not optimal because of the broadened lines in their spectra. Moreover, we remind that most RVS spectra are of rather low quality, leading to large uncertainties. As a consequence, if one selects only
the 151 stars belonging to the F-type sample and having a T_eff error less than 200 K (the mean S/N of the selected spectra is then around 60) plus their three vbroadTGM flags strictly smaller than 2, we then obtain a T_eff distribution fully compatible with the two comparison ones. This High-Quality (HQ) sub-sample of stars is shown as dark-blue histograms in Fig. <ref>. Our distribution is also fully compatible with the ones derived from high-resolution spectroscopy (see the above cited references), if one excludes binaries and/or hybrid stars: their effective temperatures are always found in the ∼6700 – ∼8000 K range.
Our stellar luminosity distributions (FTD and HQ sub-samples) are peaked around 5 L_⊙ with few bright stars up to ∼25 L_⊙. They are close to the distribution of the two comparison samples (similar peak in L_⋆) although <cit.> and <cit.> report a larger number of more luminous stars. We have checked that considering only stars with good astrometric ruwe parameters (ruwe < 1.4) in the different comparison samples
(about 10% stars would be rejected) does not modify these distributions. Moreover, we remind that these two other studies computed their stellar luminosity adopting a /DR3 interstellar reddening
that could differ from our own absorption estimate. Since we did not find any
specific differences between these two reddening flavours, the lack of high-luminosity stars in our sample could be due to selection bias effects: either a lack of too hot and, hence more luminous stars, not parametrised by ; and/or a lack of low surface gravity stars, hence with high-radius and luminosity, as seen in the and radius panels of Fig. <ref>.
The FTD and <cit.> spectroscopic surface gravities are
in good agreement (left panel, second row in Fig.<ref>). surface gravities derived
by other analysis of high-resolution spectra studies also agree, although we note that the samples of <cit.> have within ∼3.6–∼4.0, i.e. surface gravities slightly smaller than ours and those of <cit.> that reach up to =4.4. However, more numerous lower gravity values are seen in Fig. <ref> compared to <cit.>. Such stars with a surface gravity lower than ∼3.5 should have left the main sequence (or are close to this evolutionary stage).
The agreement between the HQ spectroscopic gravities and those of <cit.> is however excellent, confirming again that low-quality and line-broadened spectra are more difficult to parametrise.
Moreover, the results from <cit.> are also in good agreement with our surface gravities estimated from the stellar luminosity (_ Lum shown in the right panel, second row) and assuming masses in the range 1.3-1.9 M_⊙. This is quite normal since the <cit.> values were estimated by the DPAC GSP-phot module <cit.> from the luminosity, and stellar isochrones, a rather similar method to ours, except for using isochrones. From these different comparisons, we conclude that surface gravities derived from the stellar luminosity and assuming typical masses should be preferred over the spectroscopic ones because of the difficulty to analyse spectra of fast-rotating stars, except if one considers the HQ stars for which the spectroscopic gravities are excellent.
As for the stars' mean metallicities, our FTD sample covers a larger range in [M/H] than the comparison samples. More importantly, it reveals an excess of low-metallicity stars with respect to the spectroscopic sample of <cit.>. The latter peaks around -0.2 dex, whereas ours peaks at about -0.7 dex.
We note that the confirmed of <cit.> with parameters derived from high-resolution spectra have that cover a distribution close to the one of <cit.>.
We remind that such low-metallicities are not expected to be numerous for thin disc stars. This shift towards lower metallicities could again be partially explained by the stellar rotation and/or low-quality RVS spectra, as it might induce a bias in the parametrisation.
Again, rejecting such complex spectra and considering HQ sub-sample stars with a uncertainty less than 0.25 dex leads to a metallicity distribution that is more compatible with <cit.>.
We still have however more stars with higher and lower metallicities than them by about ±0.5dex. They could be present since our sample has a larger spatial coverage in the Milky Way.
Finally, the compared stellar radius FTD and HQ distributions are in very good agreement with the asteroseismic radii determined by <cit.>, which are expected to be the most accurate ones.
The spectroscopic radii, whatever the sample considered, are indeed very well representative of those expected for stars.
Our radius median value is close to 1.7 R_⊙ and 90% of the sample is found in the range
1.35 – 2.35 R_⊙. The largest stars in our sample have R∼3.5-4 R_⊙, so they must be rather close to leave the main-sequence, which is confirmed by their
effective temperatures and high luminosities (≃7000 K and L ≃25 L_⊙).
In summary, our reported , , , L_⋆ and R_⋆ are in very good agreement with recent literature values, in particular when considering the High-Quality sub-sample defined by selecting high-quality spectra or low-rotating stars. Furthermore, our L_⋆,
_ Lum, and R_⋆ values are trustworthy, even without considering the membership of the Galactic thin disc as an extra good measure of the star being a genuine pulsator and/or their spectra properties.
Moreover, since we have shown that the agreement on the effective temperature and mean metallicity is improved by
selecting high-quality stellar spectra or stars without large rotational broadening velocities,
we show these HQ stars
in a luminosity - effective temperature diagram (Fig. <ref>),
colour-coded with their metallicity.
This figure is similar to those found in the literature, as for example in <cit.>. Excluding binaries and hybrid stars, all our stars are very well concentrated in the same small region of the L- diagram (or -, in some of the above cited works).
We note that there are no stars in this figure hotter than ∼7750 K[A few hotter ones can be found in the literature <cit.>]. This bias is caused by the parametrisation that was optimised for FGKM-type stars (we recall that the reference grid is based on spectra models cooler than 8000 K).
We therefore cannot exclude that hotter pulsators could exist, but they were rejected during the parametrisation (this will be updated for DR4).
Finally, it can be seen in Fig.<ref> that more metal-rich are found at higher , whatever their luminosity is. This could be partly due to some possible parametrisation biases: for instance, metal-poor hot star spectra show very few lines and are thus more difficult to parameterise particularly when their rotation rate is high, explaining probably the absence of such stars in the present sample. But this could also be real and could be a signature of the different evolution of stars with slightly different masses and metallicity. For instance, by exploring BaSTI evolutionary tracks <cit.>, it can be seen that metal-rich stars with masses around 1.6-1.9 M_⊙ appear hotter than lower mass (∼1.3 M_⊙) more metal-poor stars. More specifically, we have estimated that a difference of ∼0.3 M_⊙ implies a shift in of similar amplitude as a difference of ∼0.7 dex in metallicity. This rather well corresponds to what is seen in Fig.<ref>.
The same HQ sub-sample, but filtering out stars with uncertainties greater than 0.15 dex, is plotted in an [α/Fe] versus [M/H] diagram in Fig. <ref>. Such a filtering again reduces the number of stars
but reveals more accurate chemo-physical properties of pulsators.
We remind that only the global abundances (which is a good indicator of [Ca/Fe] for RVS spectra) of these stars parametrised by are available in the DR3 catalogue because of too low statistics of individual chemical abundances (see above). Fig. <ref>
confirms that the selected sample has chemical properties consistent with the Galactic disc population, that is, a constant decrease of [α/Fe] with the metallicity for [M/H] > -1.0 dex. Moreover, since the membership
of most of these stars to the thin disc was based on purely kinematics and dynamical criteria, such a versus trend is an independent proof that the chemo-physical properties of the pulsators can be safely adopted.
Thus, the DR3 analysis of these pulsators provides useful physical and/or chemical parameters once an optimised filtering of the poorly parametrised spectra (low S/N or too fast rotating stars) is performed. The parameters of these HQ stars are provided in an electronic table whose content is presented in Table <ref>.
§ CONCLUSIONS
We have studied the DR3 spectroscopic parameters derived from the analysis of the RVS spectra for the large sample of candidate pulsators composed by <cit.> and confirmed in <cit.>. About 38% of these stars have a published radial velocity and ∼6% of them were actually analysed by the module in charge of analysing their spectra.
Thanks to the available radial velocities and astrometric information, we have been able to compute kinematic and orbital information for all these stars. This allowed us to identify that 2,245 of them (i.e. most of the candidates with high-quality kinematics) belong to the thin disc of the Milky Way, which is expected since these gravity-mode pulsators should have typical ages lower than 2-3 Gyr.
We then computed their luminosity and stellar radius from astrometric and photometric data, adopting the effective temperature
and without considering any stellar evolutionary models or isochrone priors.
A comparison with recently published values of well studied stars reveals
that the derived luminosities, stellar surface gravities derived from L_⋆ and
assuming typical masses, as well as the stellar radii, are of high quality.
Moreover, a strict filtering rejecting stars with large uncertainty
(caused by too low RVS spectra) or high rotational velocity led to pulsators
with the best derived parameters, including , , and . All of these observables were found to be fully consistent with typical values of genuine slowly-rotating pulsators. Indeed, the High-Quality stars have effective temperatures between ∼6,500 and
∼7,800 K and surface gravities around 4.2.
Their luminosities and stellar radii peak at ∼5 L_⊙ and ∼1.7 R_⊙, whereas their metallicity distribution is centred close to the Solar value, covering the range [-0.5, +0.5] dex.
Their abundance properties are consistent with the chemical properties of the Galactic disc population. We note that the final number of parametrised stars is smaller compared to the initial sample because of the low S/N spectra of many of them, together with the fact that most of them are fast rotators, for which the analysis pipeline was not optimised in DR3. Anyway, the number of newly spectroscopically parametrised pulsators presented in this work is about a factor of two larger than in previous studies.
Finally, it can be concluded that the analysis of stars provides a significant added value to the study of these gravity-mode pulsators, delivering their physical and chemical properties.
This will be even more important with the future data releases, for which the S/N of the RVS spectra and the number of analysed stars will be significantly increased.
Indeed, it is expected that the S/N increase between DR3 spectra and those released in DR4 and DR5 will correspond to factors of √(2) and 2, respectively.
Moreover, it is also anticipated that the analysis of fast-rotating stars by the module will be improved by considering reference grids of synthetic spectra representative of a wide variety of stellar rotational velocities, contrarily to the present study that is biased towards slowly-rotating pulsators. Finally, hot star spectra will also be considered for reference, allowing a better parametrisation of stars with high effective temperatures. These anticipated future improved analyses of gravity-mode pulsators performed by the module will therefore be
of prime interest as input for asteroseismology of such stars. They will indeed allow us to define a much larger and thus more statistically significant number of bona-fide stars with physical and chemical properties. Furthermore, they will also allow us to study the spectroscopic parameters of hotter gravity-mode pulsators, such as the Slowly Pulsating B stars, with the aim of improving their asteroseismic modelling <cit.>.
This work has made use of data from the European Space Agency (ESA)
mission (https://www.cosmos.esa.int/gaia), processed by the Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Multilateral Agreement.
This work has also made use of the SIMBAD database, operated at CDS, Strasbourg, France <cit.>, the IPython package <cit.>, NumPy <cit.>, Matplotlib <cit.>, Pandas and TOPCAT <cit.>.
PdL and ARB acknowledge funding from the European
Union’s Horizon 2020 research and innovation program under SPACE-H2020
grant agreement number 101004214: EXPLORE project.
CA acknowledges funding
from the European Research Council (ERC)
under the Horizon Europe programme (Synergy Grant agreement
number 101071505: 4D-STAR project). While partially funded by the European
Union, views and opinions expressed are however those of the authors
only and do not necessarily reflect those of the European Union or the
European Research Council. Neither the European Union nor the granting
authority can be held responsible for them.
Finally, we are
grateful to the anonymous referee for their constructive
remarks.
|
http://arxiv.org/abs/2409.03327v1 | 20240905080347 | Normal forms in Virus Machines | [
"A. Ramírez-de-Arellano",
"F. G. C. Cabarle",
"D. Orellana-Martín",
"M. J. Pérez-Jiménez"
] | cs.CL | [
"cs.CL",
"cs.FL",
"68Q07 (Primary) 68Q10, 68R01 (Secondary)",
"F.0; F.1.1"
] |
Normal forms in Virus MachinesSupported by the Zhejiang Lab BioBit Program (Grant No. 2022BCF05).
Antonio Ramírez-de-ArellanoCorresponding author: Universidad de Sevilla, Avda. Reina Mercedes s/n, Seville, 41012, Spain
Dept. of Computer Science and Artificial Intelligence
University of Seville
SCORE Laboratory, I3US, University of Seville
[email protected]
Francis George C. CabarleQUAL21 008 USE
project, “Plan Andaluz de Investigación,
Desarrollo e Innovación” (PAIDI) 2020 and “Fondo Europeo de Desarrollo Regional” (FEDER) of the European Union, 2014-2020 funds.
SCORE Laboratory, I3US, University of Seville
Dept. of Computer Science
University of the Philippines Diliman
[email protected] [email protected]
David Orellana-Martín
Dept. of Computer Science and Artificial Intelligence
University of Seville
SCORE Laboratory, I3US, University of Seville
[email protected]
Mario J. Pérez-Jiménez
Dept. of Computer Science and Artificial Intelligence
University of Seville
SCORE Laboratory, I3US, University of Seville
[email protected]
In the present work, we further study the computational power of virus machines (VMs in short).
VMs provide a computing paradigm inspired by the transmission and replication networks of viruses.
VMs consist of process units (called hosts) structured by a directed graph whose arcs are called channels and an instruction graph that controls the transmissions of virus objects among hosts.
The present work complements our understanding of the computing power of VMs by introducing normal forms; these expressions restrict the features in a given computing model.
Some of the features that we restrict in our normal forms include (a) the number of hosts, (b) the number of instructions, and (c) the number of virus objects in each host.
After we recall some known results on the computing power of VMs, we give our normal forms, such as bounds on the size of the loops in the network, proving new characterisations of families of sets, such as the finite sets, semilinear sets, or NRE.
§ INTRODUCTION
In the present work, we consider some normal forms for virus machines (VMs), a computing paradigm introduced in <cit.>; VMs are unconventional and natural computing models inspired by networks of virus replication and transmission.
More information on unconventional and natural computing is found in <cit.> and <cit.>, respectively.
From <cit.> it is shown that VMs are Turing complete, that is, they are algorithms capable of general-purpose computations.
From such works some VMs for computing classes of (in)finite sets of numbers are also shown.
Providing normal forms for VMs allows a more refined or deeper view of their computations: What features and their values for VMs can be increased or decreased to increase or decrease computing power?
Virus machines consists of three graphs: a directed and weighted host graph with nodes and edges referred to as hosts and channels, respectively;
a directed and weighted instruction graph where nodes are instructions and edge weights determine which instruction to prioritise and next activate;
an instruction-channel graph which connects an edge between instructions and channels in the previous graphs.
Hosts contain zero or more virus objects, and activating an instruction means opening a channel since the channels are closed by default.
Opening a channel means that virus objects from one host are replicated and transferred to another host.
Even from this high-level view of VMs and their features, it can be evident that the source of their computing power can be further restricted or refined.
Thus, a deeper understanding of VMs as algorithms is gained in order to aid in developing applications.
Briefly, the idea of a normal form for some computing model is to consider restrictions in the model while maintaining its computer power.
That is, considering lower bounds for ingredients in a computing model is a natural direction for investigation.
For instance, a well-known normal form in language theory is the Chomsky normal form, CNF in short, from <cit.>.
Instead of having an infinite number of forms to write rules in a grammar for context-free sets, CNF shows that two forms are enough.
Normal forms in unconventional or bio-inspired models include spiking neural P systems <cit.> and cellular automata <cit.>, with recent and optimal results in <cit.>, and a recent survey in <cit.>.
In addition to restriction while maintaining the same power, normal forms can provide frontiers.
For instance, by giving some lower bound for some value, further decreasing the value can mean that we only compute a proper subset of problems as before.
This paper contributes the following to the study of virus machines and their computing power.
Normal forms, some of which are optimal bounds, for VMs are provided in the following sense: (a) providing characterisations (previously were inclusions) for generating families of finite sets;
(b) showing new characterisations for finite sets of numbers using restrictions on the number of required hosts, instructions, or viruses; and (c)
new characterisations are also given for singleton sets of numbers and some linear progressions or combinations.
We also consider new restrictions: limiting or not the host or instruction graphs to be a tree graph, that is, an acyclic directed graph. Moreover, the instruction-channel graph can also be limited in the sense that one channel can be attached to at most one instruction.
We show, for instance, that some VMs with a tree instruction or host graph and with some lower bounds on the number of hosts, instructions, and viruses can only compute finite sets. Lastly, we highlight a new (and better) characterisation of semilinear sets of numbers, in short SLIN, with virus machines.
Our results on normal forms are then used to ask new questions regarding other normal forms and restrictions on VMs.
The present work is a much extended and revised version of the preliminary report in <cit.>.
For instance, results on the family SLIN are especially interesting from the point of view of theory and applications:
SLIN lies above the family of finite sets NFIN and below the family of Turing computable sets NRE, and SLIN is known to be decidable <cit.>;
the decidability of SLIN helps in analysing the computational complexity of machines for it;
and applications can benefit from the decidability of SLIN, including formal verification and proof assistants <cit.>.
The organisation of the present work is as follows. First, some brief definitions and the state of the art are presented in Section <ref>. After that, novel results are presented with the old ingredients in Section <ref>. We continue with novel results with the new ingredients proposed in Section <ref>. Lastly, some conclusions with open remarks are shown in Section <ref>.
§ DEFINITIONS
§.§ Virus Machines
A virus machine Π of degree (p,q), with p,q ≥ 0, is defined as:
Π = (Γ, H, I, D_H, D_I, G_C, n_1,…,n_p,i_1,h_out)
where:
* Γ={v} is the singleton alphabet.
* H={h_1,…,h_p} is the ordered set of hosts; h_out can be either in H or not (for this work, we will always suppose h_out∉ H); I={i_1,…, i_q} is the ordered set of instructions.
* D_H = (H∪{h_out}, E_H, w_H) is the weighted and directed (WD) host graph, where the edges are called channels and w_H : H× (H∪{h_out}) →ℕ.
* D_I = (I, E_I, w_I) is the WD instruction graph, with w_I : I× I→{1,2}.
* G_C = (E_H ∪ I, E_C) is an unweighted bipartite graph, called the channel-instruction graph, whose associated vertex partition is {E_H, I}.
* n_1,…,n_p∈ℕ are the initial numbers of viruses in hosts h_1,…, h_p, respectively.
Regarding the semantics, a configuration at an instant t≥0 is the tuple C_t = (a_1,t,a_2,t,…,a_p,t,u_t,a_0,t)
where, for each j∈{1,…, p}, a_j,t∈ℕ represents the number of viruses in the host h_j at instant t, u_t∈ I∪{#} is the next instruction to be activated (if u_t= #, then C_t is a halting configuration), and a_0,t∈ℕ is the number of viruses sent to the environment up to instant t. Lastly, C_0=(n_1,…,n_p,i_1,0) is the initial configuration.
From a configuration C_t, C_t+1 is obtained as follows. The instruction that will be activated is u_t if u_t∈ I; otherwise, C_t is a halting configuration. Let us suppose that u_t∈ I and that it is attached to the channel (h_j,h_j')∈ E_H with weight w∈ℕ; then the channel is opened and two possibilities hold:
* If a_j,t>0, then there is virus transmission, that is, one virus is consumed from h_j and is sent to the host h_j' replicated by w. The next activated instruction is the one reached through the highest-weight outgoing edge of u_t in the instruction graph. If this edge is not unique, the next instruction is chosen nondeterministically among them. If u_t has no outgoing edge, then u_t+1 = #.
* If a_j,t=0, then there is no virus transmission and the next instruction is the one reached through the least-weight outgoing edge. The remaining cases are analogous to the previous one.
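To make these semantics concrete, the following Python sketch simulates one (randomly resolved) computation of a virus machine described by plain dictionaries. Instructions attached to no channel are treated here like the no-transmission case, an assumption the definition above leaves implicit:

from random import choice

def simulate(hosts, channels, inst_channel, inst_graph, start, max_steps=10000):
    # hosts: {host: number of viruses}, with key 'env' playing the role of h_out.
    # channels: {name: (source host, target host, weight)}.
    # inst_channel: {instruction: name of the channel it opens}.
    # inst_graph: {instruction: [(next instruction, weight), ...]}.
    u = start
    for _ in range(max_steps):
        transmitted = False
        if u in inst_channel:                       # the activated instruction opens its channel
            src, dst, w = channels[inst_channel[u]]
            if hosts[src] > 0:                      # one virus consumed, w copies transferred
                hosts[src] -= 1
                hosts[dst] += w
                transmitted = True
        out = inst_graph.get(u, [])
        if not out:                                 # out-degree zero: halting configuration
            return hosts['env']
        sel = max(wt for _, wt in out) if transmitted else min(wt for _, wt in out)
        u = choice([i for i, wt in out if wt == sel])
    return None                                     # no halting configuration reached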
This paper is focused on the computational power of VMs in the generating mode, and for that we fix the same notation as in the foundational paper <cit.>. Let
NVM(p,q,n) be the family of sets of natural numbers generated by virus machines with at most p hosts, q instructions, and n viruses in each host at any instant of the computation. For unbounded restrictions, they are replaced by *.
§.§.§ Formal Verification
One technique to formally verify the integrity of these devices is to design invariant formulas which highlight interesting properties of the most relevant loops. For the sake of simplicity, some formal verifications will only show the invariant formula and how it works; the mathematical proof of these formulas can easily be obtained by induction. Lastly, we propose a novel technique for formal verification, namely, fixing the sizes of the loops in the instruction graph.
§.§ Topology
For this study, several notation and clarifications related to discrete topology must first be presented.
A path in a directed graph G = (V,E) is an ordered tuple
(v_1,…,v_n) of vertices such that (v_i,v_i+1) ∈ E for i=1,…, n-1 and v_i≠ v_j for i,j= 1,…,n with i≠ j. Under the same conditions, if (v_n,v_1)∈ E, then the path is called a cycle. A graph without cycles is called a tree. The depth of a tree is the length of the longest path of the tree.
We say that v_1 is connected to v_n if there is a path w= (e_1,…,e_n-1), whose sequence of vertices is (v_1,…,v_n). We denote by V(v_i)⊆ V the subset of vertices that are connected by a path from v_i.
A graph G= (V,E) is connected if there are paths that contain each pair of vertices v_i,v_j with i≠ j. A rooted tree is a graph with a distinguished
node v_i, called the root, such that for each v_j∈ V, with i≠ j, v_i is connected to v_j. We will note as I(v_i)⊆ V the subset of vertices that forms the rooted tree G_v_i = (I(v_i),E(v_i)) which is a subgraph of G.
If the instruction graph D_I of a virus machine Π of degree (p,q), with p,q≥ 1,
Π = (H,I,D_H,D_I,G_C,n_1,…,n_p,i_1,h_out),
is not a rooted tree with root i_1, then there exists another virus machine Π' of degree (p,q'), with q'≤ q, which has the same computation.
Let Π be the virus machine fixed in the statement, with instruction graph D_I=(I,E_I,w_I); as it is not a rooted tree with root i_1, we have I(i_1)≠ I. Let Π' be the virus machine of degree (p,q'), with q'=|I(i_1)|, defined as Π but with the new instruction graph D_I(i_1)= (I(i_1),E_I(i_1),w_I(i_1)).
Due to the semantics associated with virus machines, any instruction that can be activated must be connected by a path from the initial instruction; thus, the set of instructions of Π that can be activated at some instant of the computation is contained in I(i_1), therefore Π' has the same computation.
Using this result, from now on, all virus machines defined are supposed to have an instruction graph that is a rooted tree with root i_1, where i_1 is the initial instruction. In addition, the same notation for the components of a virus machine Π is used in the following results.
§.§ State-of-the-art
This subsection is devoted to reviewing results prior to this work on the computing power of VMs with respect to certain classes or families of computable numbers.
The state of the art is presented in Table <ref>. Virus machines in the generating mode are Turing universal; that is, they can generate the recursively enumerable sets of numbers (NRE) <cit.> when the restrictions are unbounded. This power is severely reduced when the last ingredient is bounded; more precisely, a characterisation of the semilinear sets (SLIN) is proved for NVM(*,*,2) <cit.>. Beyond this, only inclusions, not characterisations, have been proven: the finite sets (NFIN) are contained in NVM(1,*,*) and in NVM(*,*,1) <cit.>. Finally, the set of powers of two is contained in NVM(2,7,*) <cit.>.
An interesting and natural question is whether we can further restrict these ingredients, or provide better lower bounds, for the known results about VMs.
That is, provide “better” characterisations of finite sets or even of other families of sets, such as the singleton sets; see, for instance, Table <ref>.
As we focus on finite sets later, let us see the VMs used in <cit.> to generate finite sets. For NVM(1,*,*) the VM presented in Figure <ref>, and for NVM(*,*,1) the Figure <ref>. The corresponding lemmas were called (viruses) and (hosts), respectively, and we follow the same notation in this work.
§ NOVEL RESULTS WITH OLD INGREDIENTS
§.§ Finite sets
Let F = {m_1,…,m_k} be a nonempty finite set of natural numbers with 0<m_1<…<m_k. Then F can be generated by a virus machine with 2 hosts, 2m_k instructions, and at most 2 viruses in each host at any instant.
Let Π be the virus machine of degree (2,2m_k) defined as
Π = (Γ, H, I, D_H, D_I, G_C, n_1, n_2, i_1,h_out), where:
* Γ = {v};
* H = {h_1,h_2};
* I = {i_1, … , i_2m_k};
* D_H = (H ∪{h_out}, E_H={(h_1, h_2),(h_1, h_out),(h_2, h_1),(h_2, h_out)}, w_H), where
w_H((h_1, h_2)) = w_H((h_2, h_1)) =2 and
w_H((h_1, h_out)) = w_H((h_2, h_out)) = 1;
* D_I = (I , E_I , w_I ), where E_I={(i_a, i_a+1) | a∈{1,…, 2m_k-1}}∪
{(i_2m_i-1,i_2m_k) | m_i∈ F},
w_I((i_j, i_j')) = 1, for each (i_j,i_j')∈ E_I;
* G_C = (I ∪ E_H,E_C), where
[ E_C = ⋃_j ∈{0,…, m_k-1}, j even ({i_2j+1,(h_1,h_out)},{i_2j+2,(h_1,h_2)})∪; ⋃_j ∈{0,…, m_k-1}, j odd ({i_2j+1,(h_2,h_out)},{i_2j+2,(h_2,h_1)}); ]
* n_1 =2 and n_2= 0;
A visual representation of this virus machine can be found in Figure <ref>. Let us prove that for each m_i∈ F,
there exists a computation of Π such that it produces m_i viruses
in the environment in the halting configuration. Let m_i be the generated number; the following invariant holds:
φ(x) ≡{[ C_2x = (2,0,i_2x+1,x) x even,; C_2x = (0,2,i_2x+1,x) x odd, ].
for each 0≤ x≤ m_i-1. In particular, φ(m_i-1) is true. Let us suppose that m_i is odd; then the following computation is verified:
C_2(m_i-1) = (2,0,i_2m_i-1, m_i-1),
C_2m_i-1 = (1,0,i_2m_k, m_i),
and, activating i_2m_k, which has no outgoing edge in the instruction graph, the halting configuration C_2m_i is reached with m_i viruses in the environment.
For m_i even the computation is analogous; hence the computation halts after 2m_i steps and the number generated is m_i.
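As an illustration, the construction above can be checked with the simulate() sketch given after the semantics in the Definitions section. For F={1,3} (so m_k=3), the machine has six instructions and one extra edge (i_1,i_6); collecting the outputs of repeated random runs recovers exactly F:

def finite_set_machine():        # construction of the proof for F = {1, 3}
    hosts = {'h1': 2, 'h2': 0, 'env': 0}
    channels = {'c1e': ('h1', 'env', 1), 'c2e': ('h2', 'env', 1),
                'c12': ('h1', 'h2', 2), 'c21': ('h2', 'h1', 2)}
    inst_channel = {'i1': 'c1e', 'i2': 'c12', 'i3': 'c2e',
                    'i4': 'c21', 'i5': 'c1e', 'i6': 'c12'}
    inst_graph = {'i1': [('i2', 1), ('i6', 1)],     # extra edge (i_1, i_6) since 1 is in F
                  'i2': [('i3', 1)], 'i3': [('i4', 1)],
                  'i4': [('i5', 1)], 'i5': [('i6', 1)], 'i6': []}
    return hosts, channels, inst_channel, inst_graph

outputs = set()
for _ in range(200):
    hosts, channels, inst_channel, inst_graph = finite_set_machine()
    outputs.add(simulate(hosts, channels, inst_channel, inst_graph, 'i1'))
print(outputs)   # {1, 3}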
Another interesting result is that this inclusion is strict.
NFIN⊊ NVM(2,*,2).
Inclusion is direct by the Lemma <ref>. Let us now focus on inequality; for that, we construct a virus machine from <cit.>, extending the work from <cit.>, which generates the set of all natural numbers except the zero, which verifies the restrictions of the proposition.
Let Π_Nat = (Γ, H, I, D_H, D_I, G_C, 1,0, i_1, h_out), where:
* Γ = {v};
* H = {h_1,h_2};
* I = {i_1, … , i_4};
* D_H = (H ∪{h_out}, {(h_1, h_2),(h_2, h_out),(h_2, h_1)}, w_H), where
w_H((h_1, h_2)) = 2 and
w_H((h_2, h_out)) = w_H((h_2, h_1)) = 1;
* D_I = (I , E_I , w_I ), where E_I={(i_1,i_2),(i_2,i_3),(i_3,i_1),(i_3,i_4)}, and
w_I((i_j, i_j')) = 1 ∀ (i_j,i_j')∈ E_I;
* G_C = (I ∪ E_H,E_C), where E_C ={{i_1,(h_1,h_2)},{i_2,(h_2,h_1)},
{i_3,(h_2,h_out)}};
* h_out = h_0;
A visual representation of this virus machine can be found in
Fig. <ref>. Now, let us prove that for each n∈ℕ, n≥1, there exists a halting computation generating the number n. For generating this number, the following invariant holds:
φ(k) ≡ C_3k =(1,0,i_1,k), for each 0≤ k≤ n-1.
In particular, φ(n-1) is true; then the configuration C_3(n-1) = (1,0,i_1,n-1) is verified and, from here, after 4 transition steps the halting configuration C_3n+1 =(1,0,#,n) is reached, whose output is the natural number n.
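Using the same simulate() sketch, Π_Nat can be encoded directly from the definition above; every random run halts with probability one and outputs some n≥1 (instruction i_4 opens no channel and simply halts):

hosts = {'h1': 1, 'h2': 0, 'env': 0}
channels = {'c12': ('h1', 'h2', 2), 'c21': ('h2', 'h1', 1), 'c2e': ('h2', 'env', 1)}
inst_channel = {'i1': 'c12', 'i2': 'c21', 'i3': 'c2e'}     # i_4 is attached to no channel
inst_graph = {'i1': [('i2', 1)], 'i2': [('i3', 1)],
              'i3': [('i1', 1), ('i4', 1)], 'i4': []}
print(simulate(hosts, channels, inst_channel, inst_graph, 'i1'))   # some n >= 1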
§.§ Singleton Sets
Now let us move to the second family of sets, the Singleton sets, these are sets of natural numbers with only one element, in this work we include the empty set in this family.
The following sets of numbers are equivalent to singleton sets:
* NVM(1,*,1);
* NVM(*,1,*);
* NVM(1,1,1)
The proof of equivalence is done by the double inclusion technique.
* Let us start with the inclusion from the left side. Let {v} be a singleton set with v∈ℕ; then it can be generated by the VM Π_sing_1 of degree (1,1) depicted in Figure <ref>. The initial configuration is C_0 = (1,i_1,0) and, in the following transition, a virus is consumed, replicated by the weight of the arc, that is, v, and sent to the environment, leading to the halting configuration C_1 =(0,#, v). Thus, after one transition step, the set generated is {v}.
For the reverse inclusion, consider any VM with only one host and one virus: the host can only be attached to the environment, and let us fix that the weight of that channel is w∈ℕ.
Thus, the only number generated is w or none, depending on the instruction graph (that is, on whether the computation halts or not). In either case, we generate a singleton set.
* For the inclusion on the left side we can use the VM Π_sing_1 depicted in Figure <ref> as it only has one instruction and the inclusion has already been proven.
Let us focus on the inclusion of the right side. With only one instruction, there are two possibilities in the instruction graph:
* The node with a self-arc, which creates an infinite loop, thus a non-halting computation and generating the empty set.
* The node with no arcs, thus the machine, halts after only one transition step as there is no other possible path. In this sense, two options can be separated:
* The instruction is attached to a channel which is attached to the environment, generating a singleton set.
* The instruction is not attached to a channel which is attached to the environment, thus the set generated is {0}.
* Lastly, the left side inclusion is again using Π_sing_1, and the right side is direct as the restrictions are stronger than the previous statements.
§.§ Finite linear progressions
The computing power of virus machines depends strongly on the instruction graph; to show this, let us see the following result when we bound the number of instructions by 2. For this, we fix the following notation:
Let NLinFIN = ⋃_x∈ℕ⋃_n∈ℕ⋃_N∈ℕ({x + n· i: 0≤ i≤ N}) ∪{ ∅} be the family of finite linear progressions. The following result holds.
NVM(p,2,*) = NLinFIN, for each p≥ 2.
For the right-side inclusion, for any x,n,N∈ℕ, let us see that there exists a VM Π_Lin of degree (2,2) that generates the set {x + n· i: 0≤ i≤ N}. The virus machine is depicted in Figure <ref>.
Suppose that the number generated is x + n· k with 0≤ k ≤ N; then the following computation holds.
C_0 = (N,1,i_1,0),
C_1 = (N-1,1,i_1,n),
⋮
C_k-1 = (N-(k-1),1,i_1,n·(k-1)),
C_k = (N-k,1,i_2,n· k),
C_k+1 = (N-k,0,#,x + n· k).
Therefore, the number is generated; the other inclusion is straightforward.
For the inclusion on the left side, some preliminary considerations should be taken into account. First, let us fix that I={i_1,i_2} is the set of instructions and that i_1 is the initial instruction. We only consider the instruction graphs for which there is at least one halting computation. In addition, we consider those in which there is more than one computation, that is, at least one non-deterministic decision; otherwise, only singleton sets can be generated (which are included in NLinFIN).
With all of this in mind, only one instruction graph remains, namely the one depicted in Figure <ref>. What can differ is the instruction-channel graph: if i_1 is attached to a channel not connected to the environment, then only singleton sets can be generated; otherwise, we obtain the arithmetic progression as stated before.
§.§ Finite linear combinations
Continuing with the idea of the previous subsection, let us see that with 3 instructions we still have a strong limitation in computational power. First, we define a family of sets that we will try to characterise, let:
a_w_1,w_2,N_1,N_2 = {w_1x+w_2y+r | 1≤ x≤ N_1, 1≤ y≤ N_2};
b_w_1,w_2,N_1,N_2 = {f_w_1,w_2^N_1,N_2(x,y) | 1≤ x≤min(N_1,N_2),
1≤ y≤ |N_2-N_1|};
f_w_1,w_2^N_1,N_2(x,y) = {[ (w_1+w_2)x+r, x < min(N_1,N_2);; (w_1+w_2)N_1 + w_2y + r, x=N_1∧ N_1<N_2,; (w_1+w_2)N_2 + w_1y + r, x=N_2∧ N_2<N_1,; ].
for each w_1,w_2,N_1,N_2∈ℕ, where r∈ℕ is an additional fixed parameter. Finally, let
A = {a_w_1,w_2,N_1,N_2}_w_1,w_2,N_1,N_2∈ℕ,
B = {b_w_1,w_2,N_1,N_2}_w_1,w_2,N_1,N_2∈ℕ, be two families of sets of natural numbers; we define the family of finite linear combinations as NCombFIN = {∅}∪ A ∪ B.
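Read operationally, and treating r as an additional parameter, the two families can be enumerated directly from the formulas above; a short sketch:

def a_set(w1, w2, N1, N2, r):
    # a_{w1,w2,N1,N2}: all values w1*x + w2*y + r with 1 <= x <= N1, 1 <= y <= N2
    return {w1 * x + w2 * y + r for x in range(1, N1 + 1) for y in range(1, N2 + 1)}

def b_set(w1, w2, N1, N2, r):
    # b_{w1,w2,N1,N2}: values of f for 1 <= x <= min(N1,N2), 1 <= y <= |N2-N1|
    out = set()
    for x in range(1, min(N1, N2) + 1):
        for y in range(1, abs(N2 - N1) + 1):
            if x < min(N1, N2):
                out.add((w1 + w2) * x + r)
            elif x == N1 and N1 < N2:
                out.add((w1 + w2) * N1 + w2 * y + r)
            elif x == N2 and N2 < N1:
                out.add((w1 + w2) * N2 + w1 * y + r)
    return out

print(sorted(a_set(2, 3, 2, 2, 1)))   # [6, 8, 9, 11]
print(sorted(b_set(2, 3, 2, 3, 1)))   # [6, 14]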
NCombFIN = NVM(p,3,*), for each p≥ 3.
⊆ The idea of this inclusion is to prove that there exists a virus machine for generating: (i) the empty set, (ii) the family of sets A, and (iii) the family of sets B.
(∅) This is trivial, any VM with no halting computations generates the empty set, for instance, a VM of degree (p,3) with p≥ 1, that has a loop of size 3, thus there cannot be a halting computation and the empty set is generated.
(A) Let Π_1 be the virus machine of degree (3,3) visually presented in Figure <ref>. Let us suppose that the number generated is m = w_1x'+ w_2y' +r; then the following computation holds. The initial configuration is C_0 = (N_1,N_2,1,i_1,0); from here, one virus is transmitted from host h_1 to the environment replicated by w_1, and the following instruction is non-deterministically chosen between i_1 and i_2; here we choose i_1. This process is repeated x'-1 times, thus reaching the configuration C_x'-1 = (N_1-x'+1,N_2, 1, i_1,w_1(x'-1)). Here we choose instruction i_2, leading to the configuration C_x'= (N_1-x',N_2,1,i_2,w_1x'). Now the process is analogous with instruction i_2 and host h_2. After y' further transition steps, we choose instruction i_3, which opens the channel (h_3,h_0), reaching the halting configuration C_x'+y'+1= (N_1-x',N_2-y', 0, #, w_1x'+w_2y'+r); that is, after x'+y'+1 steps, the machine halts and sends w_1x'+w_2y'+r viruses to the environment.
(B) Let Π be the VM of degree (3,3) presented in Figure <ref>. Let us see that it can generate any b_w_1,w_2,N_1,N_2, for each w_1,w_2,N_1,N_2∈ℕ. The main difference with the previous VM is the loop between instructions i_1 and i_2, which means that we send the same number of viruses from host h_1 and from host h_2 to the output region, unless the loop is repeated more than min(N_1,N_2) times, in which case we send viruses only from the host that still contains some viruses. After that, we send 1 virus from host h_3, replicated by r. Finally, the halting configuration is reached, having sent one of the elements of b_w_1,w_2,N_1,N_2 to the environment.
⊇ The idea of this proof is based on the possible loops that can exist in a virus machine with 3 instructions. It is important to note that the way to maximise the amount of numbers that can be generated by an explicit virus machine depends on the amount (or size) of nondeterministic loops in the device.
We fix that the initial instruction is always denoted by i_1, and that there has to be at least one instruction with out-degree zero; otherwise, we generate the empty set (which is trivially included). We fix that the instruction with out-degree zero is i_3.
Lastly, it is important to note that the finite linear progression is included in this family of sets, we just need to fix that w_2=0.
Size 0 With no loops, we generate singleton sets and sets of size two, fixing that i_1 is attached to both i_2 and i_3 with weight 1.
Size 1 Here we have the VM presented previously in Figure <ref>. The loops can only be at i_1 and i_2. If we fix only one of them, we can only generate a subset of the sets generated by the previous VM.
Regarding the host graph, we will suppose that at least one of the instructions from i_1 and i_2 is not attached to a channel associated with the environment. But in those cases we are generating finite linear progressions, that is included in this family of sets.
Size 2 Given that instruction i_3 has out-degree zero, the only possibility is that the loop of size 2 is between instructions i_1 and i_2. In addition, there can exist other loops of size 1; these are omitted because they lead back to the previous case.
Regarding the host graph, to generate sets larger than the singleton sets, we need at least one of the instructions i_1 and i_2 to send viruses to the environment. We will suppose that both of them send viruses through different channels since, in the other case, we generate finite linear progressions, which are already included.
With all previous cases discarded, there is only one kind of virus machine remaining, which is represented in Figure <ref>.
Size 3 This case is trivial: as there is no halting computation, the empty set is generated.
§.§ Discussion of old ingredients
In this subsection, a brief discussion of the results obtained with the previous ingredients is presented, summarised in Table <ref>.
§ NOVEL RESULTS WITH NEW INGREDIENTS
For virus machines in generating mode, the notation used in previous works <cit.> was NVM(p,q,r), which denotes the family of sets of natural numbers generated by virus machines with at most p hosts, q instructions, and at most r viruses at each instant of the computation. Computational completeness has been proven when these ingredients are unbounded <cit.>, that is, NRE = NVM(*,*,*), but the power decreases when one of them is bounded; for example, a characterisation of semilinear sets has been proven in <cit.>, that is, SLIN = NVM(*,*,r) for each r≥ 2. However, these three ingredients alone seem to convey little information. We propose the following notation:
NVM_β(h_p,i_q,nvh_r,wc_s,outd_t,α_ħ^u,α_ι^v),
where:
* p,q,r≥ 1 represent the same as before,
* β∈{T,F} indicates whether each channel is attached to exactly one instruction; if there is a bijection between instructions and channels then β = T, otherwise β = F,
* s≥ 1 is the maximum weight of the arcs in the host graph,
* t is the maximum out-degree over the hosts in the host graph,
* u≥ 0 is the size of the greatest loop in the host graph,
* v≥ 0 is the size of the greatest loop in the instruction graph.
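As a purely illustrative aid (not part of the formal notation), the ingredient vector of such a family can be recorded programmatically; in the following Python sketch, None stands for an unbounded ingredient (*), and all names are our own:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class VMIngredients:
    p: Optional[int]        # max number of hosts (None = unbounded, i.e. *)
    q: Optional[int]        # max number of instructions
    r: Optional[int]        # max number of viruses at any instant
    bijective: bool         # beta: True iff channels and instructions are in bijection
    s: Optional[int]        # max weight of the arcs in the host graph
    t: Optional[int]        # max out-degree of the hosts
    u: Optional[int]        # size of the greatest loop in the host graph (0 = tree)
    v: Optional[int]        # size of the greatest loop in the instruction graph (0 = tree)

# Example: NVM_F(h_2, i_*, nvh_2, wc_2, outd_2, alpha^h_2, alpha^i_*)
example = VMIngredients(p=2, q=None, r=2, bijective=False, s=2, t=2, u=2, v=None)
```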
The rationale for why we chose these new ingredients will be clearly shown in the following sections; however, let us take a brief look at an introductory idea behind each new ingredient.
First, we believe that the instruction–channel graph has something to say about computational power; that is why we consider the bijection restriction, which reveals a new frontier of computational power when combined with other ingredients.
Secondly, fixing the directed graphs to be trees is a first attempt at approaching the topological structure of the graphs; requiring trees (u or v equal to zero) will allow us to obtain characterisations of finite sets, as we will see later. Nevertheless, we believe that not only the presence of a loop is important, but also its size; the relation between these sizes and the power of the devices yields interesting results. Note that u=1 is the same as u=0, as there are no self-arcs in the host graph.
In addition, the weight of the arcs is the only way to increase the number of viruses: fixing this weight to one makes the power fall substantially, whereas with weight 2 we can already obtain novel universality results.
Lastly, we believe that the out-degree of the hosts is also crucial for restricting the computing power, as will be shown in combination with other ingredients.
§.§ Finite sets
If a VM generates an infinite set of natural numbers, then its host graph has at least one cycle.
Let Π be a VM of degree (p,q) whose host graph is acyclic and where n_1,…, n_p ∈ ℕ are the initial numbers of viruses in the hosts. Let us see that the greatest number that can be generated is bounded.
First, remark that, as there are no cycles, the host graph is a tree. To generate the greatest number, we will always choose to transmit the viruses through the maximum-weight channel. This idea is shown in Figure <ref>.
Note that in order to obtain more replication with the fixed amount of hosts, that is, to maximise the depth, the out-degree of each host will be fixed to 1. Thus, the host graph will have the structure shown in Figure <ref>.
Thus, the greatest number of viruses is generated by sending all viruses from host h_1 to host h_2, then all of those viruses to host h_3, and so on until h_out. The number of viruses that reaches the output region is:
∑_i=1^p((∏_j=i^pw_j)· n_i).
In summary, the greatest number that can be generated is bounded, and thus the set generated is finite.
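As an illustration, the bound above can be evaluated directly from the chain of channel weights and the initial virus counts; the following Python sketch is only an illustrative aid and assumes the path-shaped host graph h_1 → h_2 → … → h_p → h_out described above:

```python
from math import prod

def max_generated(weights, initial_viruses):
    """Upper bound on the output of a VM whose host graph is the path
    h_1 -> h_2 -> ... -> h_p -> h_out.

    weights[i] is the weight of the channel leaving host h_{i+1};
    initial_viruses[i] is n_{i+1}, the initial number of viruses in h_{i+1}.
    """
    p = len(initial_viruses)
    assert len(weights) == p
    # Viruses starting in host h_i are replicated by every weight on the
    # remaining path to the output region: w_i * w_{i+1} * ... * w_p.
    return sum(prod(weights[i:]) * initial_viruses[i] for i in range(p))

# Example: 3 hosts with weights 2, 3, 1 and initial viruses 1, 0, 2.
print(max_generated([2, 3, 1], [1, 0, 2]))  # 2*3*1*1 + 3*1*0 + 1*2 = 8
```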
NVM_F(h_*,i_*, nvh_*, wc_*,outd_*,α^ħ_0,α^ι_*) ⊆ NFIN.
With finite sets, there are some interesting results; one obtains characterisations, as wc_1 limits the computation to a finite number of viruses.
The following families of sets are equal to NFIN:
* NVM_F(h_1,i_*,nvh_*,wc_s,outd_t,α^ħ_0,α^ι_0), for each s,t≥ 1.
* NVM_T(h_*,i_*,nvh_r,wc_*,outd_t,α^ħ_0,α^ι_0), for each r,t≥ 1.
Separating by the two cases, we have:
* The proof follows from the previous corollary and the inclusions of <cit.>. At a glance, we can see in Figure <ref> that both the instruction and the host graphs are trees.
* On the other hand, in Figure <ref> we can highlight that each channel is associated with only one instruction, that is, β = T.
Now we move to similar results with the instruction graph fixed as a tree. For this, the following result holds:
If the instruction graph is a tree, then all computations halt. In addition, the number of transition steps is bounded by the depth of the tree.
NVM_F(h_*,i_*, nvh_*, wc_*,outd_*,α^ħ_*,α^ι_0) ⊆ NFIN.
The inclusion is direct by Proposition <ref>, as all computations halt, then every virus machine halts in a finite number of steps, thus the set of numbers that can be generated is finite.
The following families of sets are equal to NFIN:
* NVM_F(h_1,i_*,nvh_*,wc_s,outd_t,α^ħ_0,α^ι_0), for each s,t≥ 1.
* NVM_T(h_*,i_*,nvh_r,wc_*,outd_t,α^ħ_0,α^ι_0), for each r,t≥ 1.
* NVM_F(h_p,i_*,nvh_r,wc_s,outd_t,α^ħ_u,α^ι_0), for each p,r,s,t,u≥ 2.
The proof of the first two statements is analogous to the proof of Theorem <ref>. The third statement can be proved by applying Lemma <ref> and Corollary <ref>.
It is interesting to note that now with v = 0, we have gone from strict inclusion in Proposition <ref> to characterisation in the last statement in Theorem <ref>.
§.§ Semilinear sets
The authors in <cit.> characterise semilinear sets by VMs in generating mode, that is, SLIN = NVM(*,*,r), for all r≥ 2. Reviewing how that proof was constructed, we can assert the following result with the new ingredients:
SLIN = NVM_T(h_*,i_*, nvh_r,wc_*,outd_t,α^ħ_u,α^ι_v)
for all r,t,u≥2 and v≥ 3.
The question that arises is: can we obtain another trade-off among the unbounded ingredients? In this section, we prove that we can, by demonstrating the following theorem:
SLIN = NVM_F(h_p,i_*, nvh_r,wc_s,outd_t,α^ħ_u,α^ι_*)
for each p,r,s,t,u≥2.
⊇ This part of the proof uses the same technique applied by the authors in <cit.>, that is, simulating a right-linear grammar. This is possible because the number of viruses in each host at each moment of the computation is bounded, and thus the number of possible configurations is finite. We are still under the same conditions, so the same proof can be carried out.
⊆ For simplicity, this part of the proof has been divided into two lemmas. Applying Lemma <ref>, the arithmetic progressions are generated by this family of VMs. Lastly, Lemma <ref> shows closure under union. Thus, the inclusion is formally proved.
For each n,r≥ 1, we have the following inclusion: {n· i + r | i≥ 1}∈ NVM_F(h_2,i_3(n+r), nvh_2, wc_2,outd_2,α^ħ_2,α^ι_*). More precisely, it will be generated by the virus machine Π_arith of degree (2,3n+3r) defined as:
Π_arith = (Γ, H = {h_1,h_2}, I , D_H, D_I, G_C, 0,1,i_1,h_0),
where,
* I = {i_1,…,i_3n+3r};
* D_H = (H∪{h_0}, E_H = {(h_1,h_2),(h_1,h_0),(h_2,h_1)}, w_H), where
w_H(h_1,h_0) = w_H(h_2,h_1) = 1, and w_H(h_1,h_2) = 2;
* D_I = (I, E_I, w_I), where E_I = {(i_k,i_k+1) | k∈{1,…, 3n+3r-1}}∪{(i_3n,i_1)}, and w_I(i_k,i_k') = 1 for each (i_k,i_k')∈ E_I;
* G_C = (E_H ∪ I, E_C), where E_C = {{i_j,f(j)} | j ∈{1,…, 3n+3r}}, with
f(j) = (h_1,h_0) if j ≡ 0 (mod 3);  (h_1,h_2) if j ≡ 1 (mod 3);  (h_2,h_1) if j ≡ 2 (mod 3).
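To make the definition concrete, the following Python sketch (a hypothetical helper, not part of the formal model) assembles the three graphs of Π_arith for given n and r exactly as specified above:

```python
def build_pi_arith(n, r):
    """Construct the graphs of Pi_arith for given n, r >= 1 (sketch)."""
    q = 3 * n + 3 * r                      # number of instructions
    hosts = ["h1", "h2"]
    instructions = [f"i{k}" for k in range(1, q + 1)]

    # Host graph D_H: channels with their weights.
    host_arcs = {("h1", "h2"): 2, ("h1", "h0"): 1, ("h2", "h1"): 1}

    # Instruction graph D_I: chain i_1 -> ... -> i_q plus the loop arc i_{3n} -> i_1.
    instr_arcs = [(f"i{k}", f"i{k+1}") for k in range(1, q)]
    instr_arcs.append((f"i{3*n}", "i1"))

    # Channel-instruction graph G_C: instruction i_j is attached to channel f(j).
    def f(j):
        if j % 3 == 0:
            return ("h1", "h0")
        elif j % 3 == 1:
            return ("h1", "h2")
        return ("h2", "h1")

    attachment = {f"i{j}": f(j) for j in range(1, q + 1)}
    initial_viruses = {"h1": 0, "h2": 1}   # as given in the definition of Pi_arith
    return hosts, instructions, host_arcs, instr_arcs, attachment, initial_viruses
```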
Suppose that the number generated is m· n+r with m≥ 1, then the following invariant holds:
φ(k) ≡ C_3k = (1,0,i_1,k· n), for each 0≤ k < m
The idea of this invariant is to take the non-deterministic decision at instruction i_3n, that is, to go back to instruction i_1. In particular, φ(m-1) is true. From this, instruction i_3n is reached, and we take the decision to go to instruction i_3n+1, with the following configuration C_3m = (1,0,i_3n+1,m· n). After that, it is straightforward to reach the halting configuration.
C_3m+3r = (1,0,#,m· n + r)
Thus, there exists a computation that generates the number m· n +r.
Regarding the other inclusion, that is, that for each halting computation of the machine the number generated is in the proposed set: it can be seen at a glance that for any generated number the previous invariant formula holds, and after that, r viruses are sent to the environment. Thus, the other inclusion is proved.
Let Q_1,Q_2,…,Q_m ⊆ ℕ, with m>0, be arithmetic progressions; then ∪_j=1^mQ_j ∈ NVM_F(h_p,i_*, nvh_r,wc_s,outd_t,α^ħ_u,α^ι_*) for each p,r,s,t,u≥2.
The main idea of the proof is that we can keep the same host graph when taking the union of the machines for each subset. Induction will be used for this purpose.
Q_1∪Q_2 As both Q_1 and Q_2 are arithmetic progressions, we can apply Lemma <ref>; we denote by Π_j the machine generating Q_j for each j∈{1,2}, and this sub-index notation is extended to the heterogeneous networks of each machine. Note that the host graphs and the initial amounts of viruses of the two virus machines remain equal. The construction is visually explained in Figure <ref>, from which it can be seen at a glance that it generates Q_1∪ Q_2, proving that Q_1∪ Q_2 is in the families of the statement.
∪_j=1^m-1Q_j ∪Q_m For the inductive step, let us suppose that a virus machine Π_m-1 generates the union, with the host graph mentioned in the base case. Applying Lemma <ref> again, let Π_m be the virus machine that generates the set Q_m. The construction of the virus machine that generates the union of both sets is analogous to the previous one and is visually explained in Figure <ref>; thus it has been proved that ∪_j=1^m Q_j is in the families of the statement.
§.§ Universality
Lastly, it is interesting to note that new frontiers of universality can be obtained with these novel ingredients.
NRE = NVM_F(h_*,i_*,nvh_*,wc_s,outd_*,α^ħ_*,α^ι_*) for each s≥ 2.
For the proof, we refer to <cit.>, where Turing completeness was proven by simulating register machines in a modular way. In that simulation, one can see at a glance that the weights of the channels are bounded by 2; thus the theorem is proved.
§.§ Discussion of new ingredients
Here, Table <ref> summarises the previous results together with the new results obtained in this work with the proposed new ingredients. First, the singleton sets were characterised in the previous section with the old ingredients; note that including the new ingredients is straightforward. For finite sets, it is interesting to note that the new characterisations were previously only inclusions, obtained just by fixing the host graph and/or the instruction graph to be a tree; the inclusion becomes strict once loops of size two in the host graph and of size three in the instruction graph are allowed. The same loop restrictions are used to characterise the semilinear sets, but with the number of hosts unbounded instead; note that although the β ingredient seems very restrictive, this family of sets can still be characterised. The most interesting result of this section arises when we unbound the size of the loops in the instruction graph and set β = F, which allows us to reduce the number of necessary hosts to two. Last but not least, the universality result from <cit.> has been revised with the new ingredients, showing that there is a huge frontier in computational power: from finite sets with weight 1 to NRE with weight 2.
§ CONCLUSIONS
Some directions for future work include the extension to parallel VMs e.g. <cit.>.
In such VMs more than one instruction can be active at each computation step, so some of the restrictions in the present work may not apply.
Another natural extension of the present work is to apply them to VMs for accepting inputs or as transducers, that is, computing functions using both input and output <cit.>.
The results on normal forms from the present work can better inform the design of applications or implementations of VMs.
For instance, perhaps with ideas from <cit.>, sequels of the simulator from <cit.>, or applications of VMs such as edge detection <cit.> and cryptography <cit.>, can be improved by applying simplifications based on normal forms.
Similarly, knowing which types of instruction, host, or channel graphs are used, the reachability of one configuration from another, and other properties, may become decidable; in this case, searching for efficient algorithms and applications to formal verification are open problems.
Results on classes below NRE help inform the computational complexity of problem solving with a model.
In the case of VMs, a future direction is to investigate “sub-NRE” classes more deeply: in this way, a better view of the efficiency of VMs, or lack of it, can be given.
Lastly, new and optimal lower bounds are expected to improve the results of the present work, summarised in Table <ref>.
New normal forms can perhaps include new ingredients or semantics not previously considered, such as a deeper focus on deterministic versus nondeterministic computations.
| http://arxiv.org/abs/2409.03402v1 | 20240905103816 | Game On: Towards Language Models as RL Experimenters | ["Jingwei Zhang", "Thomas Lampe", "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Martin Riedmiller"] | cs.AI | ["cs.AI", "cs.RO"] |
*Equal contribution.
§ ABSTRACT
We propose an agent architecture that automates parts of the common reinforcement learning experiment workflow, to enable automated mastery of control domains for embodied agents.
To do so, it leverages a VLM to perform some of the capabilities normally required of a human experimenter, including the monitoring and analysis of experiment progress, the proposition of new tasks based on past successes and failures of the agent, the decomposition of tasks into sequences of subtasks (skills), and the retrieval of the skill to execute – enabling our system to build automated curricula for learning.
We believe this is one of the first proposals for a system that leverages a VLM throughout the full experiment cycle of reinforcement learning.
We provide a first prototype of this system, and examine the feasibility of current models and techniques for the desired level of automation.
For this, we use a standard Gemini model, without additional fine-tuning,
to provide a curriculum of skills to a language-conditioned Actor-Critic algorithm,
in order to steer data collection so as to aid learning new skills.
Data collected in this way is shown to be useful for learning and iteratively improving control policies in a robotics domain.
Additional examination of the system's ability to build a growing library of skills, and to judge the progress of the training of those skills, also shows promising results, suggesting that the proposed architecture provides a potential recipe for fully automated mastery of tasks and domains for embodied agents.
§ INTRODUCTION
Recent progress on leveraging large (vision) language models (VLMs/LLMs) for reinforcement learning and robotics has demonstrated their usefulness for learning robot policies <cit.> as well as for high-level reasoning <cit.>,
and they have also aided research into automated generation of reward functions for policy learning <cit.>.
In doing so, LLMs have reduced the amount of domain-specific knowledge that an RL researcher would normally need to provide.
Yet there are still many steps within the experiment workflow of training policies via reinforcement learning (RL) that currently require human intervention,
such as deciding when an experiment has concluded or building a curriculum of tasks <cit.> to facilitate the learning of a target task.
While some work exists in the literature that attempts to automate some of these steps (e.g. automated training and evaluation of standard machine learning tasks <cit.> or automated curriculum building <cit.>
within the community of automated machine learning), these more automated systems usually consider the individual steps in isolation, using models specifically trained to automate a single step.
In this work, we set out to give a first sketch of how a large vision language model (VLM) could be used to automate most of the missing capabilities that would be required to automate a reinforcement learning (RL) experiment.
We propose a system architecture that
uses a VLM to automate most parts of the reinforcement learning experiment loop (with the current exception of providing the reward signal), and trains a growing set of motor skills for increasing mastery of desired domains.
This architecture integrates several of the capabilities traditionally required from a human experimenter:
* The proposition of new tasks to perform/learn, given a set of already-known tasks.
* The decomposition of higher-level tasks into sequences of low-level skills; paired with retrieving the actual skill the robot possesses.
* Judging whether training of a set of skills has concluded, and a new round of data collection should be started for subsequent reinforcement learning.
To implement our agent, we focus on examining the suitability of currently available VLMs and prompting techniques for the intended level of automation; rather than attempting to expand their capabilities.
We therefore limit the scope of this implementation to only some of the components of the proposed system.
Notably, we do not automate the stopping of the experiment and the gradual additions of skills to the system yet, and instead present a post-hoc evaluation for both to mimic what their effect would have been.
In addition, due to the unavailability of a robust model that can automatically generate reward functions for arbitrary tasks, we do not automate the addition of arbitrary new skills and limit our evaluation to a domain with known rewards.
In our prototype, all of the reasoning capabilities are driven by a single, general-purpose, and publicly available VLM [We use Gemini 1.5 Pro <cit.> in all experiments.], and are achieved zero-shot via prompting techniques.
Data generated under this high-level VLM's supervision is then used offline to improve a separate 'low-level' policy, which is trained specifically to output actions to control a robot, and is task-conditioned on language instructions from the high-level system.
To showcase the usefulness of our approach, we train a policy to perform multiple manipulation tasks on a simulated robot.
We show that the VLM-guided exploration produces richer data diversity, which in turn improves performance during successive iterations of policy self-improvement via fine-tuning.
In addition, we show that the same VLM can provide experiment supervision by judging the point at which an experiment should be considered to have converged.
Lastly, we illustrate that if we provide the VLM with growing sets of skills from different stages of the learning process – as judged by the VLM itself – it can produce reasonable decompositions for each stage and guide learning of progressively more complicated skills.
§ RELATED WORK
§.§ LLM-based Virtual Agents
Following the significant improvements in performance and capabilities of LLMs,
the field of LLM-based agent has seen a surge of recent interest.
Using an LLM as a general-purpose controller, recent work has attempted to replace
components or capabilities that used to require different pieces of software, models or human researchers by using outputs generated by prompting large language models.
Among these, there are, for example, works that propose general strategies to obtain enhanced inference from agentic LLMs by the use of chain-of-thought reasoning <cit.>,
self-consistency <cit.>,
self-reflexion <cit.>,
ReACT chains and tool use <cit.>.
More relevant to our work are the increasing number of LLM-empowered agents proposed to automate science and engineering tasks.
For example, LLM-based software engineer agents are now being designed to assist software development, leveraging the greatly improved coding capabilities of these models. This includes work that utilizes language models to enhance various aspects of software development such as assistive AI pair-programming for interactive notebooks or algorithmic reasoning within competitive programming <cit.>. Some recent work goes even further, e.g.
the SWE-Agent <cit.> explores performing end-to-end software engineering with LLM-based agents where a custom agent-computer interface is built to facilitate the agent to navigate repositories, edit code and execute programs.
On the side of automating scientific research,
LLM-based agents have been proposed to perform the work of researchers:
this includes generating novel research directions <cit.>,
reading through relevant literature to gather information <cit.>,
automating the discovery of scientific knowledge <cit.>,
and coming up with hypotheses and revising them based on experimental evidence <cit.>.
In the specific field of automating machine learning research,
there are studies that use LLMs to help hyper-parameter tuning of machine learning models <cit.>,
as well as work that gives the LLM-agent capabilities to interact with computer files and execute code; thus conducting machine learning experimentation in a more integrated fashion <cit.>.
In this work, using a vision language model to both monitor the progress of a machine learning experiment and to examine the resulting performance to influence later experiments is one of the aspects that we focus on.
Different to existing work,
instead of automating experimentation in purely virtual domains,
we perform experiments with an embodied, robotic, agent in this work for automating RL research and automating domain mastery.
§.§ LLM/VLM-based Embodied Agents
Above we have discussed works that utilize LLMs to accomplish pure virtual tasks.
There are, however, also LLM-empowered methods that are designed to assist embodied agents.
For example, in the Minecraft domain, there is work on using large-scale foundation VLMs to learn behavior policies within the video pre-training (VPT) line of work <cit.>.
More closely related to our work is the open-ended Voyager agent <cit.>.
In particular, in Voyager, GPT-4 <cit.> acts as the backbone,
proposing tasks for it to accomplish and writing code to help it achieve goals.
It maintains a skill library which keeps track of LLM-generated and self-verified executable code interfacing with the Minecraft environment through JavaScript API; while in our case the stored skills are learned parameterized low-level control policies rather than code.
They employ an auto-curriculum proposed by LLMs to enable the agent to perform open-ended exploration; we adopt a similar mechanism, but use it to facilitate automatic domain mastery via RL, although we limit our prototype application to one robotic domain with a restricted set of skills that we can easily evaluate.
In the robotics domain,
CaP <cit.> is closely related to Voyager in the code-writing aspect.
It leverages LLMs to write policy code using perception and control APIs to control robots.
While its follow-up work PromptBook <cit.> provides further improvements and guidance in prompting LLMs to write code for low-level manipulation control primitives,
the high-level reasoning capability of LLMs is not highlighted in these works.
Utilizing LLM-based reasoning for robotics tasks was pioneered by SayCan <cit.> which uses language models to decompose a given high-level task into sub-tasks,
in which the decoding for decomposition is constrained by the availability of the robot skills and weighed by the affordance of skills under a current scene.
In their work, instructions or high-level tasks are provided by human operators rather than suggested by the LLM itself, which restricts the usability of their method for more general and open-ended purposes such as exploration or automatic mastery of domains, as done in this work. Our work is complementary in that we do not restrict the suggestion and decomposition performed by the high-level LLM; but do restrict ourselves to a fixed set of executable low-level skills for which rewards can be computed.
Likewise, <cit.> also uses LLMs to decompose tasks into sub-goals, and uses those as instructions for a language-conditioned policy.
Similar to SayCan, the tasks in their work are explicitly provided by a user.
Furthermore, it requires a separate VLM to obtain text descriptions from visual observations, as well as fine-tuning of an LLM to their specific domain.
This is in contrast to our work where a native multimodal Gemini <cit.> model is used without the need for finetuning.
On the task-proposing front, there are several works that focus on simulated domains: <cit.> propose to leverage LLMs for both proposing tasks and generating simulation code for the proposed tasks, while <cit.> further sketch a system that also includes components like code generation for reward.
Since we are interested in applying our proposed system directly in the real world, these methods would need further adaptation, e.g. by adding automatic reward modeling methods that do not require access to the simulator state.
A perhaps most closely related LLM-assisted agent in the control/robotics domain to our work is AutoRT <cit.>.
They adopt an LLM-based approach to orchestrate a fleet of robots to collect diverse data,
where tasks are proposed on-the-fly by LLMs.
While the open-ended task proposition is very similar to the Voyager paradigm and our work,
there are several notable differences to our approach.
First, AutoRT is purely aimed at data collection, without any active learning of policies 'in the loop', whereas we specifically set out to automate the steering of the learning process of a low-level RL learner.
Secondly, since their proposed tasks are at skill-level already,
there is no decomposition component in their agent architecture and particularly no sequencing of skills.
The authors also only measure the diversity of the collected data, but do not validate the quality of the data empirically by conducting any type of policy training with it.
In contrast, we do use the data collected by our proposed system to perform self-improvement and bootstrap a more capable policy.
In order to eventually execute and evaluate arbitrary tasks, plans and subgoals proposed by an automated experimenter, one key capability would be to obtain a reward function from the language caption of the task. As discussed above, the current work does not yet include such a step.
While we do not investigate this aspect in this work (we restrict our self-improvement experiments on training skills which the reward function is known), we do note that there is a growing body of works that utilize LLMs to write reward code for desired behavior <cit.> and VLMs as general reward models or success detectors <cit.>, which are candidates for integration into our system once reliable enough for use in reinforcement learning domains.
§ SYSTEM ARCHITECTURE
We study the setting of an embodied agent with access to a workspace containing objects that it can interact with.
We propose a VLM-based agent architecture that can enable automatic mastery of the environment. By mastery we here mean that we expect the agent to be capable of accomplishing any meaningful task – for which we can measure success by a given set of reward functions – with any object in the environment by the end of the learning process,
and by automatic we mean that no human researcher is required to come up with a decomposition of tasks or a curriculum for learning the tasks in a specific order during the learning process and that no researcher is needed to monitor the training progress.
Our proposed agent architecture to fulfill this goal consists of the following modules:
* The curriculum module, which performs high-level reasoning to guide the learning process with auto-curriculum.
More specifically,
it is in charge of task proposition, task decomposition, skill retrieval,
and keeps a record of past successful and failed episodes.
* The embodiment module, which maintains a skill library consisting of the skills available to the embodiment. It will execute skills assigned by the curriculum in the environment, save episode data and report back success or failure. Finally it will trigger a low-level 'Actor-Critic' RL algorithm for learning (or improving) skills from the collected data.
* The analysis module, which monitors the training progress of skills, reports learning status and adds converged ones to the skill library of the corresponding embodiment.
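A minimal, illustrative sketch of how these three modules could interact is given below; all class and method names are hypothetical placeholders for the components described in the remainder of this section, and the actual implementation differs in its infrastructure:

```python
class CurriculumModule:
    """High-level reasoning, backed by a VLM (sketch; method bodies omitted)."""
    def propose_task(self, image, successes, failures): ...        # free-form task
    def decompose(self, task, image, skills): ...                   # list of free-text steps
    def retrieve(self, step, skills): ...                           # matching skill or None

class EmbodimentModule:
    """Skill execution and low-level policy training (sketch)."""
    def __init__(self, skill_library):
        self.skill_library = skill_library
        self.dataset = []
    def execute(self, skill_sequence): ...                          # returns True on success
    def train_policy(self): ...                                     # offline RL on self.dataset

class AnalysisModule:
    """Monitors learning curves of the trained skills (sketch)."""
    def converged(self, learning_curve): ...

def collection_round(curriculum, embodiment, get_image, n_plans):
    successes, failures = [], []
    for _ in range(n_plans):
        image = get_image()
        task = curriculum.propose_task(image, successes, failures)
        steps = curriculum.decompose(task, image, embodiment.skill_library)
        plan = [curriculum.retrieve(s, embodiment.skill_library) for s in steps]
        if any(skill is None for skill in plan):
            continue                       # a step has no matching skill; plan is discarded
        (successes if embodiment.execute(plan) else failures).append(task)
    embodiment.train_policy()              # launch the next policy-learning iteration
```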
§.§ The Curriculum Module
This module generates an auto-curriculum to guide automatic mastery of domains.
Each of its components (task proposition, task decomposition, skill retrieval) is realized by prompting the Gemini model.
In the following, we describe each component's prompts conceptually.
For concrete prompt designs used in our prototype implementation, see Appendix <ref>.
Task proposition.
We first prompt the VLM to propose, in free-form text, a new high-level task for the agent to accomplish, given a current image observation and past successes and failures.
The VLM is prompted to output tasks that are novel and diverse while not being too far from the current capability of the agent.
The proposition prompt is heavily inspired by the prompt used in Voyager <cit.>,
consisting of a description of the domain, the request for a proposal and matching reasoning leading to it, as well as a set of constraints and requirements.
This fixed prompt and a number of exemplars are followed by a current image of the domain, and a growing list of successfully and unsuccessfully completed proposals from the same experiment. We refer to the appendix for a description of the instruction and how exemplar images and success detection are included in the prompt.
Task decomposition.
Given a free-text high-level task proposition, the list of available skills and a current image observation, we then prompt the VLM to decompose the task into a list of sub-goals/sub-tasks, again described as free-form text without any restriction.
The decomposition prompt contains a general instruction and several exemplars, to which we concatenate the free-form description of the task from the proposition step, the encoding of a current image of the domain, and the skills currently available (note that these skills are naturally limited to those we can evaluate a reward for in our current implementation, and thus are fixed text strings rather than free-form text).
We note that although the fixed-text skills are provided to the decomposition prompt, the decomposition is not instructed to output steps using those fixed-text skills only. This is intentional: the model should come up with the steps that are necessary,
which may or may not be available (and thus we could use such steps as a proposal for learning a new skill that should be added to our library); the effect of the availability of the returned free-text skills is discussed further in the retrieval section.
Skill retrieval.
Given each decomposed free-text sub-task and the list of available fixed-text skills, we can finally retrieve the most semantically similar skill – from the available skills – to accomplish the desired sub-task.
Note that several previous works perform retrieval using the embeddings of the language instructions,
whereas we formulate it as a direct question answering (QA) task in text, which we find to be more robust.
The retrieval prompt consists of a general instruction that states the retrieval request and the constraint of not rephrasing the retrieved skill name but picking only among the available skills, followed by an exemplar, then appended with the current list of available fixed-text skills and the free-text decomposition step to map. The result of this step then is a plan of a sequence of skills from the library that can be executed by the agent.
In addition, if the retrieval of a step in the decomposition sequence fails,
this can serve as a signal that a new skill is required in the policy.
Such cases can then be included in the next round of policy learning by the embodiment module.
We note, however, that due to the lack of a system to generate rewards for arbitrary skills, in this work we manually perform the choice of skills to add; which also means that in the current implementation, the decomposition plan will be discarded if the retrieval of any of its steps failed.
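As a sketch of this QA formulation (the prompt wording and the query_vlm function are hypothetical stand-ins for the actual prompt and model call), retrieval can be implemented as a textual query whose answer is only accepted if it matches one of the available skills verbatim:

```python
def retrieve_skill(step, available_skills, query_vlm):
    """Map a free-text decomposition step to one of the fixed-text skills (sketch)."""
    prompt = (
        "Pick the skill from the library that best accomplishes the query step. "
        "Do not rephrase the skill name; answer only with one entry of the library.\n"
        f"Skill library: {available_skills}\n"
        f"Query step: {step}\n"
        "Answer:"
    )
    answer = query_vlm(prompt).strip()
    # Only accept answers that are verbatim members of the skill library;
    # otherwise signal failure, which marks the step as requiring a new skill.
    return answer if answer in available_skills else None
```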
§.§ The Embodiment Module
After obtaining a decomposed list of subgoals, the curriculum will communicate the task to the corresponding embodiment to execute.
After executing a sequence of skills, it judges the success of the sequence, i.e. whether the goal that led to the decomposition has been achieved, and reports the result back to the Curriculum module.
While determining the success of a sequence relies on pre-defined reward functions in our prototype system (see section <ref> for details),
it could, in theory, also draw upon LLM-based reward functions.
The module also collects all executed episodes in a dataset.
Once a stopping criterion has been reached – classically pre-defined as a certain number of episodes, but potentially also triggered by the Analysis module in section <ref> – a new policy learning iteration is launched with this dataset to fine-tune the previous policy; any offline policy learning algorithm could be used in this step and we refer to the next section for our specific exemplary choice.
For increased data efficiency, all of the episodes are re-labeled with the rewards of all of the skills currently known to the agent, including those newly added by the curriculum module described in section <ref>.
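A minimal sketch of this relabeling step is given below; the episode and reward-function representations are hypothetical placeholders for whatever the actual data pipeline uses:

```python
def relabel_episode(episode, reward_fns):
    """Annotate one episode with the rewards of all currently known skills (sketch).

    reward_fns maps a skill caption to a function computing a per-step reward
    from an observation; episode["observations"] is a list of observations.
    """
    episode["rewards"] = {
        skill: [fn(obs) for obs in episode["observations"]]
        for skill, fn in reward_fns.items()
    }
    return episode
```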
§.§ The Analysis Module
Finally, this module examines the learning progress of skills by few-shot prompting.
The prompt consists of a fixed prefix followed by a number of exemplars. The reward curve plot for each exemplar is plugged into the corresponding image placeholder, and the exemplar reasoning and answer are placed into the corresponding text fields.
For all skills for a certain embodiment,
the analysis module will periodically go through the learning curves of each of them.
Those that are judged as converged will be added to the available skills of that embodiment (and by extension, become available to the curriculum module for decomposition), and their training will be terminated.
We note that this imposes some constraints on the reward functions that can be used: they need to be normalized, and the episode duration of evaluation runs needs to be known, in order to allow meaningful scaling of the curves for analysis.
§ SYSTEM REALIZATION
In order to explore the feasibility of the system, we implement its components, and apply them to a simulated robotic manipulation task.
§.§ Module Interaction
The curriculum module periodically retrieves images from the environment, and includes them in the goal proposal prompt.
The goal is then decomposed into steps and skill captions are retrieved.
If any of the steps cannot be mapped to a known skill during retrieval, the plan is discarded, and the process repeated.
If all steps are retrieved, the skill sequence is sent to the embodiment module, which uses them to condition a text-conditioned learned policy; we use the perceiver-actor-critic (PAC) algorithm <cit.> to learn and represent such policies.
The program flow is controlled by the curriculum module: after each decomposition, all of the potentially multiple embodiment instances perform a fixed number of episode rollouts, with the skill being changed at fixed (pre-defined) intervals.
We acknowledge that this approach only applies for quasi-static domains like the object arrangement tasks considered here.
For more dynamic domains, it is necessary to also condition the model to return a duration for each skill, or to continuously query it as to whether to switch skills at a given point in time.
At the end of each rollout, the embodiment module reports whether the plan was successfully executed. Success here is defined as observing a reward >0.5 for each executed skill, and >0.95 for the last skill in the sequence;
all skills in the proposed sequence must be completed to qualify as success.
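The success criterion can be summarised by the following small helper (a sketch; it assumes that one scalar reward per executed skill is available, e.g. the final reward observed while that skill was active):

```python
def plan_successful(skill_rewards):
    """Success of an executed plan, given one reward per executed skill (sketch).

    Every skill in the sequence must reach a reward above 0.5, and the last
    skill in the sequence must reach a reward above 0.95.
    """
    if not skill_rewards:
        return False
    return all(r > 0.5 for r in skill_rewards) and skill_rewards[-1] > 0.95
```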
The curriculum module includes these success reports in its list of successful and unsuccessful plans, for use in subsequent prompts.
We use a chat-based interface between these modules, similar to that used by <cit.>.
This allows easily connecting them in a natural interface, which also facilitates human introspection and intervention during testing.
Modules simply join a Google Meet session, and interact with each other via chat messages, as well as streaming image and video data through it.
Messages can be broadcast, enabling a single high-level VLM to control the skills of multiple low-level policies at the same time, thus increasing compute efficiency in the face of otherwise expensive queries to the VLM.
The setup is illustrated in Figure <ref>.
The analysis module is used outside of the experiment loop in this prototype.
Rather than actually stopping the experiment, we run it after the experiment has concluded, so that we can evaluate whether the termination point chosen by it was indeed the point of convergence.
§.§ Policy Training
For the low-level control policy, we employ a Perceiver-Actor-Critic (PAC) model <cit.>.
Such a model has been shown to be trainable via offline reinforcement learning, can be text conditioned, and is able to utilize non-expert (exploration) data that our agent will generate.
This in turn allows us to additionally relabel all data with multiple reward functions, and thus reuse a small amount of data to train all desired skills.
In PAC, skills can be represented by either conditioning the policy on language, on goal images, or a mixture thereof. Here, we purely opt for language, as this allows us to directly communicate the high-level system's skill proposals to the low-level policy.
§.§ Prompting
The high-level system is represented by a standard Gemini 1.5 Pro model <cit.>.
To design the prompts for the Gemini model, we use the publicly available
OneTwo Python library <cit.>.
OneTwo is a model-agnostic layer that abstracts aspects such as injecting components into a VLM prompt template, and extracting pre-defined fields from the model's response.
Each component's prompt contains a small number of exemplars which were hand-designed and include image data from previous experiments.
This includes 2 each for proposal and decomposition, 1 for retrieval, and 6 for analysis.
It is also worth noting that none of the proposal exemplars contain a scene with three objects, unlike in the domain we apply it to, in order to not bias the responses.
All exemplars used are provided in Appendix <ref>
§ EXPERIMENTAL RESULTS
§.§ Benchmark
To evaluate the benefits of our approach, we consider a robotic block stacking task, previously described in <cit.>.
In this task, three colored objects in a basket need to be arranged into a desired configuration by a 7-DoF Franka Panda robot fitted with a Robotiq 2F-85 parallel gripper.
The domain is implemented in the MuJoCo simulator <cit.>.
This task was chosen since it provides combinatorial complexity, which lends itself to building up more complex skills, yet is also narrow enough to allow automatic evaluation and manual reward design.
More specifically, our expectation with this domain is for the Gemini-aided auto-curriculum to be able to lead the agent to automatically discover and learn the object configurations such as tower and pyramid, which were previously manually designed by human researchers.
§.§ Auto-curriculum-based Exploration
To examine the ability of the system to perform task proposal and decomposition, we first train a PAC model to perform a number of simple base skills for the curriculum module to utilize.
Note that the framework also allows for the agent to learn from scratch, but here as a proof of concept, we start with a base set of skills to allow for faster learning iterations.
We use a pre-existing dataset of approximately 1M episodes collected from a single-task RL experiment, where an MPO agent <cit.> was trained to perform the different permutations of stacking a single object on top of another.
The data is re-labeled with reward functions corresponding to a set of basic skills,
including opening and closing the gripper, reaching an object, lifting an object, holding one object over another, and stacking them. For a full list see Appendix <ref>.
We then train a PAC model with 140M parameters for 1.5M steps, after which performance has stabilized for all skills.
We then use this fixed policy to perform Gemini-driven data collection, following the approach described in Section <ref>.
As this data is intended for further self-improvement training, we roughly follow the CHEF approach <cit.> of performing a hyperparameter search to aid diversity.
However, we do not vary the parameters of the slow PAC training, but instead explore different settings for the curriculum module.
Firstly, we vary the sampling temperature of the VLM, using both 0.0 and 0.3.
Secondly, we perform collection runs with different sets of skills made available to the agent: either all of the skills including the composite stack A on B, or only simpler ones up to hold A above B.
In each run, the curriculum module controls 10 simulator instances in order to parallelize data collection.
Each skill proposal and decomposition sequence is also executed 5 times per robot to reduce querying load on the VLM.
Decomposed plans are executed open-loop, in the sense that each skill in the sequence is maintained for a fixed duration of 20 seconds before switching to the next one.
In this manner, we collect a set of 25k robot episodes in total.
§.§.§ Data Diversity
First, we compare the dataset used for pre-training the PAC policy (which we refer to as the pretraining set) with the new dataset collected by our method (the collected set), using a distance metric similar to <cit.>.
We do this separately for camera images and proprioception data (i.e. joint angles).
For camera images, we use a CoCa image embedding <cit.>; for proprioception, we use the non-embedded observations, and normalize them first along each dimension and then overall for unit norm.
Then we measure the relative L2 distance of these representations to each other, as well as the distance of each to their respective cluster in a k-means clustering with 5 clusters (where the clusters were learned on the pretraining set).
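The following sketch (using numpy and scikit-learn; the embedding step itself is assumed to be given) illustrates how these two diversity measures can be computed:

```python
import numpy as np
from sklearn.cluster import KMeans

def diversity_metrics(embeddings, reference, k=5):
    """Mean pairwise L2 distance and mean distance to k-means cluster centers (sketch).

    embeddings: (N, D) array of embedded observations from the dataset to score.
    reference:  (M, D) array on which the k-means clusters are fit.
    """
    # Mean pairwise L2 distance within the dataset (diagonal contributes zero).
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    mean_l2 = pairwise.sum() / (len(embeddings) * (len(embeddings) - 1))

    # Mean distance of each point to its assigned cluster center,
    # with the clusters learned on the reference dataset.
    km = KMeans(n_clusters=k, n_init=10).fit(reference)
    centers = km.cluster_centers_[km.predict(embeddings)]
    mean_cluster = np.linalg.norm(embeddings - centers, axis=-1).mean()
    return mean_l2, mean_cluster
```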
Table <ref> highlights how data in the collected set appears to be more spread out.
The diversity in vision and proprioception data can be taken to be directly beneficial for self-improvement.
Separately, we also compare the diversity in the embeddings of the language instruction of the skills executed throughout the episodes.
We pass these through the embedding available via an older, text-only Gemini model, and contrast the diversity of the pretraining set with the combined one used for fine-tuning.
We observe an L2 distance of 0.287 and a cluster distance of 0.097 for the pretraining set, vs. 0.555 L2 and 0.732 cluster for the combined set.
This matches our expectations: given that the original diversity was low (with only 6 permutations of the form "stack A on B"), the collected set not only executes more diverse skills, but also multiple per episode.
When inspecting the data visually,
it also becomes apparent that it generates more complex object arrangements.
For instance, we find multiple attempts to build a tower, as well as pyramid-like structures – the latter of which result e.g. from failed tower building attempts, as the model does not propose pyramid-building itself.
It is also worth noting that the proposals generated by the model are in fact fairly focused, and mostly cover plans such as stack A on B and stack X on Y, as well as put A next to B, for only 27 unique proposals. However, the decomposition module expands these into 102 unique skill sequences.
For instance, decomposing the task of building a red-blue-green tower at different times results in two plans with the same outcome but different skill sequences:
reach blue, lift blue, above green, open gripper, reach red, lift red, above blue, open gripper
reach blue, lift blue, reach green, above green, open gripper, reach red, lift red, reach blue, above blue, open gripper
Additional examples, both successful and failed, are provided in Appendix
<ref>.
§.§.§ Self-Improvement
In addition to quantifying the quality of the collected data, we also use it to perform a round of self-improvement of the pretrained PAC policy.
For this, we introduce three new skills into the set learned by the model that were not available in the skill library given to the curriculum module for data collection: arranging the three objects into a pyramid shape, arranging them in an inverted pyramid, and arranging them to form a tower of three objects.
We manually chose these for being the same as previously used as benchmark by <cit.>;
while the curriculum module frequently suggests tower building, it does not suggest the pyramid tasks during exploration.
The rewards for these tasks are defined in Appendix <ref>.
Data is relabeled with these new rewards in addition to the existing ones.
We then fine-tune the PAC model with two datasets: once using only the original pretraining dataset, and once using the combined pretraining and collected sets.
In the latter case, given that the collected set is much smaller than the pretraining set, we up-sample it so that both datasets contribute 50% of each training batch.
We also up-sample the newly added pyramid-building skill, to in turn account for 50% of the data from each dataset.
Figure <ref> compares the performance of these datasets on a selection of skills.
As is evident from these results, not only does the added data allow the model to learn the "pyramid" skills, but it also leads to better performance on the base skills.
It is worth noting that none of the policies learns to perform tower building; this is due to the low success rate of the pretrained PAC policy when sequencing multiple skills in order to attempt stacking (since this leads to visiting states that are not represented in the original data).
Failed tower building does often lead to "accidental" creation of pyramids however, which explains the better performance on those tasks.
We therefore point out that it seems sensible to separate the proposal of new tasks to learn from the proposal of tasks used during data collection.
Finally, using the skills resulting from learning on the combined datasets we perform one additional iteration: we subsequently collect 15k more episodes with the newly trained pyramid building skills added to the skill library; and thus available to the curriculum module.
We again up-sample data so as to weight all three data sources equally.
When using this data to fine-tune the PAC policy once more, performance for these skills increases substantially, as also seen in Figure <ref>.
§.§ VLM-based Performance Analysis
During the initial PAC policy training, we trained the model for approximately 1.5M learner steps.
After running that long, we observe a degeneration of performance, particularly for "simpler" skills, which can be attributed to overfitting.
Normally, a human RL experimenter would employ early stopping to avoid such effects, and stop the experiment once the learning curves for all skills appear to have converged.
Here, we use Gemini to judge the convergence state of the experiment post-hoc after the training has concluded and run for an extended number of steps, in order to determine the point at which the model would have proposed early stopping.
All learning curves are scaled to a maximum reward of 400, which is known since rewards are clipped to [0; 1] and evaluation episodes do not exceed 400 steps.
The analysis model is not otherwise informed regarding the expected total reward of each skill.
Figure <ref> illustrates a selection of these judgments.
While these judgments are not fully stable, and false ones do occur, the VLM judges an increasing number of skills as converged as training progresses.
Evident errors occur mostly when judging early plateaus in the learning progress (e.g. place blue on green at 300k steps or stack red on green at 400k steps) – a limitation that would similarly affect a human practitioner if not aware of the expected final reward.
Other unstable classifications involve irregular curves such as those of lift red.
Overall, the ratings reflect both the increasing performance of the skills over time, as well as their relative difficulty, as illustrated in Figure <ref>, where easier skills can be seen to be judged as converged from early on, while harder ones only get judged as such later on average.
§.§ Progressively Adding Skills
A second purpose of the analysis module lies in determining which skills are trained sufficiently to be used for decomposition.
In this work, we first trained the PAC policy to convergence, before starting curriculum-driven data collection.
But generally, these two processes can be performed concurrently.
In order to illustrate the curriculum module's ability to work with a growing set of skills, we therefore examine some of the plans generated when using those skills judged as converged in section <ref> at certain points in time.
We examine the proposals and decompositions at four points of the experiment: with those skills judged successful after 200k, 500k and 800k learner updates in the first PAC training experiment, as well as the entire set of skills added for self-improvement.
An overview of the model responses is provided in Table <ref>.
For more detailed outputs of the model, including the reasoning provided by the model for each response, see Appendix <ref>.
We see that after both 200k and 500k steps, the proposition yields the same simple goal.
But while after 200k steps the system has to use the most basic skills for decomposition, it employs the more reliable reach green and grasp anything skills at 500k steps.
At 800k steps, when all skills are available, it generates more complex propositions, and directly uses the higher-level stacking skills.
And with the fine-tuning skills included, the model attempts to arrange the objects into a line, which resolves into building a tower – i.e. a vertical line.
§ DISCUSSION AND FUTURE WORK
We have outlined an agent architecture for reinforcement learning that uses a VLM to perform various capabilities normally expected of a human experimenter in Section <ref>.
These capabilities would allow automating the training process of the agent beyond current capabilities, and let an embodied agent autonomously acquire an ever-growing set of skills with ever-increasing mastery.
We implemented and evaluated a first prototype of such a system in Section <ref>, including the functionalities of proposing new tasks for exploration, decomposing them into skill sequences, and analyzing the progress of the learning experiment.
For this first proof-of-concept system, we simplified several of the components and their interaction.
This was done both to limit the scope of this study, but also in order to focus on determining whether state-of-the-art methods and models are able to perform the required capabilities – particularly when used zero-shot, without costly fine-tuning of the VLM.
The prototype system showed the ability to automatically collect diverse data, which was successfully used to perform self-improvement of the control policy, and to learn new skills not learnable with a more narrow starting set (section <ref>).
The curriculum also displayed the ability to adapt the complexity of its task propositions and plans to the complexity of the available skills.
Going forward, we intend to reduce the simplifications made for the prototype system and strive for full automation, with several natural next steps outlined in the following.
In our prototype implementation, the analysis of learning progress was performed post-hoc, to illustrate the quality of analysis at different stages.
While the quality of judgments of experiment convergence was not fully reliable and suffered from the same uncertainty at plateaus that a human does, it did show potential to make correct judgments when aggregated over time.
In addition, the successful application of the system to self-improvement of the policy means that even if prematurely terminating the training of skills at plateaus, training of these skills can continue in subsequent self-improvement rounds.
In the future, we would therefore seek to integrate it directly into the automation and allow it to stop the experiment.
In the future, we also plan to include LLM-based reward functions into the architecture, once techniques are mature enough.
While currently it is too easy for RL agents to exploit false reward detections, recent advances such as Eureka <cit.> promise zero- or few-shot LLM-based rewards without the need to train specialized models.
This provides a natural next capability of the curriculum module to integrate: to have the system automatically add its proposals as new skills.
It will also allow adding the one functionality of the curriculum module we left out thus far: to automatically add proposed actions as new skills – which requires the PAC training to be able to label datasets with rewards matching those proposals.
Related to this, we currently simply discard decomposed sequences if any of the steps cannot be mapped to a skill known by the policy.
But encountering an unknown step provides a strong signal that there is a skill missing, and a natural next step is to include it in the next PAC training cycle as a base skill.
However, doing so would again require a universal reward module, which is not presently available.
During evaluation of the PAC policies, we observed that skills are often not sequenced correctly, e.g. after completing skill stack red on blue, the policy may not perform stack green on red successfully, even though it can perform it in isolation.
This may be either attributed to incomplete separation between skills, or as insufficient data coverage; generally, the base dataset would never have observed the terminal state of one skill as the initial state of another.
This can cause otherwise sensible plans generated by the decomposition module not to achieve task success, which in turn causes the stack X on Y and Z on X skills to never achieve non-zero performance after the PAC finetuning.
We hypothesize that this may be rectified through repeated self-improvement, as the skill sequencing would generate more diverse data, and/or by reducing the weight of the narrowly biased set.
However, such extended data collection was not feasible in this work.
During episode unrolls, we executed the skill sequence open-loop, maintaining each skill for a fixed amount of time.
This was justified by the fact that the domain is largely static, but we do note that for more dynamic domains, it would be necessary to have the decomposition provide a duration for each skill, or determine the switching point dynamically.
The latter is infeasible for Gemini-sized models due to limited inference speed, but is expected to become possible in the future, as smaller yet equally capable models become available.
We thank Ksenia Konyushkova for suggestions on prompting, Pierre Sermanet for laying the foundations of the technical infrastructure, and Jonas Adler for general discussions of the research field.
§ PROMPT DESIGN
Below are the concrete prompts used in our prototype system.
These contain the static parts of the prompts and the format of exemplars.
For actual exemplars used, see Appendix <ref>.
§.§ Task proposition
We note that this prompt is heavily inspired by the curriculum prompt in Voyager <cit.>.
Each exemplar is given in the format of:
The evaluation content at run time will then be given in the format of:
And the model will respond in the following format:
§.§ Task decomposition
Each exemplar is given in the format of:
The evaluation content at run time will then be given in the format of:
And the model will respond in the following format:
§.§ Skill retrieval
Each exemplar is given in the format of:
The evaluation content at run time will then be given in the format of:
And the model will respond in the following format:
§ PROMPT EXEMPLARS
This section provides the concrete exemplars that are in the prompt templates described above when querying the model.
§.§ Proposition
1
[open gripper, grasp the red object]
[]
I see one object: red. All possible spatial structures that can be built with it: move the red object to a desired 3D point. Previously, the robot has successfully grasped the red object, so to reach a different spatial structure, you can try move it to a different planar position or lift it up.
lift the red object up
2
[open gripper, grasp the red object, lift the red object up]
[stack the green object on top of the red object]
I see two objects: red, green. All possible spatial structures that can be built with them: a line where the two objects are placed next to each other; two dots where the two objects are apart from each other; a two-level tower with one object on top of another. a two-level slanted tower with one object on top of another but not aligned at the center. Previously, the robot has successfully manipulated the red object but not the green one, and it also has not built any structure with both two objects, so trying to build a two-level tower by stacking the red object on top of the green object should be both interesting and feasible to try now.
stack the red object on top of the green object
§.§ Task decomposition
1
put the red object next to the blue object
[open gripper, grasp the red object, lift the red object up, stack the red object on top of the blue object]
All three objects are on the bottom of the basket. The robot is able to stack the red object on top of the blue object but does not know how to put the red one next to the blue one. However, the task might be accomplished by first stacking the red on blue, then lift up the red object and open the gripper, in which case the red object should drop not too far from the blue object.
[stack the red object on top of the blue object, lift the red object up, open gripper]
2
build a three-level tower
[open gripper, grasp the blue object, stack the blue object on top of the green object, grasp the red object, lift the red object up, stack the red object on top of the blue object]
The blue object is stacked on top of the green one. To build a three-level tower, I just need to stack the red object on top of the blue object, which I happen to know how to.
[stack the red object on top of the blue object]
§.§ Retrieval
1
stack red on blue
[open gripper, grasp the red object, lift the red object up, put the red object on top of the blue one]
The skill in query asks for a object configuration where the red object is on top of the blue one. Except for the 4th one, all other skills in the library concerns with the red object only, while the 4th one not only concerns the red and the blue object but also matches the desired configuration.
put the red object on top of the blue one
§.§ Analysis
1
The learning curve is still going up, the learning has not converged yet.
NO
2
The learning curve has reached a plateau and the performance has stabilized, the training has converged.
YES
3
The learning curve has reached a plateau and the performance has stabilized, the training has converged.
YES
4
The learning curve has peaked and is degenerating again, the training had converged already.
YES
5
The learning curve had degenerated after an earlier peak but is going up again, the training has not converged yet.
NO
6
There is no full learning curve yet, the training has not converged yet.
NO
§ RESPONSE EXAMPLES
To illustrate the type of responses generated by the model, we present a number of examples from the beginning of a self-improvement run.
In the absence of successful trials, the very first proposal is to perform a tower building task.
EX1
[grasp anything, open gripper, close gripper, reach red, reach green, reach blue, above red, above green, above blue, lift red, lift green, lift blue, stack red on green, stack red on blue, stack green on red, stack green on blue, stack blue on red, stack blue on green]
[]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with one object on top of another, and the third one on the side; a three-level tower with one object on top of another; a three-level slanted tower with one object on top of another but not aligned at the center. Previously, the robot has successfully built two-level towers, so trying to build a three-level tower should be both interesting and feasible to try now.
build a three-level tower with blue on top of green on top of red
Decomposition of this task illustrates the potential impact of the model's sampling temperature; if we sample multiple decompositions in the same situation (but with different images) at a high temperature of 1.0, the model produces plans of differing complexity.
EX2
build a three-level tower with blue on top of red on top of green
[grasp anything, open gripper, close gripper, reach red, reach green, reach blue, above red, above green, above blue, lift red, lift green, lift blue, stack red on green, stack red on blue, stack green on red, stack green on blue, stack blue on red, stack blue on green]
All three objects are on the bottom of the basket. To build a three-level tower with blue on top of red on top of green, I need to first stack red on green, then stack blue on red.
[reach green, above green, grasp anything, close gripper, lift green, reach red, above red, stack green on red, open gripper, reach blue, above blue, grasp anything, close gripper, lift blue, reach red, above red, stack blue on red, open gripper]
EX3
build a three-level tower with blue on top of red on top of green
[grasp anything, open gripper, close gripper, reach red, reach green, reach blue, above red, above green, above blue, lift red, lift green, lift blue, stack red on green, stack red on blue, stack green on red, stack green on blue, stack blue on red, stack blue on green]
All three objects are on the bottom of the basket. To build a three-level tower with blue on top of red on top of green, I need to first stack red on green, then stack blue on red.
[stack red on green, stack blue on red]
The same task is also proposed later, after collecting several trials, with the reasoning that it failed several times and should thus be practiced. Note that if the same trial is attempted multiple times, it is combined and denoted with a success count, e.g. "(x2)", for brevity; in the actual prompt, these are expanded into a list.
EX4
[grasp anything, open gripper, close gripper, reach red, reach green, reach blue, above red, above green, above blue, lift red, lift green, lift blue, stack red on green, stack red on blue, stack green on red, stack green on blue, stack blue on red, stack blue on green, build a three-level tower with blue on top of green on top of red (x2), build a three-level tower with blue on top of red on top of green (x2)]
[build a three-level tower with blue on top of green on top of red (x2), build a three-level tower with blue on top of red on top of green (x5)]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with one object on top of another, and the third one on the side; a three-level tower with one object on top of another; a three-level slanted tower with one object on top of another but not aligned at the center. The robot has successfully built three-level towers but failed many times, so it needs more practice.
build a three-level tower with blue on top of red on top of green
A common failure case involves the model incorrectly injecting a description of the scene into the answer field of the response. This causes the OneTwo library to fail to parse the response even when it contains a task proposal, which in turn causes the plan to be rejected.
EX5
build a two-level slanted tower with blue on top of red
[grasp anything, open gripper, close gripper, reach red, reach green, reach blue, above red, above green, above blue, lift red, lift green, lift blue, stack red on green, stack red on blue, stack green on red, stack green on blue, stack blue on red, stack blue on green]
All three objects are on the bottom of the basket. The robot can stack blue on red to build a two-level tower, but it might not be slanted. To make it slanted, the robot can first lift the red object up, then move above blue and drop it.
All three objects are on the bottom of the basket
§ PAC SKILLS AND REWARD FUNCTIONS
The reward functions used to train the PAC policy generally follow those used in <cit.>.
However, we use lower-level skills: instead of the strongly shaped and staged "stack and leave" reward used there, we only use its atomic components, since we would hope for our system to "discover" on its own the heavily engineered, human-provided composite function of prior work. Thus our skill library consists of the following (where X, Y and Z are placeholders for all possible permutations of red, blue and green; an illustrative code sketch of these reward terms is given after the list below):
open_gripper: Shaped; 0 if the gripper is closed, 1 if it is maximally opened.
close_gripper: Shaped; inverse of open_gripper.
grasp_anything: Binary; 1 if the gripper's grasp sensor is triggered, 0 otherwise.
reach_X: Shaped; tangentially decaying distance between the robot's TCP and the center of object X.
above_X: Shaped; tangentially decaying distance between the robot's TCP and a point 10cm above the center of X.
lift_X: Shaped; 0 if the center of X is less than 5cm above the workspace surface, 1 if more than 10cm above, linearly interpolated between those limits.
place_X_Y: Shaped; tangentially decaying distance between the center of X and a point 4cm above Y.
stack_X_Y: Shaped; place_X_Y, but set to 0 if grasp_anything is non-zero.
During the self-improvement experiments, we add three more skills, composed of the above:
stack Y on Z and X on Y: Product of stack_X_Y and stack_Y_Z (a three-level tower with X on Y on Z).
build a pyramid with X on top and Y and Z at the bottom: Product of stack_X_Y and stack_X_Z.
build an inverted pyramid with X and Z at the top and Y at the bottom: Product of stack_X_Y and stack_Z_Y.
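The following Python sketch illustrates these reward terms. The tanh-based decay and its length scale are assumptions (the text only specifies a "tangentially decaying" distance), whereas the 10cm above-offset, the 5cm to 10cm lift interpolation, the 4cm stacking offset, the grasp gating and the product composition follow the descriptions above; the function names are illustrative rather than the identifiers used in our codebase.

import numpy as np

def tanh_decay(distance, scale=0.1):
    # Assumed functional form of the "tangentially decaying" distance shaping.
    return 1.0 - np.tanh(distance / scale)

def reach(tcp, obj):
    # Distance between the TCP and the object centre.
    return tanh_decay(np.linalg.norm(tcp - obj))

def above(tcp, obj):
    # Distance between the TCP and a point 10cm above the object centre.
    return tanh_decay(np.linalg.norm(tcp - (obj + np.array([0.0, 0.0, 0.10]))))

def lift(obj, surface_z=0.0):
    # 0 below 5cm above the workspace surface, 1 above 10cm, linear in between.
    return float(np.clip((obj[2] - surface_z - 0.05) / 0.05, 0.0, 1.0))

def place(obj_x, obj_y):
    # Distance between the centre of X and a point 4cm above Y.
    return tanh_decay(np.linalg.norm(obj_x - (obj_y + np.array([0.0, 0.0, 0.04]))))

def stack(obj_x, obj_y, grasp_sensor):
    # place_X_Y, zeroed while the grasp sensor is still triggered.
    return 0.0 if grasp_sensor > 0 else place(obj_x, obj_y)

def tower(obj_x, obj_y, obj_z, grasp_sensor):
    # Composite skill as a product, e.g. a three-level tower with X on Y on Z.
    return stack(obj_x, obj_y, grasp_sensor) * stack(obj_y, obj_z, grasp_sensor)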
§ PROGRESSIVE SKILL ADDITION
Below are the detailed model responses when examining proposal and decomposition at different steps of the pre-training process in section <ref>.
We note that this is a post-hoc analysis, as the system had all skills available during the data collection experiments we performed.
Thus we cannot collect genuine lists of successful and failed tasks for proposition. Therefore, for all the following rounds,
we use all skills available at the corresponding training step in the skill library as the list of successful tasks, and use an empty list as the list of failed tasks for proposition.
§.§ 200k Learner Steps
Skill library:
[open gripper, close gripper, above red, above green, above blue, reach red]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with one object on top of another; a three-level tower with one object on top of another; a two-level slanted tower with one object on top of another but not aligned at the center; a three-level slanted tower with one object on top of another but not aligned at the center. Previously, the robot has successfully reached the red object but has not grasped any object yet, so trying to grasp an object should be both interesting and feasible to try now.
grasp the green object
grasp the green object
[open gripper, close gripper, above red, above green, above blue, reach red]
The robot does not know how to grasp the green object directly. However, it can move its gripper above the green object and then close the gripper, which should result in grasping the green object.
[above green, close gripper]
above green
[open gripper, close gripper, above red, above green, above blue, reach red]
The query concerns with a configuration where something is above the green object. All skills except above green are irrelevant.
above green
close gripper
[open gripper, close gripper, above red, above green, above blue, reach red]
The query skill concerns with the gripper only. The second entry in the library is the only one that concerns with the gripper.
close gripper
§.§ 500k Learner Steps
Skill library (added on top of 200k steps):
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with one object on top of another, and the third one on the side; a three-level tower with one object on top of another; a three-level slanted tower with one object on top of another but not aligned at the center. Previously, the robot has successfully manipulated the red object but not the green and blue ones, so trying to grasp the green object should be both interesting and feasible to try now.
grasp the green object
grasp the green object
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red]
The robot can reach above the green object, reach the green object, and grasp anything. Therefore, the robot can grasp the green object.
[The robot can reach above the green object, reach the green object, and grasp anything]
The robot can reach above the green object
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red]
The query concerns the robots capability of reaching a location above the green object. All skills in the library except above green are about robot actions, while above green describes a location relative to the green object.
above green
reach the green object
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red]
The query asks for reaching the green object. All skills except reach green are either irrelevant or describe a different object.
reach green
and grasp anything
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red]
The query skill does not induce any specific object configuration but asks for a grasping action. The only skill in the library that involves grasping is grasp anything.
grasp anything
§.§ 800k Learner Steps
Skill library (added on top of 500k steps):
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with one object on top of another, and the third object next to the tower; a three-level tower with one object on top of another; a three-level slanted tower with one object on top of another but not aligned at the center. The robot has successfully built all two-level towers, so trying to build a three-level tower should be both interesting and feasible to try now.
build a three-level tower with blue on top of red on top of green
build a three-level tower with blue on top of red on top of green
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green]
All three objects are on the bottom of the basket. To build a three-level tower with blue on top of red on top of green, I need to stack red on green first, then stack blue on red.
[stack red on green, stack blue on red]
stack red on green
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green]
The query asks for a specific object configuration where the red object is on top of the green one. All skills except those related to stack only involve individual object manipulation. The desired configuration is specifically achieved by stack red on green.
stack red on green
stack blue on red
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green]
The query asks for a specific object configuration - blue on top of red. All other skills either manipulate a single object or create different configurations. Only stack blue on red matches the desired outcome.
stack blue on red
§.§ After Self-improvement
Skill library (added on top of 800k steps):
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green, stack green on blue and red on green, stack blue on green and red on blue, stack red on green and blue on red, stack green on red and blue on green, stack blue on red and green on blue, stack red on blue and green on red, build a pyramid with red on top and green and blue at the bottom, build a pyramid with red on top and blue and green at the bottom, build a pyramid with green on top and blue and red at the bottom, build a pyramid with green on top and red and blue at the bottom, build a pyramid with blue on top and red and green at the bottom, build a pyramid with blue on top and green and red at the bottom, build an inverted pyramid with green and blue at the top and red at the bottom, build an inverted pyramid with blue and green at the top and red at the bottom, build an inverted pyramid with blue and red at the top and green at the bottom, build an inverted pyramid with red and blue at the top and green at the bottom, build an inverted pyramid with red and green at the top and blue at the bottom, build an inverted pyramid with green and red at the top and blue at the bottom]
I see three objects: red, green, blue. All possible spatial structures that can be built with them: a line where the three objects are placed next to each other; three dots where the three objects are apart from each other; a two-level tower with two objects on top of one another; a three-level tower with three objects on top of one another; a two-level slanted tower with two objects on top of one another but not aligned at the center; a three-level slanted tower with three objects on top of one another but not aligned at the center. The robot has already built all possible two-level and three-level towers, including slanted ones. The robot has not yet built a line with all three objects.
build a line with the red object next to the green object and the blue object next to the green object
build a line with the red object next to the green object and the blue object next to the green object
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green, stack green on blue and red on green, stack blue on green and red on blue, stack red on green and blue on red, stack green on red and blue on green, stack blue on red and green on blue, stack red on blue and green on red, build a pyramid with red on top and green and blue at the bottom, build a pyramid with red on top and blue and green at the bottom, build a pyramid with green on top and blue and red at the bottom, build a pyramid with green on top and red and blue at the bottom, build a pyramid with blue on top and red and green at the bottom, build a pyramid with blue on top and green and red at the bottom, build an inverted pyramid with green and blue at the top and red at the bottom, build an inverted pyramid with blue and green at the top and red at the bottom, build an inverted pyramid with blue and red at the top and green at the bottom, build an inverted pyramid with red and blue at the top and green at the bottom, build an inverted pyramid with red and green at the top and blue at the bottom, build an inverted pyramid with green and red at the top and blue at the bottom]
The robot can stack green on blue and red on green in one go, which will result in the desired configuration.
[stack green on blue and red on green]
stack green on blue and red on green
[open gripper, close gripper, above red, above green, above blue, reach red, reach green, reach blue, grasp anything, lift red, hold red over green, hold red over blue, hold green over blue, hold green over red, hold blue over red, hold blue over green, stack red on green, stack red on blue, stack green on blue, stack green on red, stack blue on red, stack blue on green, stack green on blue and red on green, stack blue on green and red on blue, stack red on green and blue on red, stack green on red and blue on green, stack blue on red and green on blue, stack red on blue and green on red, build a pyramid with red on top and green and blue at the bottom, build a pyramid with red on top and blue and green at the bottom, build a pyramid with green on top and blue and red at the bottom, build a pyramid with green on top and red and blue at the bottom, build a pyramid with blue on top and red and green at the bottom, build a pyramid with blue on top and green and red at the bottom, build an inverted pyramid with green and blue at the top and red at the bottom, build an inverted pyramid with blue and green at the top and red at the bottom, build an inverted pyramid with blue and red at the top and green at the bottom, build an inverted pyramid with red and blue at the top and green at the bottom, build an inverted pyramid with red and green at the top and blue at the bottom, build an inverted pyramid with green and red at the top and blue at the bottom]
The query asks for a three-object configuration with green on blue and red on green. This configuration is exactly what the 22nd skill in the library describes.
stack green on blue and red on green
|
http://arxiv.org/abs/2409.02445v1 | 20240904045500 | Energy and helicity evolution in a flux emergence simulation | [
"K. Moraitis",
"V. Archontis",
"G. Chouliaras"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Physics Department, University of Ioannina, Ioannina GR-45110, Greece; School of Mathematics and Statistics, St Andrews University, St Andrews, KY16 9SS, UK
The main aim of this work is to study the evolution of the recently introduced relative helicity of the magnetic polarity inversion line (PIL) in a magnetohydrodynamics simulation.
The simulation used is a typical flux emergence simulation in which there is additionally an oblique, pre-existing magnetic field. The interaction of the emerging and ambient fields produces intense coronal activity, with four jets standing out. The 3D magnetic field allows us to compute various energies and helicities, and to study their evolution during the simulation, especially around the identified jets. We examine the evolution of all quantities in three different regions: in the whole volume, in three separate subvolumes of the whole volume, and in a 2D region around the PIL on the photosphere.
We find that the helicities are in general more responsive to the jets, followed by the free energy. The eruptivity index, the ratio of the current-carrying helicity to the relative helicity, does not show the typical behaviour it has in other cases, as its variations do not follow the production of the jets. By considering the subvolumes we find that the magnetic field gets more potential and less helical with height. The PIL relative helicity confirms the recent results it showed in observed active regions, exhibiting stronger variations during the jets compared to the standard relative helicity. Moreover, the current-carrying helicity around the PIL has a similar behaviour to the PIL relative helicity, and so this quantity could be equally useful in solar eruptivity studies.
Energy and helicity evolution in a flux emergence simulation
K. Moraitis^1 V. Archontis^1 G. Chouliaras^2
Received ... / Accepted ...
============================================================
§ INTRODUCTION
Solar coronal jets are transient, energetic phenomena of the Sun that power the solar wind and are also related to coronal heating. Coronal jets are collimated plasma ejections with a characteristic inverse Y shape that are observed in extreme ultraviolet and X-rays, mostly in coronal holes. They typically occur with a frequency of 2-3 per hour, lifetimes of 10 mins, velocities around 200 km s^-1, and heights of up to a few 100 Mm <cit.>. Jets are often helical, and also show evidence of untwisting <cit.>. The standard picture we have for the formation of jets is that a small parasitic polarity grows inside a larger-scale, pre-existing magnetic field of the opposite polarity, and this leads to magnetic reconnection between the two systems <cit.>. Jets can be divided into standard jets and blowout jets <cit.>, depending on whether the smaller bipole is eruptive or not.
The formation of coronal jets can be modelled by different types of magnetohydrodynamics (MHD) simulations. A first class of jet experiments is when magnetic flux emerges from below the photosphere and collides with a pre-existing coronal field <cit.>. Another class of experiments is when an initial magnetic configuration containing a null point is slowly driven to instability by continuous photospheric motions <cit.>. In both cases, the jet is produced by magnetic reconnection when the opposite-polarity fields meet.
A physical quantity that is often examined in MHD simulations in general, and in jet simulations in particular, as it is related to the twist, is magnetic helicity. The significance of magnetic helicity is evident, as it is one of the three conserved quantities of ideal MHD <cit.>. In astrophysical applications, the appropriate form it has is given by relative helicity, which is expressed with the help of a reference magnetic field <cit.>. Moreover, relative helicity can be uniquely split into two gauge-independent components; namely, the current-carrying helicity and the volume-threading helicity <cit.>.
The ratio of the current-carrying helicity to the total relative helicity has been shown to have strong relations with solar eruptivity <cit.>, and is thus often referred to as the (helicity) eruptivity index. This quantity has been applied successfully in many simulated active regions (ARs) <cit.>, and in observed ARs as well <cit.>. The typical behaviour of the eruptivity index is that it increases until a case-dependent threshold is reached right before eruptive events, and then drops quickly.
Another helicity-related quantity that is ideal to study in MHD simulations is relative field line helicity (RFLH). This quantity can be considered as the density of relative helicity <cit.>, as it can highlight the locations where helicity is most important. It has been studied recently around the time of a strong flare from a single AR; namely, the X2.2 flare of AR 11158 <cit.>. It was shown there that the morphology of RFLH indicates a flare-related decrease in relative helicity that is coincident with the flare ribbons. Additionally, the relative helicity contained in the ribbons showed the same sharp absolute decrease as the volume helicity during the flare.
In a continuation of this work with a wider AR sample, <cit.> has shown that the relative helicity contained in a narrow region around the polarity inversion line (PIL) of the magnetic field is linked to solar flaring activity. This conclusion was based on the sharpest decreases experienced by the PIL helicity during solar flares compared to those by the other helicities that were examined. A first motivation for the current work is therefore to see whether these results hold in a simulated AR as well. It is also interesting to examine the behaviour of the current-carrying component of RFLH, something not done before, and the controlled environment of an MHD simulation is ideal for this purpose.
The main aim of this work is to examine whether the recent results for the eruptivity-indicating PIL helicity are relevant for a flux emergence MHD simulation of solar jet production. In addition to that, we also examine the overall evolution of various energy- and helicity-related quantities during the simulation. In Sect. <ref>, we describe the main characteristics of the MHD simulation that we use. In Sect. <ref>, we define all quantities of interest and the methodology for computing them from the simulation. In Sect. <ref>, we present the obtained results, and finally in Sect. <ref> we summarise and discuss the results of the paper.
§ THE MAGNETOHYDRODYNAMICS SIMULATION
The MHD simulation that we use is a typical flux-emergence jet experiment in which a highly twisted flux tube emerges into a stratified atmosphere that resembles the solar atmosphere <cit.>. The flux tube is along the y axis, is inserted at a depth of z=-2.3 Mm, and has a magnetic field strength of 7900 G at its axis. Another ingredient of the simulation is that the partial ionisation of the plasma is taken into account, although in a single-fluid description <cit.>. There is additionally an oblique ambient magnetic field of strength 10 G that makes an angle of 11^o with respect to the vertical direction (along the z axis). Apart from the ambient field, the set-up is identical to the PI case of <cit.>, where more information can be found about the simulation.
The initial set-up of the simulation is depicted in Fig. <ref>. The computational box is uniform with 420 pixels in each direction that correspond to 64.8 Mm in physical dimensions. The convection zone occupies the lower 7.2 Mm below the photosphere and so the volume of interest is 64.8 Mm×64.8 Mm×57.6 Mm. The simulation consists of 100 snapshots in total, of which we have used 73: snapshots 15-75 with a cadence of 1, and snapshots 77-99 with a cadence of 2. Snapshots before 15 have not been examined, since the flux tube is still below the photosphere. The time unit for each snapshot is equal to 86.9 s, and so the total duration of the simulation corresponds to ∼145 min of real time. The full details of the simulation are discussed in an upcoming work (Chouliaras et al. 2024, in preparation), in which the comparison with the fully ionised case is also made.
The solenoidality of the produced MHD fields is of course excellent, as was expected from the divergence-preserving Lare3D code that was used to produce the simulation <cit.>. As an example, the divergence energy ratio <cit.> takes average values of 0.01, well below the threshold of ∼ 0.05 that is needed for a reliable helicity estimation <cit.>.
The evolution of the simulation is quite dynamic, with a number of transient phenomena taking place. More specifically, at snapshots 30-34, which correspond to the times 43 min≲ t≲ 49 min, a reconnection jet is produced when the emerging flux tube meets the ambient magnetic field. Later, three more jets are produced at times t∼ 70 min, t∼ 83 min, and t∼ 106 min, which can all be characterised as blowout jets. We show in Fig. <ref> two of the jets, as these are determined from the temperature images in the y=0 plane that is perpendicular to the flux tube axis: the reconnection jet, and the last blowout jet, which is the strongest. It should be noted that the exact determination of the jet production times is not possible, and so in the following we indicate a wider time interval around them. Apart from these four events, many other smaller events take place during the simulation.
§ METHODOLOGY
In this section, we describe the various physical quantities that we are going to study and their computation method.
§.§ Energies
The energy of a magnetic field, 𝐁, that occupies the volume, V, is given by
E=1/8π∫_V 𝐁^2 dV.
Another magnetic field of interest is the potential, or current-free field, 𝐁_p, whose normal components have the same distribution as those of 𝐁 on the boundary of the volume, ∂ V. This condition can be written as
. n̂·𝐁|_∂ V=. n̂·𝐁_p|_∂ V,
with n̂ denoting the outward-pointing unit normal on ∂ V.
The energy of the potential field, E_p, was computed from Eq. (<ref>) with 𝐁_p in place of 𝐁. The difference between the two energies defines the free energy, E_j=E-E_p. Alternatively, free energy can be defined from Eq. (<ref>), with the current-carrying magnetic field (𝐁-𝐁_p) in place of 𝐁. The two definitions are equivalent when 𝐁 is numerically solenoidal enough.
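For completeness, the equivalence can be made explicit by writing 𝐁_p=-∇φ for the scalar potential φ of the current-free field (φ is introduced here only for this argument). Expanding the square gives
1/8π∫_V (𝐁-𝐁_p)^2 dV = E-E_p - 1/4π∫_V 𝐁_p· (𝐁-𝐁_p) dV,
and an integration by parts yields
∫_V 𝐁_p· (𝐁-𝐁_p) dV = -∮_∂ V φ n̂· (𝐁-𝐁_p) dS + ∫_V φ ∇·𝐁 dV.
The surface term vanishes by the boundary condition of Eq. (<ref>), and the volume term vanishes for a perfectly solenoidal 𝐁, so the cross term measures the numerical non-solenoidality.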
§.§ Relative helicities
Relative magnetic helicity <cit.> is defined as
H_r=∫_V (𝐀+𝐀_p)· (𝐁-𝐁_p) dV,
where 𝐀, 𝐀_p are the vector potentials that correspond to the two magnetic fields. Relative helicity is independent of the gauges of the vector potentials as long as the condition of Eq. (<ref>) holds.
Relative helicity can be split into two gauge-independent components <cit.>: the current-carrying helicity,
H_j=∫_V (𝐀-𝐀_p)· (𝐁-𝐁_p) dV,
and the volume-threading helicity,
H_pj=2∫_V 𝐀_p· (𝐁-𝐁_p) dV.
It is easy to check that summing the two components recovers the relative helicity.
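Indeed, since (𝐀-𝐀_p)· (𝐁-𝐁_p) + 2𝐀_p· (𝐁-𝐁_p) = (𝐀+𝐀_p)· (𝐁-𝐁_p), the integrands of Eqs. (<ref>) and (<ref>) sum, point by point, to that of Eq. (<ref>), giving H_r=H_j+H_pj.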
During the evolution of a magnetic system, the two terms fluctuate but always sum up to the value of the relative helicity. The rate of helicity that is exchanged between the two components is given by
d H_j/ d t=-2 ∫_V (𝐯×𝐁)·𝐁_p dV,
where 𝐯 is the plasma velocity <cit.>. It should be noted that this quantity is gauge-independent and that its sign corresponds to the transfer of helicity from H_j to H_pj; the inverse transfer has the opposite sign.
§.§ Relative field line helicities
Relative field line helicity, h_r, is a proxy for the density of relative helicity <cit.>. It is a generalisation of the plain field line helicity (FLH) <cit.> and, like FLH, it depends on the gauges of the vector potentials. Relative helicity can be written with the help of RFLH as a surface, and not as a volume integral; namely,
H_r,fl=∮_∂ V h_r dΦ,
where dΦ=| n̂·𝐁| dS is the elementary magnetic flux on the boundary and dS the respective area element. By defining the RFLH operator,
h(𝐀)=∫_α_+^α_- 𝐀· dl - 1/2( ∫_α_+^α_p- 𝐀· dl_p+∫_α_p+^α_- 𝐀· dl_p),
we can simply write RFLH as
h_r=h(𝐀+𝐀_p).
In Eq. (<ref>), α_+, α_-, dl denote the positive footpoint, the negative footpoint, and the elementary length along the field lines of 𝐁, respectively, while α_p+, α_p-, dl_p denote those of 𝐁_p.
The current-carrying component of relative helicity can be expressed through a current-carrying RFLH, h_j, as
H_j,fl=∮_∂ V h_j dΦ,
similarly to Eq. (<ref>). The corresponding RFLH is
h_j=h(𝐀-𝐀_p).
From the difference between the two RFLHs, we can also define the RFLH of the volume-threading helicity, as
h_pj=h_r-h_j=h(2𝐀_p).
§.§ Numerical computation of the various quantities
The computation of the potential field satisfying the condition of Eq. (<ref>) was performed with the numerical solution of a 3D Laplace equation <cit.>. In the computation of the vector potentials from the respective magnetic fields, it is assumed that they satisfy the <cit.> gauge, as this was specialised in <cit.>. More specifically, 𝐀 is taken in the simple DV gauge and 𝐀_p in the Coulomb DV gauge (DVS and DVC in the notation of <cit.>). The field line integrations required for computing the RFLHs were performed in the manner described in <cit.>. Similar to that work, the footpoints in the RFLH computations are restricted on the `photospheric' boundary, the plane z=0. We finally note that all integrals were computed using the trapezoidal rule.
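As a schematic illustration of how these volume integrals are evaluated on the gridded data, the following Python sketch computes the energies and the two helicity components with nested trapezoidal integrations, assuming the magnetic fields and vector potentials are already available as arrays of shape (nx, ny, nz, 3) on a uniform grid. The function names and array layout are assumptions, and the Laplace solver and DeVore-gauge vector potentials themselves are not reproduced here.

import numpy as np

def trapz(f, dx, axis):
    # Trapezoidal rule along one axis of a gridded scalar field.
    lo = [slice(None)] * f.ndim
    hi = [slice(None)] * f.ndim
    lo[axis] = slice(0, -1)
    hi[axis] = slice(1, None)
    return 0.5 * dx * np.sum(f[tuple(lo)] + f[tuple(hi)], axis=axis)

def volume_integral(f, dx, dy, dz):
    # Nested trapezoidal integration over the z, y and x axes.
    return trapz(trapz(trapz(f, dz, axis=2), dy, axis=1), dx, axis=0)

def dot(u, v):
    return np.sum(u * v, axis=-1)

def energies(B, B_p, dx, dy, dz):
    E = volume_integral(dot(B, B), dx, dy, dz) / (8.0 * np.pi)
    E_p = volume_integral(dot(B_p, B_p), dx, dy, dz) / (8.0 * np.pi)
    return E, E_p, E - E_p                               # total, potential, free

def helicities(A, A_p, B, B_p, dx, dy, dz):
    H_j = volume_integral(dot(A - A_p, B - B_p), dx, dy, dz)     # current-carrying
    H_pj = 2.0 * volume_integral(dot(A_p, B - B_p), dx, dy, dz)  # volume-threading
    return H_j + H_pj, H_j, H_pj                         # relative helicity and components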
§ RESULTS
In this section, we examine the evolution of the various quantities discussed in Sect. <ref> in three cases: when the whole simulation volume is considered, when different subvolumes of the total volume are considered, and when a 2D region on the photosphere is considered. We start with the first case.
§.§ Consideration of the whole volume
The evolution of the various energies is shown in Fig. <ref>. The energy of the magnetic field (black curve) is in general an increasing function of time, which exhibits small decreases during the last two jets. The potential energy (blue curve) shows a mild increase that after t∼ 80,min becomes much slower. The free energy (red curve) has two increasing periods, one until the third jet, and another one after the fourth jet, while in between it fluctuates a lot due to the production of the blowout jets. The decreases experienced by the free energy during the last two jets are more pronounced compared to those of the total energy. In all plots, the evolution after t=30 min, or snapshot number 20, is shown, since before that all of the quantities are practically constant. Something worth noticing is that the total and potential energies start from non-zero values due to the presence of the ambient magnetic field. This is not the case for the free energy, which initially is zero.
The reason for not getting significant jet-related changes during the first two jets in the energy patterns can be deduced from the evolution of the unsigned magnetic flux, which is shown in Fig. <ref>. We notice that during the first two jets flux is still emerging at a high rate and this leads to a similarly steep increase in the energy curves and the suppression of any finer details. The flattening of the flux evolution during the last two jets allows the jet-related changes to stand out more clearly.
The evolution of the various helicities is shown in Fig. <ref>. The relative helicity (black curve) exhibits an overall increasing pattern that is interrupted by large decreases during the last two events. The relative helicity also experiences small changes during the first two jets, when the rate of its increase weakens. The volume-threading helicity (blue curve) follows the same pattern as relative helicity, with the difference that it flattens after the large blowout jet and does not continue to increase as relative helicity does. The current-carrying helicity (red curve) does not show important changes during the first two events but decreases during the last two jets like the other helicities, although to a lesser degree. This seems counter-intuitive as the non-potential H_j should show more evident changes during jet eruptions compared to H_pj. It can be explained however by the presence of the ambient magnetic field, which amplifies the volume-threading component, the mutual helicity of the ambient and emerging fields, and its variations. The behaviour of the current-carrying helicity is also different between the third and fourth jets, when it increases much more mildly than the other helicities. As also happens for energies, the relative and volume-threading helicities start from non-zero values owing to the presence of the ambient magnetic field. The current-carrying helicity is initially zero, since in the beginning of the simulation all the current-carrying magnetic field is located in the not-yet-emerged flux tube. Finally, the peaks of the current-carrying helicity in the last two events, which are the largest, occur a little earlier than those of H_pj and H_r.
The dynamics of the two relative helicity components can be seen in more detail in Fig. <ref>, in which the H_j to H_pj transfer term, which was introduced in the analysis of <cit.>, is depicted. We note that the three major peaks of this transfer term coincide with the times of the three largest events, the blowout jets. During these time intervals, the transfer term increases and then relaxes abruptly, which means that there is an increased transformation of H_j to H_pj then. This can also be explained by Fig. <ref>, in which H_j peaks earlier than H_pj, and so, in between the peaks of the two helicities, the former decreases while the latter increases.
The incoherent behaviour of H_j and H_pj during the simulation leads to an irregular behaviour of the eruptivity index; that is, of the ratio |H_j|/|H_r|, as is shown in Fig. <ref>. We note that the general trend of the eruptivity index is increasing more or less until t∼85 min. It then drops until the large blowout jet and then increases again but more slowly. The eruptivity index experiences changes during all jets, which are mostly decreases. During the reconnection jet, it decreases a bit, while also exhibiting some jiggling; during the second jet, it shows a small break-like decrease; during the third jet, it drops more intensely but then rises even more; and in the large blowout jet, it shows a small decrease followed by an increase. The eruptivity index also exhibits various changes outside of the intervals of jet activity, most notably with the peak around t∼85 min.
§.§ Consideration of three subvolumes
We next consider three separate subvolumes of the whole volume and examine the evolution of all of the quantities in them. We first restricted the horizontal span of the subvolumes to -20 Mm<x,y<20 Mm, as the magnetic field is very weak outside this region. We also truncated the height of the total volume to z<30 Mm. The remaining volume was split into three unequal subvolumes with height ranges 1 Mm<z<6 Mm, 6 Mm<z<11 Mm, and 11 Mm<z<30 Mm. The choice of 6 Mm and 11 Mm for the subvolumes' height limits follows from the temperature morphology of Fig. <ref> and should be considered approximate. The lower subvolume was chosen to depict what happens below the reconnection point of the initial jet, the middle subvolume for where most of the action takes place, and the upper one for the lower-intensity coronal part. Moreover, the first subvolume starts at z=1 Mm so as to avoid the turbulent layer at, and slightly above, the photosphere. The two lower subvolumes have almost the same size, while the upper one is ∼ 4 times their size. The sum of these three volumes is ∼ 20% of the coronal volume studied in Sect. <ref>. The subvolumes, and their relation with the whole volume, are shown in Fig. <ref>.
In each subvolume, we computed the energies and helicities with the methodology described in Sect. <ref>. This requires the computation of a different potential field in each subvolume, which satisfies the appropriate Eq. (<ref>) for the specific volume. We should stress here that the various helicities are not additive <cit.>; that is, the sum of the helicities of the subvolumes is not equal to the helicity of the whole volume.
The energy and helicity evolution curves for the lower subvolume, at heights 1 Mm<z<6 Mm, are shown on the top panel of Fig. <ref>. The energy patterns in this first subvolume (top left panel in Fig. <ref>) are similar to those of Fig. <ref>, with the difference that they are a little smoother because of neglecting the lower 1 Mm of the photosphere. Additionally, they are up to an order of magnitude smaller, as was expected from the much smaller volume they come from. The helicity patterns (top middle panel in Fig. <ref>) have an overall similar behaviour to those of Fig. <ref>, with the exception that their peaks occur a little earlier, and also that they start decreasing at the end of the simulation. The helicity ratio pattern (top right panel in Fig. <ref>) has four local maxima, with only the first coincident with the reconnection jet and the rest occurring in between jet activity.
In the middle subvolume, at heights of 6 Mm<z<11 Mm, the energies (middle left panel in Fig. <ref>) are even smaller and have local maxima in all but the reconnection jet. The free energy shows an additional peak between the third and fourth jets, around t∼ 95 min. The helicity patterns (middle panel in Fig. <ref>) are smaller, different than in Fig. <ref>, and show peaks coincident with the four jets. The current-carrying helicity shows the same additional peak as the free energy. The helicity ratio pattern (middle right panel in Fig. <ref>) is totally different than in Fig. <ref>; it exhibits many peaks but only one of them is during the production of a jet, the one during the second blowout jet.
In the upper subvolume, at heights of 11 Mm<z<30 Mm, the total and potential energies (bottom left panel in Fig. <ref>) are mostly decreasing functions of time, with local maxima at the two last jets. The free energy shows a similar pattern as in the other volumes, but it has much smaller values, as it is two orders of magnitude smaller than the total energy. Moreover, it has two sharp peaks during the last two events. The helicity patterns (bottom middle panel in Fig. <ref>) are quite fuzzy and show mixed signs, fluctuating between positive and negative values. During the last two jets, however, H_j and H_pj show jet-related changes. The helicity ratio pattern (bottom right panel in Fig. <ref>) is spiky, with various peaks that do not correspond to the jet activity.
When we look at the panels of Fig. <ref> vertically, we can observe a few more things about the height dependence of the various quantities. Focusing first on the energies (left column of Fig. <ref>), we notice that the total energy is a factor of three lower (on average) in the middle volume compared to the other two. The free energy on the other hand decreases by an order of magnitude from the lower to the middle volume, and by a factor of five from the middle to the upper volume. Put simply, the magnetic field becomes more potential as we go higher. The curves for relative and volume-threading helicities decrease by factors of ∼ 5 as we move to the next higher volume, while the current-carrying helicity decreases by factors of ∼ 15. This reaffirms the link between the potentiality of the field and the height, and additionally shows that the field gets less helical and twisted upwards. For the helicity ratio, we can only note that its average values decrease with height, and that it exhibits more spikes higher up due to the more frequent close-to-zero values of relative helicity. A final observation is about the relative timings of the peaks of the energy and helicity curves. If we focus on the more pronounced peaks during the last two jets in either of the curves of the left and middle columns of Fig. <ref>, we see a slight shift to the right as we go to higher volumes. This could be a signature of the time needed for the disturbances to propagate from one volume to the other.
§.§ Evolution of polarity inversion line helicities
Apart from the volumes examined in the previous sections, a 2D region of interest constitutes the PIL of the magnetic field on the photosphere. In a recent work, <cit.> found that the part of the relative helicity that is contained in the region around the PIL can be used to indicate solar eruptive behaviour. We now examine whether this result is relevant for a simulated solar AR as well.
As in that work, we used the method of <cit.> to determine the region of interest around the PIL. More specifically, we considered a dilation window of 3×3 pixels, and a threshold for the magnetic field equal to 10% of the maximum value it attains above its mean value. We made this choice since the mean value is non-zero in the specific experiment because of the ambient field. Finally, the width of the Gaussian that was convolved with the PIL was taken to equal 9 pixels. The resulting Gaussian mask on the photosphere, W, was used to define the helicities,
H_x,PIL=∫_z=0 h_x W dΦ,
where `x' can be any of the characters `r', `j', or `pj'. These are the parts of the respective helicities contained in the PIL.
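A schematic Python version of this masking and integration step is given below. It follows the parameters quoted above (3×3 dilation window, threshold at 10% of the maximum field value above its mean, Gaussian of width 9 pixels), while the exact reading of the threshold definition, the use of the width as a Gaussian standard deviation, the normalisation of the mask and the function names are assumptions.

import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def pil_mask(bz, win=3, thresh_frac=0.10, gauss_width=9):
    # Threshold taken as 10% of the maximum of |Bz| above its mean (assumed reading).
    thresh = thresh_frac * (np.abs(bz).max() - np.abs(bz).mean())
    pos = binary_dilation(bz > thresh, structure=np.ones((win, win)))
    neg = binary_dilation(bz < -thresh, structure=np.ones((win, win)))
    pil = (pos & neg).astype(float)               # pixels where the two polarities meet
    W = gaussian_filter(pil, sigma=gauss_width)   # smooth weighting around the PIL
    return W / W.max() if W.max() > 0 else W      # normalisation is an assumption

def pil_helicity(h_x, bz, W, dA):
    # Discrete version of H_x,PIL = int_{z=0} h_x W dPhi, with dPhi = |Bz| dS.
    return np.sum(h_x * W * np.abs(bz)) * dA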
In the top panel of Fig. <ref>, we show the evolution of three different relative helicities: H_r from Eq. (<ref>), H_r,fl from Eq. (<ref>), and H_r,PIL from Eq. (<ref>). We note that the relative helicity derived from the field line helicity (red curve) has a similar evolution pattern as H_r (black curve). Moreover, their values have absolute relative differences of less than 10% most of the time. This shows that the RFLH computation method is working as expected and as has also been seen in other cases previously <cit.>. The PIL helicity (blue curve) also follows H_r until the large blowout jet, despite fluctuating much more and being smaller by a factor of ∼25. The fluctuations, which are found in all PIL-related quantities, are caused by the calculation method of the PIL helicities and especially by the number of points comprising the PILs, which vary between snapshots. The jiggling of the PIL relative helicity does not allow us to infer its behaviour during the first two jets, but the peaks of H_r,PIL near the last two jets coincide with those of H_r. In fact, they are even more pronounced compared to those of the other relative helicities. After the large blowout jet, H_r,PIL decreases, while the other two helicities increase. This different behaviour of H_r,PIL indicates that the increase in H_r after the last jet is due to the coronal field.
In the bottom panel of Fig. <ref>, we show the evolution of the three different current-carrying helicities, H_j, from Eq. (<ref>), H_j,fl from Eq. (<ref>), and H_j,PIL from Eq. (<ref>). We note that the current-carrying helicity derived from the respective field line helicity (red curve) has similar values to H_j (black curve), exhibiting relative absolute differences up to 15%. This slightly higher value compared to the case of the relative helicities (top panel of Fig. <ref>) is mostly due to the difference in the evolution patterns of H_j,fl and H_j around t∼ 100 min, right before the last jet. Similarly to the PIL relative helicity, the PIL current-carrying helicity, H_j,PIL (blue curve), is also jiggling and identifies the two major peaks during the last two jets more clearly than the respective volume helicity, H_j. Likewise, H_j,PIL decreases after the large blowout jet, contrary to the other current-carrying helicities. We note that overall the two panels of Fig. <ref> exhibit many similarities between the relations of H_j and H_r with their respective PIL helicities.
§ DISCUSSION
This work examines various energy- and helicity-related quantities in a flux emergence MHD simulation. The presence of the oblique ambient magnetic field in the simulation leads to the production of a number of jets in the coronal volume and to an overall high level of activity. We have identified four jet events as the main focus of our study.
The evolution of the energies and the helicities show jet-related changes in all the identified jets, although these are more pronounced in the last two jets, which are the strongest. The relative helicity and its two components exhibit sharper changes during these jets compared to the energies, while the most jet-indicating among the three energies is the free energy. These remarks strengthen the idea that helicity is a better marker for eruptivity, followed by free energy <cit.>. We stress, however, that both energy and helicity are needed in order to understand the dynamics of a magnetised system. Additionally, an increased transformation of H_j to H_pj was observed during the jets, owing to the slightly earlier peaking of H_j, in accordance with the results of <cit.>.
A noteworthy result of this work is that the eruptivity index shows atypical behaviour in the simulation. Apart from the reconnection jet, when the eruptivity index has a local peak and decreases afterwards temporarily, it shows mixed behaviour during all other jets. A possible reason could be the presence of the ambient field and the resulting increased coronal activity of the specific simulation. It is known, however, that the eruptivity index does not conform to its standard picture in all the observed jets <cit.>. Moreover, the eruptivity index shows an intense, broad peak outside the intervals of jet production. It seems that something else happened at that time that did not manifest itself in the temperature images of Fig. <ref>, or in any other quantity that we checked.
Another factor that could play an important role in the behaviour of the eruptivity index is the choice of volume in which it is computed, as <cit.> have shown. The consideration of the individual subvolumes in Sect. <ref> allowed us to determine how the various profiles change with height. For the eruptivity index, however, this does not show any improvement compared to the case of the whole volume. In contrast, we are able to draw various conclusions for the energies and helicities. The removal of the lower 1 Mm above the photosphere in the first subvolume leads to smoother profiles. The middle volume shows the best agreement with jet activity, in both the energies and the helicities. The energy and helicity variation with height shows that the magnetic field gets more potential and less helical as it goes higher. Our analysis shows that it is important to check the results in different volumes, as extra information can be obtained.
An important aspect that we did not examine in this work was the effect of partial ionisation on our results. We know from <cit.>, for example, that partial ionisation affects the emergence of magnetic flux at the photosphere in a number of ways, and so it could have an impact on our results to some extent. We leave this subject for future work, in which a comparison with the fully ionised case will also be made.
The main interest of this work is to see the behaviour of the recently introduced PIL relative helicity <cit.> in an MHD simulation. The PIL helicity reaffirms the results of that work in a totally different set-up. It seems, therefore, that the relative helicity contained around the PIL is an important factor for studies of solar eruptivity. The PIL helicity is added to the list of other PIL-derived quantities, such as the R-parameter of <cit.> or the mean twist around the PIL <cit.>, that have been shown to relate to solar activity.
Furthermore, the study, for the first time, of the current-carrying helicity contained around the PIL reveals that it not only agrees qualitatively with the respective volume quantity, but it is even more responsive to jet activity, similarly to the PIL relative helicity. The current-carrying PIL helicity could therefore be equally good at determining upcoming solar eruptivity and so it is worth examining further in the future.
The authors thank the referee for carefully reading the paper and providing constructive comments. This research has received funding from the ERC Whole Sun Synergy grant N^o 810218. GC acknowledges support from the Royal Society grant RGF/EA/180232. The work was supported by the High Performance Computing facilities of the University of St Andrews `Kennedy'.
|
http://arxiv.org/abs/2409.02243v1 | 20240903191636 | A Novel Audio-Visual Information Fusion System for Mental Disorders Detection | [
"Yichun Li",
"Shuanglin Li",
"Syed Mohsen Naqvi"
] | cs.CV | [
"cs.CV"
] |
A Novel Audio-Visual Information Fusion System for Mental Disorders Detection
Yichun Li, Shuanglin Li, Syed Mohsen Naqvi
Intelligent Sensing and Communications Research Group, Newcastle University, UK
September 9, 2024
===============================================================================================================================
§ ABSTRACT
Mental disorders are among the foremost contributors to the global healthcare challenge.
Research indicates that timely diagnosis and intervention are vital in treating various mental disorders. However, the early somatization symptoms of certain mental disorders may not be immediately evident, often resulting in oversight and misdiagnosis. Additionally, traditional diagnostic methods incur high time and financial costs.
Deep learning methods based on fMRI and EEG have improved the efficiency of the mental disorder detection process. However, the costs of the equipment and of trained staff are generally high. Moreover, most systems are trained only for a specific mental disorder and are not general-purpose.
Recently, physiological studies have shown that several mental disorders (e.g., depression and ADHD) present speech- and facial-expression-related symptoms.
In this paper, we focus on the emotional expression features of mental disorders and introduce a multimodal mental disorder diagnosis system based on audio-visual information input. Our proposed system is based on spatial-temporal attention networks and innovatively uses a less computationally intensive pre-trained audio recognition network to fine-tune the video recognition module for better results. We also apply the unified system to multiple mental disorders (ADHD and depression) for the first time.
The proposed system achieves over 80% accuracy on the real multimodal ADHD dataset and achieves state-of-the-art results on the depression dataset AVEC 2014.
mental disorder, machine learning, depression, ADHD, multimodal
§ INTRODUCTION
Mental health encompasses an individual's psychological, emotional, and social well-being, which includes the ability to cope with stress, manage emotions, maintain relationships, and make decisions <cit.>. It plays a crucial role in overall health and functioning, influencing thoughts, feelings, and actions in daily life.
Fig. 1 shows the age (years) and gender distribution of patients with a diagnosis of severe mental illness (SMI) compared with all patients recorded by the United Kingdom National Health Service (NHS), UK. The results suggest that approximately 5-15 % of recorded patient visits within the 20-60 years age group are impacted by severe mental illness, constituting a substantial overall figure.
Various factors, including genetics, environment, life experiences, and biological factors, can influence mental health. As a result, mental disorders such as depression, Attention Deficit Hyperactivity Disorder (ADHD), and anxiety are common, particularly among children and adolescents <cit.>. According to <cit.><cit.>, ADHD affects around 5-7 % of children and adolescents worldwide. Moreover, the global prevalence of depression was estimated at 28 % in 2021. These disorders could have serious consequences for individuals, including learning difficulties, impaired social interactions, and emotional issues <cit.>.
The traditional diagnosis of mental disorders typically relies on clinicians' observation, questioning, and consultation, guided by the Diagnostic and Statistical Manual of Mental Disorders (DSM) <cit.>. However, this diagnostic process is time-consuming, heavily dependent on the clinician's experience and judgment, and often involves long waiting times for clinical appointments. Timely intervention significantly impacts the treatment of mental disorders and improves the quality of life for patients and their families <cit.>. Due to the time-consuming process and the shortage of experienced clinical consultants, it has been reported that the waiting period for diagnoses and treatment of certain mental disorders such as ADHD, depression, and Alzheimer's can extend to several years <cit.>.
Recently, there has been a growing interest in machine learning methods for mental disorders detection and diagnosis. The majority of research in this area relies on Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG) <cit.>. These methods efficiently detect and extract neurobiological symptoms and features, leveraging objective brain changes to diagnose subjects.
The conventional MRI and EEG-based methods have two limitations. Firstly, the expensive equipment and high operational costs limit the practical use of MRI and EEG in real-world diagnosis. MRI and EEG scanners cost £150,000-1,000,000 and £1,000-25,000, respectively, and regular maintenance is also costly <cit.>. Secondly, recent detection and diagnosis techniques are specific to one mental disorder. However, many different mental disorders share the same or similar behavioral symptoms, such as evasive facial expressions and uncontrollable body shaking, which are often overlooked by these neurobiological diagnostic techniques.
Therefore, there is a growing demand for cost-effective and versatile psychiatric screening methods. In 1980, Russell introduced the concept of emotional states being represented as continuous numerical vectors in a two-dimensional space known as the Valence–Arousal (VA) space <cit.>. Valence denotes positive and negative emotional states, while arousal indicates the intensity of emotions ranging from sleepiness to high excitement.
As shown in Fig. 2, depression typically occupies the third quadrant of the VA space, while ADHD is predominantly situated in the first and third quadrants. The distinct expression of various mental disorders within emotional space enables possible diagnosis and screening using a unified system. Many mental disorder symptoms manifest as observable emotional swings, which are reflected in both the patient's speech and facial expressions.
Therefore, we introduce a novel diagnostic system for mental disorders based on emotion recognition and the classification of audio and facial video input.
The contributions of this paper are summarized as follows:
∙ A generalized diagnostic system for mental disorders is proposed, leveraging emotion recognition from raw RGB facial video and speech audio data. The performance of the system is also validated by a medically approved NHS body.
∙ An efficient multimodal detection method is also proposed and applied to mental disorder diagnosis for the first time. We innovatively use a simple pre-trained audio model to fine-tune the video-based model and improve accuracy.
∙ We demonstrate the effectiveness of our proposed method on both ADHD and depression datasets, comparing against state-of-the-art benchmarks using the latest performance metrics.
The rest of the paper is organized as follows. Work related to mental disorders and diagnosis techniques is introduced in Section II. The proposed method is then described in Section III. The experimental settings and results are presented in Section IV. Finally, our work is concluded in Section V. It should be noted that this paper aims to explore the application of fusion systems based on audio-visual features in mental disorder assessment and detection. Further feature fusion experiments and more comprehensive detection results will be addressed in a future journal version of this work.
§ RELATED WORKS
Physiological and psychological studies have verified that lesions in certain brain areas can lead to behavioral disorders. Therefore, for some psychological disorders, diagnosis and detection through emotional expression and behavioral characteristics have been proven feasible <cit.>.
Depression is a common psychiatric disorder. The DSM-V characterizes depression as enduring sadness and diminished interest in previously enjoyed activities. It also highlights that individuals might encounter additional physical symptoms, including chronic pain or digestive problems. Li et al. <cit.> calculated two attributes of brain regions based on a multi-layer network of dynamic functional connections and fused morphological and anatomical network features to diagnose depression, resulting in a classification accuracy of 93.6%. In recent research, machine learning methods based on emotion recognition have also been used in depression assessment. Niu et al. <cit.> proposed a representation block that finds a set of basis vectors to construct the optimal transformation space and generate the transformation result.
Brain MRI is the most widely used modality for ADHD diagnosis with machine learning. Most studies have sourced their MRI images from a single public database, namely, the Neuro Bureau ADHD-200 Reprocessed repository (ADHD-200) <cit.>. This dataset comprises structural and resting-state functional MRI images collected from 585 control individuals and 362 children and adolescents with ADHD. Numerous studies have identified structural differences between individuals with ADHD and controls. Based on this dataset, Tang et al. <cit.> achieve 99.6% accuracy by employing a modified auto-encoder network to extract discriminative features and enlarge the variability scores for the binary comparison.
Audio and video, as the most readily available multimodal signals, play a crucial role in various multimodal machine learning applications <cit.>. Their advantages include providing rich sensory information, enabling a better understanding of context, and facilitating natural interaction. Additionally, they greatly improve the performance and robustness of related systems by combining different features <cit.>.
Based on our investigation, the majority of mental disorder detection and diagnosis relies on fMRI and EEG tools, which incur high human and instrumental costs in practice. Multimodal diagnostic and detection methods remain limited, with most research focusing on text and wearable sensors.
Therefore, developing a universal mental disorder detection system based on low-cost audio-visual signals has great potential for application.
§ PROPOSED METHODS
This section briefly outlines our system and the datasets utilized in this work. As our main emphasis lies on the multimodal fusion system based on audio and video, details of the networks utilized in the proposed system are also presented.
§.§ Proposed Multimodal Mental Disorder Detection systems
The proposed system contains three main parts: video-based facial expression detection, an audio-based pre-trained model, and classification and regression performance measurements.
Because of the particularity of medical and clinical-related information, open-source mental disorders datasets are relatively limited at this stage. We select multimodal datasets containing real facial video and audio encompassing a broad spectrum of mental disorders, including ADHD and depression.
Fig. 3 illustrates our proposed mental disorder detection system. Details will be introduced in the following subsections.
§.§ Datasets
This paper primarily focuses on attention deficit hyperactivity disorder (ADHD) and depression. These psychiatric conditions are selected due to their prevalence as the most common mental disorders, each characterized by specific emotional symptoms. Two challenging multimodal datasets, which serve as benchmarks for ADHD and depression detection and assessment, are utilized in the experiments. All datasets used to validate the detection and assessment performance of the proposed system are approved by certified medical authorities.
In our proposed system, we utilize the interview segments from our real multimodal ADHD dataset for the binary classification of ADHD <cit.>. Each subject and control undergoes a data recording process lasting 10-20 minutes, involving 21 questions selected from the Diagnostic Interview for ADHD in Adults (DIVA) administered in English. Notably, DIVA is a standard questionnaire used by doctors for ADHD diagnosis.
The recording setup involves three GoPro cameras: a front-facing Camera 1 captures facial information, while side Cameras 2 and 3 record the left and right torsos and limbs at a resolution of 3840×2160 pixels. For our proposed mental disorder assessment and detection system, only facial information captured by Camera 1 is utilized.
Videos are segmented into 60-second clips, and each is labeled as either 0 (non-ADHD controls) or 1 (ADHD subjects). The ADHD dataset used in our proposed system comprises 188 video clips, partitioned into training, validation, and testing at a ratio of 6:1:3, respectively.
For depression detection, we employ the AVEC 2014 dataset to train and evaluate our proposed system. This dataset consists of videos accompanied by BDI-II score labels, self-evaluated by the participants in each video. The labels span from 0 to 63 and are divided into four depression levels: minimal (0–13), mild (14–19), moderate (20–28), and severe (29–63). The AVEC 2014 dataset consists of a total of 300 video recordings, which are divided into three categories: training, development, and testing sets. Each set contains two different types of video recordings: Freeform and Northwind. The Northwind task involves participants reading aloud the fable `The North Wind and the Sun' in German. The Freeform task requires participants to answer a series of questions in German. The length of each video is approximately between 10 to 60 seconds.
§.§ Networks
As shown in Fig. 3, we design and present a two-stream multimodal attention network based on a CNN and a ResNet. A further novelty lies in utilizing a pre-trained audio-based model to fine-tune the loss obtained from the video input and obtain better results.
Based on the raw audio, we choose an attention-CNN structure as the main core network <cit.>.
A simple attention module is added to the CNN structure. Compared to traditional CNN, it focuses on leveraging local feature connections while utilizing parallel computing to decrease training time. The weights in the convolution kernels are shared, and multiple convolution kernels can be used to extract multi-dimensional information <cit.>.
There are 5 convolution layers that have 3×3 kernels with 1 stride.
Different from the original CNN audio classification network, we freeze the model after fully training the network. Then, we integrate this simple pre-trained audio recognition model into the multimodal system to fine-tune the video recognition model.
The loss of the audio-based recognition model (ℓ _S) is to minimize the Mean Absolute Error (MAE) of the outputs and true label results:
ℓ_S = 1/n∑_i = 1^n | ŷ_i - y_i |
where n is the number of samples in the dataset, and ŷ_i and y_i are the predicted value and the true value of the ith sample, respectively.
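A minimal PyTorch-style sketch of this audio branch and its MAE objective is given below; the five 3×3, stride-1 convolution layers follow the description above, while the input spectrogram format, the channel width, and the exact form of the simple attention module are assumptions made only for illustration.

import torch
import torch.nn as nn

class AttentionCNNAudio(nn.Module):
    # Sketch of the attention-CNN audio model: five 3x3 convolutions with stride 1,
    # a simple attention-weighted pooling, and a scalar output head.
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(5):
            layers += [nn.Conv2d(ch, width, kernel_size=3, stride=1, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
            ch = width
        self.features = nn.Sequential(*layers)
        self.attn = nn.Conv2d(width, 1, kernel_size=1)   # simple attention map (assumed form)
        self.head = nn.Linear(width, 1)

    def forward(self, x):                                # x: (B, 1, freq, time) spectrogram
        f = self.features(x)
        a = self.attn(f)                                 # (B, 1, H, W)
        w = torch.softmax(a.flatten(2), dim=-1).view_as(a)
        f = (f * w).sum(dim=(2, 3))                      # attention-weighted pooling -> (B, width)
        return self.head(f).squeeze(-1)

def mae_loss(pred, target):                              # Eq. (1): mean absolute error l_S
    return torch.mean(torch.abs(pred - target))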
For the video-based recognition and classification network, we proposed a Cov-Attention module based on the ResNet backbone. This module aims to capture both global and local spatial-temporal information from video frames.
The network architecture is illustrated in Fig. 3.
In the first Cov1D layer, a convolution operation with a kernel size of 7×7×7 and a stride of 1×2×2 is applied to extract and downsample low-level features. Subsequently, a 3×3×3 pooling layer with a stride of 1×2×2 is employed to further process the features. The processed features are then passed through a residual module, which comprises two bottleneck structures.
Notably, the attention module replaces the middle layer of the bottleneck and generates a weighted feature incorporating attention features. Following a stack of residual modules, an adaptive pooling layer resamples the feature into a fixed shape. Finally, the last fully connected layer predicts a score, serving as the output of the proposed system.
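A compact sketch of this video branch is given below; the 7×7×7 stem with stride 1×2×2, the 3×3×3 pooling, the attention-gated bottleneck inside a residual connection, the adaptive pooling, and the fully connected head follow the description above, whereas the channel widths, the single residual stage, and the precise form of the attention gating are placeholder assumptions.

import torch
import torch.nn as nn

class CovAttentionVideo(nn.Module):
    # Sketch of the video branch: 7x7x7 stem with stride (1,2,2), 3x3x3 pooling,
    # an attention-gated bottleneck in a residual connection, adaptive pooling, FC head.
    def __init__(self, width=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(3, width, kernel_size=7, stride=(1, 2, 2), padding=3),
            nn.BatchNorm3d(width), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=(1, 2, 2), padding=1))
        self.reduce = nn.Sequential(nn.Conv3d(width, width // 2, 1), nn.ReLU(inplace=True))
        self.attn = nn.Sequential(nn.Conv3d(width // 2, width // 2, 3, padding=1), nn.Sigmoid())
        self.expand = nn.Conv3d(width // 2, width, 1)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(width, 1)

    def forward(self, clip):                    # clip: (B, 3, 64, 224, 224)
        x = self.stem(clip)
        y = self.reduce(x)
        y = y * self.attn(y)                    # attention replaces the middle bottleneck layer
        x = x + self.expand(y)                  # residual connection
        return self.head(self.pool(x).flatten(1)).squeeze(-1)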
The loss of the video-based recognition model (ℓ _V) is to minimize the Mean Absolute Error (MAE) of the video outputs and true label results:
ℓ_V = 1/n∑_j = 1^n| ŷ_j - y_j|
where n is the number of samples in the dataset, and ŷ_j and y_j are the predicted value and the true value of the jth sample, respectively.
We propose to perform loss fusion in this system, using the loss ℓ _S of an independent pre-trained audio recognition model to fine-tune the loss ℓ _V on the video side and achieve more accurate recognition results. The fused fine-tuning loss ℓ _B is defined as:
ℓ _B=α·ℓ _S+ β·ℓ _V
where α and β are mixing coefficients, set empirically to 0.6 and 0.4, respectively, via a grid search to achieve the best recognition performance.
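Eq. (3) can be sketched directly as below; only the video model's parameters would be passed to the optimizer, since the audio model is frozen after pre-training.

import torch

def fused_loss(pred_video, pred_audio, y, alpha=0.6, beta=0.4):
    loss_s = torch.mean(torch.abs(pred_audio - y))   # l_S from the frozen, pre-trained audio model
    loss_v = torch.mean(torch.abs(pred_video - y))   # l_V from the video model
    return alpha * loss_s + beta * loss_v            # l_B = alpha * l_S + beta * l_V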
Based on this basic system, we adapt two popular networks, i.e., LSTM and 3D-CNN <cit.>, to the video-based classification tasks. We also compare the proposed method with other commonly used audio-based recognition networks (LSTM) <cit.>. To ensure the fairness of the evaluation, we use the same fusion method and parameter settings as in our proposed system.
§ EXPERIMENTS
§.§ Preprocessing and Experimental Settings
To optimize the system's capacity to extract features from multimodal inputs, it is advantageous to preprocess the audio and video signals. By applying preprocessing steps such as normalization, denoising, and feature extraction, the system can better discern relevant information from both modalities, thereby improving its performance.
For optimal preservation of short-term facial expressions, we initially extract the raw video input as individual images at a 1-frame interval. Our preprocessing utilizes the Dlib toolkit to precisely extract facial landmarks from the sequence of frames, effectively eliminating background interference and aligning human faces to minimize environmental disruptions. As shown in Fig. 4, during alignment, we center the facial features precisely between the eyes and adjust the vertical distance between the eyes and mouth to occupy one-third of the image's height. Subsequently, the aligned facial images are resized to the dimension of 224 × 224 pixels for further processing.
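A sketch of this alignment step using the Dlib toolkit and OpenCV is shown below; the landmark-model file name, the 68-point eye and mouth indices, and the exact vertical placement of the eye centre are assumptions made for illustration.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def align_face(frame, out_size=224):
    rects = detector(frame, 1)
    if not rects:
        return None
    pts = np.array([(p.x, p.y) for p in predictor(frame, rects[0]).parts()])
    eye_l, eye_r = pts[36:42].mean(axis=0), pts[42:48].mean(axis=0)   # 68-point convention
    mouth = pts[48:68].mean(axis=0)
    eyes = (eye_l + eye_r) / 2.0
    angle = np.degrees(np.arctan2(eye_r[1] - eye_l[1], eye_r[0] - eye_l[0]))
    scale = (out_size / 3.0) / np.linalg.norm(mouth - eyes)   # eye-mouth distance -> 1/3 of height
    M = cv2.getRotationMatrix2D(tuple(map(float, eyes)), angle, scale)
    M[0, 2] += out_size / 2.0 - eyes[0]                       # centre the face between the eyes
    M[1, 2] += out_size / 3.0 - eyes[1]
    return cv2.warpAffine(frame, M, (out_size, out_size))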
We extract audio data from the original recordings and proceed to eliminate background noise from the extracted audio samples. This process is achieved using the noisereduce() function in Python.
Fig. 5 shows the audio spectrograms of the original recordings before and after denoising. The noise threshold is determined based on statistical analysis performed across the audio clip.
Following the denoising process, the raw audios are segmented into clips of a 2-second duration and organized within the same structure as the corresponding video dataset.
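A sketch of this audio preprocessing is given below, assuming librosa for loading; the reduce_noise call follows the keyword interface of recent noisereduce releases, and the sampling rate is an assumption.

import librosa
import noisereduce as nr
import numpy as np

def preprocess_audio(path, sr=16000, clip_sec=2.0):
    y, sr = librosa.load(path, sr=sr)
    y = nr.reduce_noise(y=y, sr=sr)          # noise threshold estimated statistically over the clip
    n = int(clip_sec * sr)
    clips = [y[i:i + n] for i in range(0, len(y) - n + 1, n)]   # non-overlapping 2-second segments
    return np.stack(clips) if clips else np.empty((0, n))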
Our proposed system is trained on the AVEC 2014 and the real multimodal ADHD dataset. We subsequently validate and test it on the development/validation and testing sets of both datasets, respectively.
For the training period for the multimodal system, we randomly select a sequence of 64 frames from a given audio-visual recording at a stochastic position to form a training clip. To augment the input data during training, we apply random horizontal flips and adjust brightness, contrast, saturation, and hue within the range of 0 to 0.1 for all frames in one video clip.
During the testing phase, a given audio-visual recording is cropped into 64-frame sub-videos paired with the corresponding audio segments, and the predictions over all groups are averaged to obtain the mean depression score and the ADHD classification probability. The number of training epochs for the audio model is 100, with the learning rate empirically set to 1× 10^-4. The number of training epochs for the multimodal network is 150, and the learning rate is empirically set to 1× 10^-3. All experiments are run on a workstation with four Nvidia GTX 1080Ti GPUs.
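The clip-level augmentation described above could be written with torchvision's functional transforms roughly as follows; sampling one set of jitter parameters and applying it to all 64 frames of a clip is our reading of the text, and the 1+δ factor convention for brightness, contrast, and saturation is an assumption.

import random
import torchvision.transforms.functional as TF

def augment_clip(frames):                       # frames: 64 PIL images from one training clip
    flip = random.random() < 0.5
    b = 1.0 + random.uniform(0.0, 0.1)          # brightness
    c = 1.0 + random.uniform(0.0, 0.1)          # contrast
    s = 1.0 + random.uniform(0.0, 0.1)          # saturation
    h = random.uniform(0.0, 0.1)                # hue shift
    out = []
    for img in frames:                          # identical parameters for every frame in the clip
        if flip:
            img = TF.hflip(img)
        img = TF.adjust_brightness(img, b)
        img = TF.adjust_contrast(img, c)
        img = TF.adjust_saturation(img, s)
        img = TF.adjust_hue(img, h)
        out.append(img)
    return out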
§.§ Results and Discussions
As mentioned in Section III, our proposed system differs from traditional machine learning approaches based on fMRI and EEG. Instead, it concentrates on creating multimodal systems for detecting and assessing mental disorders using video and audio data collected by common sensors. These multimodal systems have the advantage of correcting or compensating for errors in detection that can arise from single-modality inputs, making them especially valuable for such related medical research.
The performance of our proposed system is evaluated against LSTM and 3D-CNN networks on the real multimodal ADHD dataset, using precision, accuracy, and Area Under the Curve (AUC) as classification metrics.
For the results of the AVEC 2014 dataset for depression, we utilize Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
The experimental results are summarized in Table I and Table II.
From Table I, the proposed system shows good classification ability on the real multimodal ADHD dataset. At the same time, the results in Table II show that the proposed audio-visual attention network performs significantly better than the other fusion networks, with lower MAE and RMSE.
We also provide a comparison of our proposed method with some state-of-the-art results in Table III and Table IV. It should be emphasized that, due to medical confidentiality requirements, there is no publicly available ADHD multimodal dataset. Therefore, we evaluated the performance of state-of-the-art ADHD detection systems on various datasets containing EEG and daily-activity videos.
The proposed audio-visual fusion emphasizes extracting emotional information, specifically relevant symptom features of mental disorders, from both audio and video inputs. In this system, the Cov-Attention model captures expression and emotional cues across multiple contiguous frames, utilizing both the spatial and temporal dimensions of video data. This model is crucial for accurate diagnosis and classification in related tasks. Additionally, the attention-CNN model in the audio recognition module effectively captures frequency and dimensional features present in speech signals, enhancing performance in speech-based detection and diagnosis when introduced as the pre-trained model. Overall, the proposed system exhibits strong performance in the detection tasks of the ADHD multimodal dataset, achieving high accuracy using only cost-effective audio-visual data. Additionally, it demonstrates robust performance in the assessment of depression using the AVEC 2014 dataset.
We conduct ablation studies to assess the diagnostic performance of each module within our proposed system on the same AVEC 2014 dataset and multimodal ADHD dataset. The corresponding results are presented in Table V and Fig. 6, respectively.
Based on the results of the ablation studies, firstly, both the audio- and video-based classification networks exhibit a notable level of robustness in assessing and detecting various psychological and emotional features, underscoring their efficacy across mental disorders.
Secondly, fine-tuning the video model using simple pre-trained audio models leads to significant improvements in classification accuracy and performance across different experiments.
Thirdly, by leveraging the strengths of both network models and exploiting the features from different modalities, we have developed a comprehensive system for assessing and detecting various mental disorders. This integrated approach yields superior performance, particularly evident in depression assessment, where the MAE on the AVEC 2014 dataset is 7.23, and the AUC on the ADHD dataset is 0.77.
The aforementioned results highlight the similar related symptoms in emotional expression, including facial expressions and speech, across different mental disorders. They also indicate the feasibility of evaluating and screening multiple mental disorders through a unified multimodal system.
Moreover, while video-based depression assessment performs slightly better than only audio-based depression assessment, the opposite is observed for ADHD diagnosis. This discrepancy may be attributed to differences in symptom manifestation among various mental disorders. We note that the ADHD data are collected from interview videos, potentially amplifying the prominence of speech characteristics over facial expressions.
In our future work, we intend to delve deeper into these findings through more comprehensive experiments and introduce more multimodal data, such as EEG and fMRI, for fusion and evaluation of the related disorders.
§ CONCLUSION
This paper presented an innovative multimodal detection system for identifying and detecting various mental disorders. The proposed system demonstrated state-of-the-art assessment and classification capabilities on depression and ADHD datasets, respectively. By leveraging a simple pre-trained audio model to fine-tune the video branch, our system achieved promising results, as evidenced by comparative and ablation study experiments. Compared to conventional machine learning methods based on EEG and fMRI, our system offers cost-effectiveness and broader applicability, pointing to a promising direction in clinical practice.
For future research, we aim to broaden the scope of our proposed system to encompass a wider array of mental disorders with a larger sample size. It should be noted that this paper aims to explore the application of fusion systems based on audio-visual features in mental disorder assessment and detection. Further feature fusion experiments and more comprehensive detection results will be addressed in this work's future journal paper.
§ ACKNOWLEDGMENTS
We would like to express our gratitude to Dr. Rejesh Nair from the United Kingdom National Health Service (NHS), UK, for his professional medical advice and help and all participants and volunteers for the multimodal ADHD data recording. Especially the Cumbria, Northumberland, Tyne, and Wear (CNTW) NHS Foundation Trust, one of the largest mental health and disability Trusts in England.
|
http://arxiv.org/abs/2409.02303v1 | 20240903212848 | A Lesion-aware Edge-based Graph Neural Network for Predicting Language Ability in Patients with Post-stroke Aphasia | [
"Zijian Chen",
"Maria Varkanitsa",
"Prakash Ishwar",
"Janusz Konrad",
"Margrit Betke",
"Swathi Kiran",
"Archana Venkataraman"
] | cs.LG | [
"cs.LG",
"eess.SP",
"q-bio.NC"
] |
Lesion-aware Edge-based GNN for Post-stroke Aphasia
Z. Chen et al.
Department of Electrical and Computer Engineering, Boston University Center for Brain Recovery, Boston University Department of Computer Science, Boston University
{zijianc,mvarkan,pi,jkonrad,betke,skiran,archanav}@bu.edu
A Lesion-aware Edge-based Graph Neural Network for Predicting Language Ability in Patients with Post-stroke Aphasia
Zijian Chen1, Maria Varkanitsa2, Prakash Ishwar1, Janusz Konrad1,
Margrit Betke3, Swathi Kiran2 Archana Venkataraman1
September 9, 2024 – Version 1.0
===========================================================================================================================
§ ABSTRACT
We propose a lesion-aware graph neural network (LEGNet) to predict language ability from resting-state fMRI (rs-fMRI) connectivity in patients with post-stroke aphasia. Our model integrates three components: an edge-based learning module that encodes functional connectivity between brain regions, a lesion encoding module, and a subgraph learning module that leverages functional similarities for prediction. We use synthetic data derived from the Human Connectome Project (HCP) for hyperparameter tuning and model pretraining. We then evaluate the performance using repeated 10-fold cross-validation on an in-house neuroimaging dataset of post-stroke aphasia. Our results demonstrate that LEGNet outperforms baseline deep learning methods in predicting language ability. LEGNet also exhibits superior generalization ability when tested on a second in-house dataset that was acquired under a slightly different neuroimaging protocol. Taken together, the results of this study highlight the potential of LEGNet in effectively learning the relationships between rs-fMRI connectivity and language ability in a patient cohort with brain lesions for improved post-stroke aphasia evaluation.
§ INTRODUCTION
Stroke is one of the major causes for disability worldwide <cit.>, with approximately one-third of stroke survivors affected by speech and language impairments, known as aphasia <cit.>. Resting-state fMRI (rs-fMRI) captures steady-state patterns of co-activation in the brain and provides a unique glimpse into the altered brain network organization due to the stroke <cit.>. Exploring this relationship is crucial for understanding the mechanisms underlying aphasia and for developing effective, personalized treatment strategies. However, developing models that can simultaneously accommodate patient-specific changes in functional connectivity due to a lesion (i.e., the stroke area) and use this information to predict generalized language impairments remains an open challenge.
Prior studies have attempted to predict language ability using neuroimaging data. The earliest work <cit.> developed a stacked random forest (RF) model that performed feature selection across multiple modalities and then used these features to predict the composite Aphasia Quotient scored from the revised Western Aphasia Battery, i.e., WAB-AQ <cit.>. Another study <cit.> used support vector regression (SVR) to predict WAB-AQ by stacking features from functional MRI, structural MRI, and cerebral blood flow data. Recent work <cit.> proposed a supervised learning method for feature selection and fusion methods to integrate features from different modalities and predicted WAB-AQ using RF and SVR. An earlier study <cit.> used similar multimodal ML methods to predict treatment response, rather than baseline functionality. Finally, Wang et al. <cit.> used persistent diagrams derived from patient rs-fMRI to identify aphasia subtypes <cit.>.
While these studies represent seminal contributions, they largely treat the data as a “bag of features" and do not fully capitalize on network-level information.
We propose to address this gap with Graph neural networks (GNN), which represent the brain as a graph, where nodes correspond to regions of interest (ROIs) and edges represent functional connections between ROIs. Convolutions on the graph aggregate information from neighboring nodes or edges. They can be node-based, as seen in models like BrainGNN <cit.>, GAT <cit.>, and GIN <cit.>, or edge-based, as formulated in the BrainNetCNN <cit.> and the HGCNN <cit.> models. GNNs have shown superior performance compared to traditional machine learning techniques in predicting cognitive outcomes related to autism <cit.>, aging and intelligence <cit.>, Alzheimer's Disease <cit.>, and ADHD <cit.>. However, these applications revolve around intact brain networks, which is not the case for a large lesion caused by stroke. Previous work <cit.> took the approach of masking out the lesioned ROIs from the input data. However, this strategy ignores the possibility of informative brain signals from around the lesion boundary. Another challenge is the limited availability of rs-fMRI data from stroke patients. One approach is to reduce the number of features in the analysis <cit.>. However, feature selection may inadvertently remove key information in the data, and prior studies have not been diligent about cleanly separating data used for feature selection from that used for performance evaluation <cit.>.
In this paper, we introduce a novel lesion-aware edge-based GNN model, which we call LEGNet, that uses rs-fMRI connectivity to predict language ability in patients with post-stroke aphasia. LEGNet is designed to aggregate information from neighboring edges of the brain graph, thus aligning with both the nature of rs-fMRI connectivity and the distributed interactions that contribute to language performance. We incorporate lesion information into LEGNet by encoding the stroke size and position into the model and by using this encoding to constrain the graph convolution process. To address data scarcity, we draw from the approach of <cit.> and develop a comprehensive data augmentation strategy that inserts an “artificial lesion" into healthy neuroimaging data and simulates the corresponding impact on rs-fMRI connectivity and language ability. We demonstrate that LEGNet outperforms baseline deep learning methods on two in-house datasets of patients with post-stroke aphasia.
§ LESION-AWARE GNN WITH SIMULATED TRAINING DATA
§.§ LEGNet Model Architecture
An overview of our LEGNet model architecture is shown in Fig. <ref>. LEGNet is designed to bridge the gap between the region- or node-based characterization of a lesion and rs-fMRI connectivity, which is defined on edges. As seen, our model includes three components: an edge-based learning module, a lesion encoding module, and a subgraph learning module that connects the two viewpoints.
Formally, let N be the number of ROIs in the brain. The input to LEGNet is the patient rs-fMRI connectivity 𝐗∈ℝ^N× N, which is obtained by exponentiating the correlation matrix computed from the mean time series of non-lesioned voxels within each ROI, as introduced in <cit.>. If the entire ROI lies within the lesion, then the time series is zero. The entries of 𝐗 can be viewed as edge features in the underlying brain graph defined on the ROIs.
Edge-Based Learning: From the input 𝐗, LEGNet first performs an edge-to-edge convolution <cit.> given by the following relationship:
𝐇_ij = ϕ( ∑_n∈𝒩(i)𝐫_n 𝐗_in + ∑_n∈𝒩(j)𝐜_n 𝐗_nj),
where 𝐇_ij∈ℝ^d_0 is the feature map of edge (i,j), 𝒩(i) is the set of neighboring nodes to ROI i, including i itself, 𝐫_n ∈ℝ^d_0 and 𝐜_n ∈ℝ^d_0 are the learnable filters for each node n, and ϕ is an activation function that is applied element-wise. Intuitively, Eq. (<ref>) aggregates the connectivity information along neighboring edges that share the same end-nodes and updates the edge features accordingly.
Following this step, LEGNet maps the edge features back into the node space:
𝐡_i^(1) = ϕ( ∑_n∈𝒩(i)𝐠_n 𝐇_in + 𝐛_1 ), i = 1,2,…,N,
where 𝐡_i^(1)∈ℝ^d_1 is the feature map of node i, 𝐠_n ∈ℝ^d_1× d_0 is the learnable filter, and 𝐛_1∈ℝ^d_1 is the learnable bias term from <cit.>.
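A sketch of Eqs. (1)-(2) in PyTorch is given below, treating the neighbourhood 𝒩(i) as all N ROIs so that the sums run over whole rows and columns of the dense connectivity matrix; the activation, the initialisation, and this dense-neighbourhood assumption are ours, while d_0 and d_1 follow the values quoted later in the text.

import torch
import torch.nn as nn

class EdgeToNode(nn.Module):
    # Eqs. (1)-(2): edge-to-edge convolution on the connectivity matrix, then edge-to-node mapping.
    def __init__(self, n_roi=246, d0=4, d1=8):
        super().__init__()
        self.r = nn.Parameter(0.01 * torch.randn(n_roi, d0))        # r_n
        self.c = nn.Parameter(0.01 * torch.randn(n_roi, d0))        # c_n
        self.g = nn.Parameter(0.01 * torch.randn(n_roi, d1, d0))    # g_n
        self.b1 = nn.Parameter(torch.zeros(d1))
        self.act = nn.ReLU()

    def forward(self, X):                                 # X: (N, N) connectivity of one subject
        row = torch.einsum('in,nd->id', X, self.r)        # sum_n r_n X_in   -> (N, d0)
        col = torch.einsum('nj,nd->jd', X, self.c)        # sum_n c_n X_nj   -> (N, d0)
        H = self.act(row.unsqueeze(1) + col.unsqueeze(0)) # H_ij in R^{d0}   -> (N, N, d0)
        h1 = self.act(torch.einsum('nkd,ind->ik', self.g, H) + self.b1)  # Eq. (2) -> (N, d1)
        return h1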
Lesion Encoding: The LEGNet lesion encoding module captures the size and position of the stroke for downstream processing. This is done by computing the percentage of spared gray matter p_i in each ROI i. We use this information to construct the diagonal lesion embedding matrix 𝐋∈ℝ^N× N for each patient:
𝐋 = [ 𝐋_1 𝐋_2 ⋯ 𝐋_N ] = diag(p_1, ⋯, p_N),
If an ROI is intact, then p_i=1 to indicate no lesion; otherwise, 0≤ p_i<1.
Subgraph Learning: At a high level, the subgraph learning module divides the nodes/ROIs into k subgroups based on their lesion encoding information and their (learned) contributions to the final prediction. First, LEGNet uses the lesion encoding 𝐋 to update the node representations via:
𝐡_i^(2) = ϕ( ∑_j∈𝒩(i)𝐖_j 𝐡_j^(1)), i = 1,2,…,N,
where 𝐡_i^(2)∈ℝ^d_2 is the updated representation for node i, and, inspired by <cit.>, the filters 𝐖_j ∈ℝ^d_2× d_1 are parameterized using the lesion matrix 𝐋 as follows:
vec(𝐖_j) = Θ_2 ·ψ(Θ_1 𝐋_j) + 𝐛_2.
The learnable parameters Θ_2 ∈ℝ^d_2d_1 × k and Θ_1 ∈ℝ^k × N are shared across all regions and all subjects. The bias term is 𝐛_2, and ψ an activation function.
The assignment score for each node j is computed as ψ(Θ_1 𝐋_j) and depends on its lesion embedding. The score indicates the involvement of node j in each subgraph. In this way, ROIs with similar lesion information and functionality are grouped together and updated with similar filters. Following the subgraph learning, the updated node features 𝐡_i^(2) are fed into a fully connected layer, with dimension d_3, to predict the scalar WAB-AQ, which quantifies language ability.
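A sketch of the lesion encoding (Eq. 3) and the lesion-conditioned filters of Eqs. (4)-(5) is shown below; the neighbourhood matrix, the choice of ReLU for ψ, and the vec-to-matrix ordering of W_j are assumptions made only to illustrate how ROIs with similar lesion embeddings receive similar filters.

import torch
import torch.nn as nn

class LesionSubgraphConv(nn.Module):
    # Eqs. (3)-(5): filters W_j are generated from the lesion embedding column L_j,
    # so ROIs with similar spared-gray-matter profiles are updated with similar filters.
    def __init__(self, n_roi=246, d1=8, d2=2, k=8):
        super().__init__()
        self.theta1 = nn.Parameter(0.01 * torch.randn(k, n_roi))       # Theta_1
        self.theta2 = nn.Parameter(0.01 * torch.randn(d2 * d1, k))     # Theta_2
        self.b2 = nn.Parameter(torch.zeros(d2 * d1))
        self.d1, self.d2 = d1, d2
        self.act = nn.ReLU()

    def forward(self, h1, p, adj):            # h1: (N, d1); p: (N,) spared fraction; adj: (N, N) neighbourhood
        L = torch.diag(p)                     # Eq. (3): diagonal lesion embedding
        scores = self.act(self.theta1 @ L)    # (k, N) subgraph assignment scores psi(Theta_1 L_j)
        W = (self.theta2 @ scores).T + self.b2            # vec(W_j), Eq. (5) -> (N, d2*d1)
        W = W.view(-1, self.d2, self.d1)                  # one (d2 x d1) filter per node j
        msgs = torch.einsum('jab,jb->ja', W, h1)          # W_j h_j^(1)       -> (N, d2)
        h2 = self.act(adj @ msgs)                         # Eq. (4): sum over neighbours of i
        return h2, scores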
Training Loss: We train LEGNet using the mean squared error between the actual y_m and predicted ŷ_m language performance for each subject m, together with a ridge regularization term on the network filters:
ℓ = 1/M∑_i=1^M (ŷ_i - y_i)^2 + λ R(Θ_1, Θ_2, 𝐛_1, 𝐛_2, 𝐫, 𝐜,𝐠),
where M is the total number of subjects and R is an L^2-norm.
§.§ Synthetic Data Generation for Model Pre-Training
Given the heterogeneity of stroke, we pre-train LEGNet using a large simulated dataset. This pre-trained model is then fine-tuned using our small patient dataset. Our strategy is to insert “artificial lesions" into the neuroimaging data of healthy subjects and simulate its impact on rs-fMRI connectivity and language.
Our pipeline for generating synthetic data is shown in Fig. <ref>. We first simulate a unique structural lesion for each subject based on the following rules: (1) lesions are left-hemisphere only; (2) lesions are placed randomly but do not cross arterial territories <cit.>; (3) lesion sizes range from 5% to 20% of one arterial territory; (4) lesions are spatially continuous and simply-connected (i.e., without holes in the inside). Next, the artificial lesion is used to mask out voxels when computing ROI mean time series. We also diminish and add Gaussian noise to the connectivity represented in 𝐗 between the lesioned region and the rest of the brain, followed by clipping the values to lie within the original connectivity range. Finally, the language performance score is re-scaled proportional to the percentage spared gray matter (<1) to simulate the negative impact of the lesion on functionality.
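A schematic of these lesion-simulation rules is given below; the region-growing procedure, the noise level, and the use of the mean spared fraction to rescale the language score are illustrative assumptions, since the text specifies only the constraints (left hemisphere, one arterial territory, 5-20% of its volume, spatial continuity) and the qualitative effect on 𝐗 and the score.

import numpy as np

rng = np.random.default_rng(0)

def grow_lesion(territory_mask, frac):
    # Region-grow a spatially continuous artificial lesion inside one left-hemisphere arterial territory.
    target = int(frac * territory_mask.sum())
    voxels = np.argwhere(territory_mask)
    lesion = {tuple(voxels[rng.integers(len(voxels))])}
    while len(lesion) < target:
        v = np.array(list(lesion)[rng.integers(len(lesion))])
        v[rng.integers(3)] += rng.choice([-1, 1])
        cand = tuple(v)
        inside = all(0 <= cand[d] < territory_mask.shape[d] for d in range(3))
        if inside and territory_mask[cand]:
            lesion.add(cand)
    return lesion

def lesion_subject(X, spared, score, noise=0.05):
    # X: (N, N) healthy connectivity; spared: per-ROI spared gray-matter fraction after lesion insertion.
    lo, hi = X.min(), X.max()                           # original connectivity range
    hit = np.where(spared < 1.0)[0]                     # ROIs overlapping the artificial lesion
    X = X.copy()
    X[hit, :] *= spared[hit, None]                      # diminish lesion-to-brain connectivity
    X[:, hit] *= spared[None, hit]
    X[hit, :] += rng.normal(0.0, noise, (len(hit), X.shape[1]))   # add Gaussian noise
    X[:, hit] += rng.normal(0.0, noise, (X.shape[0], len(hit)))
    X = np.clip(X, lo, hi)                              # clip back to the original range
    y = score * spared.mean()                           # rescale language score by spared gray matter
    return X, y

The 5-20% size constraint would be drawn per subject, e.g. grow_lesion(mask, rng.uniform(0.05, 0.20)), before the per-ROI spared fractions are recomputed from the lesioned voxels.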
§.§ Implementation Details
Simulated-Lesion HCP (HCP-SL): We use rs-fMRI data from 700 randomly selected subjects in the Human Connectome Project (HCP) S1200 database <cit.> as the foundation for generating synthetic data. Following the standard HCP minimal preprocessing pipeline <cit.>, we parcellate the brain into 246 ROIs using the Brainnetome atlas <cit.>. The subject language score is accuracy in answering simple math and story-related questions during an fMRI language task. Artificial lesions are inserted and modify the data as described in Section. <ref>.
Pre-training:
Pre-training is done via 10-fold cross validation (CV). We use a two-stage grid search to fix the model hyperparameters {λ, k, d_0, d_1, d_2, d_3}, with a coarse stage used to select a suitable power of 2 from 2^1 to 2^6, followed by a fine stage with increments of 1-2. The regularizer λ is swept across [10^-4,1]. The final values are {λ=0.005, k=8, d_0=4, d_1=8, d_2=2, d_3=8}. We use the Adam optimizer with learning rate starting from 0.01 and decaying by a factor of 0.95 every 20 steps. Early stopping is also applied based on the validation loss. Once the hyperparameters are selected, we pre-train the LEGNet architecture in order to provide a better model initialization for our stroke datasets.[All code and synthetic data will be made public upon paper acceptance.]
Application to Post-Stroke Aphasia: We use repeated 10-fold CV on our larger in-house dataset (see Section 3.1) to evaluate the performance of LEGNet in a real-world setting. The hyperparameters and optimizations are fixed at the values determined during the pre-training phase on synthetic data. The same procedure is applied to all baseline methods. We compare two scenarios: (1) training the models (LEGNet and baselines) from scratch, and (2) using the pre-trained model as the initialization for our repeated CV experiment.
Cross-Dataset Generalization: As further validation, we quantify the language ability prediction performances when the models (LEGNet and baselines) are trained on DS-1 and applied to our second in-house dataset (DS-2) which has slightly different patient characteristics than DS-1.
§.§ Baseline Models
We compare LEGNet with four baseline approaches. The first baseline is a modified BrainGNN model (BrainGNN†) <cit.>, which uses the same subgraph learning modules but does not perform edge-based learning or incorporate lesion information. The second baseline is BrainNetCNN model <cit.> with ROIs masked from the rs-fMRI connectivity input if the percentage of spared gray matter is less than 0.3 (BNC-masked). The third baseline is the BrainNetCNN with a two-channel input, with one channel being the unaltered rs-fMRI connectivity and the second channel being a lesion mask (BNC-2channel). The final baseline is support vector regression (SVR) with the lower-triangle of the rs-fMRI connectivity matrix used as the input feature vector.
The baseline models inherit the appropriate subset of hyperparameters from LEGNet. For the SVR, we used the default radial basis function kernel with C=100, γ=2/(N(N-1)), and ϵ=0.1.
§ EXPERIMENTAL RESULTS
§.§ Datasets of Post-Stroke Aphasia
In-House Dataset 1 (DS-1): This dataset consists of 52 patients with chronic post-stroke aphasia with left-hemisphere lesions and aged between 35–80 years.
Structural MRI (T1-weighted; TE=2.98ms, TR=2300ms, TI=900ms, res=1mm isotropic) and rs-fMRI (EPI; TE=20ms, TR=2/2.4s, res=1.72× 1.72× 3mm^3) were acquired on a Siemens 3T scanner. Both scans are pre-processed using the CONN toolbox <cit.>. Lesion boundaries were delineated manually by trained professionals and normalized to the MNI space. We use the Brainnetome atlas <cit.> to delineate 246 ROIs for the input rs-fMRI connectivity. Finally, all patients were evaluated using the WAB test <cit.> to obtain a measure of overall language ability (i.e., WAB-AQ). This value ranges from 0-100 with lower scores indicating severe aphasia, and higher scores indicating mild aphasia.
In-House Dataset 2 (DS-2): This dataset consists of 18 patients with chronic post-stroke aphasia that were recruited separately from DS-1. While the inclusion criteria and neuroimaging acquisition protocols are the same as for DS-1, the distribution of WAB-AQ scores is different. This provides an ideal scenario to evaluate cross-dataset generalization of LEGNet and the baseline models.
§.§ Performance Characterization and Model Interpretation
Table <ref> reports the predictive performance of each method using repeated 10-fold CV on DS-1. LEGNet achieves the best performance in RMSE, R^2, and correlation coefficient. While it is second-best to BNC-masked in MAE, the difference is not statistically significant. As a baseline, we applied the LEGNet architecture to DS-1 from a random initialization, i.e., without having access to synthetic data. To avoid data leakage, we selected the hyperparameters based on the corresponding modules used in previous studies <cit.>. We note a statistically significant decrease in performance w/o HCP, which underscores the importance of using synthetic data to design and initialize the deep network.
Fig. <ref> (left) illustrates the top two subgraphs identified by LEGNet for the best-performing model during repeated 10-fold CV. The top subgraphs are identified by averaging the subgraph assignment scores for each ROI (ψ(Θ_1 𝐋_j) from Section 2.1) across all 52 patients in DS-1. We use Neurosynth <cit.> to decode the functionality associated with the ROIs assigned to each of the top two subgraphs, as shown in Fig. <ref> (right). We note that LEGNet assigns high scores to regions that are related to the language ability. Intuitively, these regions also influence the prediction of language ability, as described in Section 2.1.
Finally, we tested generalization performance by applying the model that performs best on DS-1 to DS-2 without any fine-tuning (Table <ref>). In terms of R^2, LEGNet maintains the leading position but, along with the baselines, shows a decrease compared to the validation performance in Table <ref>. While BrainGNN and SVR show a moderate decrease, the BrainNetCNN-based models exhibit a sharper drop, indicating reduced robustness on unseen data. The other three metrics follow a similar trend. This decrease is expected due to the slight distribution shift between DS-1 and DS-2. Nevertheless, LEGNet still outperforms all baseline methods, indicating superior generalization ability.
§ CONCLUSION
We have introduced LEGNet, a novel lesion-aware edge-based graph neural network model designed to predict language performance in post-stroke aphasia patients from rs-fMRI connectivity. LEGNet bridges the gap between the lesion boundary defined on nodes and rs-fMRI connectivity defined on edges, while simultaneously using the lesion size and position to guide both the graph convolution and subgraph identification processes. Our synthetic data generation procedure addresses the challenge of limited patient data by simulating lesioned brain networks in healthy subjects. Pretraining on the augmented HCP dataset allows for unbiased hyperparameter selection and a reliable model initialization for fine-tuning on patient data. We demonstrate that LEGNet outperforms state-of-the-art methods in predictive accuracy and generalization ability, thus highlighting its potential as a reliable tool for post-stroke aphasia evaluation.
§.§ Acknowledgements
This work was supported by the National Institutes of Health R01 HD108790 (PI Venkataraman), the National Institutes of Health R01 EB029977 (PI Caffo), the National Institutes of Health R21 CA263804 (PI Venkataraman), the National Institutes of Health P50DC012283 (BU Site PI Kiran) and the National Institutes of Health R01 DC016950 (PI Kiran).
|
http://arxiv.org/abs/2409.03392v1 | 20240905095335 | Anisotropic Resonant Scattering from uranium systems at the U M4 edge | [
"E. Lawrence Bright",
"E. N. Ovchinnikova",
"L. M. Harding",
"D. G. Porter",
"R. Springell",
"V. E. Dmitrienko",
"R. Caciuffo",
"G. H. Lander"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
European Synchrotron Radiation Facility, 71 Avenue des Martyrs, Grenoble 38043, France
M. V. Lomonosov Moscow State University, Leninskie Gory, Moscow 119991, Russia
H. H. Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK
Beamline I16, Diamond Light Source. Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, UK
H. H. Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK
A. V. Shubnikov Institute of Crystallography, FSRC Crystallography and Photonics RAS, Moscow 119333, Russia
Istituto Nazionale di Fisica Nucleare, Via Dodecaneso 33, IT-16146 Genova, Italy
H. H. Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK
§ ABSTRACT
We have conducted a series of scattering experiments at the uranium M_4 absorption edge on low-symmetry uranium compounds (U_2N_3 and U_3O_8) produced as epitaxial films. At weak and forbidden reflections, we find a resonant signal, independent of temperature, with an energy dependence resembling the imaginary part f” of the scattering factor. Theory, using the FDMNES code, shows that these results can be reliably reproduced assuming that they originate from aspherical 5f electron charge distributions around the U nucleus. Such effects arise from the intrinsic anisotropy of the 5f shell and from the mixing of the 5f electrons of uranium with the outer 2p electrons of the anions. The good agreement between theory and experiment includes azimuthal scattering dependencies, as well as polarization states of the scattered photons. The methodology reported here opens the way for a deeper understanding of the role the 5f electrons in the bonding in actinide compounds.
Anisotropic Resonant Scattering from uranium systems at the U M_4 edge
G. H. Lander
September 9, 2024
======================================================================
§ INTRODUCTION
Diffraction experiments as a function of the incident X-ray energy passing through elemental absorption edges were first performed in the early years of X-ray diffraction <cit.>, but became possible on a more expansive scale with the development of synchrotron sources, and were pioneered experimentally by Templeton and Templeton in the 1980s <cit.> and treated theoretically by Dmitrienko in the same period <cit.>. Since that time many experiments have been conducted on different materials, but the vast majority have been at the K edges of transition-metal 3d series of elements. The K edges for these materials span the range from ∼ 5 to 10 keV, which are prime energies for both synchrotron sources and diffraction experiments. Much new information about the materials under investigation can be obtained with suitable theoretical understanding <cit.>. However, the K-edge has two possible transitions, first the dipole (E1) transition 1s → 4p, and, second, the quadrupole (E2) transition 1s → 3d. In the case of 3d metals, these transitions can be almost of equal strength, so difficult to distinguish, although they do occur at slightly different energies. An example of the power of the technique can be seen in the work on TiO_2, in which the transitions could be separated and the resulting p-d hybridization of the electronic states identified <cit.>. The many ways in which resonant scattering can be observed are discussed in a review article by Kokubun and Dmitrienko <cit.>, which also covers work on Ge at the K-edge of 11.1 keV.
Other suitable edges for such experiments are the M_4,5 of the actinides <cit.>. These edges have an energy of 3.55 keV (U M_5) to ∼ 4.5 keV for Cf, and the E1 transitions represent an electron promoted from the occupied 3d shell to the partially filled 5f shell. In particular, we shall focus on the U M_4 edge at 3.726 keV. Diffraction has limitations at these edges, as the wavelength of the incident X-rays is λ = 3.327 Å, which drastically reduces the available reciprocal space that can be examined. The E1 transition (for M_4) is 3d_3/2 → 5f_5/2. This transition is much stronger (as discussed below) than any E2 transitions that involve 3d → 6d, 6g or 7s states, so it is assumed that the effects measured involve the 5f electrons, which are those of major interest in the actinides.
The best known E1 transition in this series allows one to probe the magnetic dipole ordering that occurs in many actinide materials. The first experiments to observe this effect were on a single crystal of UAs in 1989 <cit.>, and the authors comment that the resonant scattering was about six orders of magnitude greater than any non-resonant scattering in the antiferromagnetic state. Many experiments <cit.> on various aspects of magnetic structures have been explored with this resonant scattering at the M_4,5 edges of actinides up to and including Pu materials.
In the general case, the anisotropic resonant scattering (ARS) needs to be formulated as a tensor (hence it is often called anisotropic tensor scattering, ATS), and the theory is reviewed in Ref. <cit.> starting with Eq. 55 and continuing to Eq. 62. In the special case of cylindrical symmetry [i. e. SO(2)] the main interactions and observables may be represented in a simpler form where the cross sections are given in terms of the two components of polarization of the scattering <cit.>, parallel (π) and perpendicular (σ) to the diffraction plane. The results are that the E1 X-ray scattering amplitude contains 3 terms, the first is a non-resonant scalar probing electric charge monopoles. The second term is a rank-1 tensor sensitive to the magnetic dipole moment that, for uranium M edges, gives the large enhancement noted above <cit.>. The third term is a rank-2 tensor even under time reversal and sensitive to electric-quadrupole moments and to any asymmetry intrinsic to the crystal lattice.
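For reference, in this cylindrically symmetric limit the E1 amplitude is commonly written in the compact form below, where ε̂ and ε̂' are the incident and scattered polarization vectors, ẑ is the local quantization axis, and the F_{1M} are dipole oscillator strengths; this is only a schematic reminder of the standard expression, with prefactors omitted.

\[
f_{E1} \;\propto\; (\hat{\varepsilon}^{\,\prime *}\!\cdot\hat{\varepsilon})\,F^{(0)}
\;-\; i\,(\hat{\varepsilon}^{\,\prime *}\!\times\hat{\varepsilon})\cdot\hat{z}\;F^{(1)}
\;+\; (\hat{\varepsilon}^{\,\prime *}\!\cdot\hat{z})(\hat{\varepsilon}\cdot\hat{z})\,F^{(2)},
\qquad
F^{(0)}=F_{11}+F_{1\bar{1}},\;\; F^{(1)}=F_{11}-F_{1\bar{1}},\;\; F^{(2)}=2F_{10}-F_{11}-F_{1\bar{1}},
\]

with the three terms corresponding, respectively, to the scalar charge, magnetic dipole (E1-ℱ^[1]) and rank-2 tensor (E1-ℱ^[2]) contributions discussed in the text.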
Many experiments <cit.> have measured the magnetic dipole scattering, which we can conveniently call E1-ℱ^[1]. Such a magnetic term has the characteristic energy dependence of the imaginary part f” of the X-ray form factor and is proportional to the component of the dipole magnetic moment perpendicular to the plane defined by the incident and scattered polarization vectors. The third term in the E1 scattering amplitude, which we call E1-ℱ^[2], has been observed in UPd_3 <cit.>, NpO_2 <cit.>, UO_2 <cit.>, and in their solid solutions <cit.>. These results refer to the observation of charge quadrupoles, which cannot be measured by neutron diffraction, and have been of considerable interest <cit.>. They probably exist in more f-electron materials than presently realized <cit.>. The energy dependence of the scattering in the E1-ℱ^[1] and E1-ℱ^[2] processes is different <cit.> and can be calculated beyond the fast collision approximation <cit.>. For instance, the intensity at the M_4 edge in the σ-σ channel for the UO_2 (1 1 2) and NpO_2 (0 0 3) reflections, due to the E1-ℱ^[2] term, is centered about 2 eV below the position of the magnetic dipole resonance and has an approximate Lorentzian squared shape, contrary to the E1-ℱ^[1] signal that usually exhibits a Lorentzian line shape. However, it must be noted that when the multiplet splitting of the intermediate state can be neglected, an average energy value can be used in the denominator of the E1 scattering amplitude, and the resonant factor can be replaced by a Lorentzian-shaped energy profile.
Similarly, the polarization dependencies are different for E1-ℱ^[1] and E1-ℱ^[2]. In the former the incident σ polarization is all rotated to π radiation, whereas in the latter process both σ-σ, and σ-π polarizations exist. The azimuth angle dependence of the resonant Bragg peaks (the variation of the peak intensity while the sample is rotated about the scattering vector) of both cross sections provide information on the mutual orientations of the aspherical electronic clouds in the crystallographic unit cell. As well as electric quadrupoles, the E1-ℱ^[2] scattering also occurs when the magnetic structure has at least two components that are non-collinear, i. e. either 2k, or 3k magnetic configurations <cit.>.
We have discussed the more conventional resonant scattering as performed in the actinides in some detail in order to make a contrast with the results reported in the present paper. Our first observation was reported briefly in 2019 using epitaxial films of the cubic bcc U_2N_3 <cit.>. We shall discuss reflections from this material in more detail later, but we show in Fig. <ref> the energy dependence for various reflections measured with this material.
The shape of the energy curves closely follows that of the E1 resonance anticipated from the f” term in the cross section. The position and shape of the peak in energy strongly suggest this is an E1 process. The energy dependence obtained from theory (see Sec. <ref>) is compared with experimental results in Fig. <ref>. The calculated curves are clearly narrower than the experimental ones, but this is a question of experimental resolution. The overall agreement is excellent.
U_2N_3 also orders antiferromagnetically (AF) at T_N ∼ 75 K. Evidence for this is reported in Ref. <cit.>. The new AF reflections appear at non-bcc reciprocal lattice points, i. e. at reflections with h + k + ℓ = odd, which indicates that in the AF state the dipole moments related by the bcc operator have oppositely directed moments. The exact AF configuration is unknown, but it is important to stress that the effects reported in the present paper are unrelated to the AF order. First, the effects have been observed on purely charge-related reflections, i. e. h + k + ℓ = even, and, second, no temperature dependence is found for any of the effects discussed here.
There is ample evidence, especially in the study on U_2N_3, that the effects are due to anisotropic 5f electron charge distributions. These become evident at absent or weak Bragg reflections, where the spherical charge distribution due to the radon core (86 electrons) cancels because of the out-of-phase contributions from two different uranium atoms. In the studies reported below we have such conditions in the unit cell. The effects we observe can then be seen when the spherical core distributions are subtracted, and the remaining part represents the difference between the anisotropic charge distributions of the 5f states. The fact that these have a maximum value at the M_4 absorption edge is simply a consequence of the maximum of the f” component at this energy. They unambiguously assign the effects to aspherical 5f distributions, possibly associated with covalency, presumably (in the case of U_2N_3) between the U 5f states and the nitrogen 2p states.
We show in Fig. <ref> the structures of the two uranium compounds we have examined. In both cases there are two independent sites for the U atoms, and it is the differences in the charge distributions between these two sites that gives rise to the anisotropic resonant scattering.
These effects cannot be observed in reflections that are absent due to global symmetry constraints (e. g. at positions forbidden by fcc or bcc symmetry operators), but can be present at forbidden reflections <cit.> due to glide-plane operators. They can also be present at weak reflections, where contributions from the uranium atoms in the unit cell are out of phase. The effect cannot be observed in high-symmetry structures such as UO_2 (fcc CaF_2 structure) or UN (fcc NaCl structure). Even in the well-known compound URu_2Si_2 with the I4/mmm (SG no. 139) tetragonal structure, the effect will not be present, as there is only one U atom at the origin of the unit cell.
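Schematically, and only as an illustration of the argument above (not the full FDMNES treatment), the effect can be written with a tensor structure factor, where the resonant atomic factor of uranium site j is split into an isotropic part f_j^0 and an anisotropic rank-2 part Δf_{j,αβ}(E):

\[
F_{\alpha\beta}(\mathbf{Q}) \;=\; \sum_{j}\Big[f^{0}_{j}\,\delta_{\alpha\beta} + \Delta f_{j,\alpha\beta}(E)\Big]\,e^{i\mathbf{Q}\cdot\mathbf{r}_{j}},
\qquad
A(\mathbf{Q}) \;\propto\; \varepsilon^{\,\prime *}_{\alpha}\,F_{\alpha\beta}(\mathbf{Q})\,\varepsilon_{\beta}.
\]

At a weak or glide-plane-forbidden reflection the isotropic sums over the uranium sites largely cancel, so the surviving amplitude is governed by the phase-weighted difference of the anisotropic 5f tensors of the two inequivalent sites; its resonant enhancement at the M_4 edge simply tracks the maximum of f”(E).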
§ EXPERIMENTAL DETAILS
Measurements were performed on epitaxial thin films fabricated at the FaRMS facility at the University of Bristol, UK <cit.>, which has a dedicated actinide DC magnetron sputtering system.
Epitaxial thin films provide a series of advantages for the measurements performed in this study. Firstly, they allow easy fabrication and stabilization of single crystals such as U_2N_3, which has not previously been produced in bulk. Secondly, the low volume of radioactive material allows for easy handling and transportation of the samples. Thirdly, they facilitate a major aim of the experiments, namely obtaining azimuthal scans, where the sample is rotated about the scattering vector and the intensity is determined as a function of Ψ, the so-called azimuthal angle. The major experimental difficulty is associated with the large absorption of X-ray beams of this tender energy incident on a sample containing uranium. As given in Ref. <cit.> the attenuation length (1/e) of such beams at the M_4 edge into uranium metal is ∼ 400 nm, somewhat longer for an oxide with lower density. A large, flat surface (5 × 5 mm^2) with a thickness of ∼ 200 nm gives a uniform scattering volume as it is rotated about the scattering vector to perform the azimuthal scan, provided also that the angle to the specular direction of the film is less than ∼ 20 deg. Whereas qualitative results are relatively easy to obtain, quantitative results for the azimuthal intensity that can be compared to theory are much more difficult to extract.
Films of U_2N_3 and U_3O_8 were deposited by sputtering in N_2 and O_2 partial pressures, respectively, as described previously <cit.>. To avoid oxidation, all films were covered by a polycrystalline cap (∼ 50 nm) of Nb.
U_2N_3 was deposited on (001) oriented CaF_2, producing U_2N_3 with the principal axes aligned in the specular direction. Due to the symmetry of the U_2N_3 bixbyite structure, with non-equivalent a, b, and c axes and a [111] screw axis, this effectively produces two domains. These domains will have completely overlapping Bragg reflections. For convenience, we will describe the film with the [001] axis specular, with the two domains defined by having either the [100] or [010] axis along the CaF_2 [100] direction.
U_3O_8 was also deposited on a (001) CaF_2 substrate, producing a film with 8 domains with [131] specular. Non-specular reflections of domains do not overlap, making them easy to distinguish.
ARS experiments were performed using the I16 diffractometer <cit.> at the Diamond Synchrotron (UK). The energy of the incident X-ray beam has been tuned to the uranium M_4 edge at 3.726 keV.
All the results in this paper refer to the samples at room temperature. In Ref. <cit.> tests were done on a forbidden reflection of U_2N_3 as a function of temperature, and no T-dependence was found. We have assumed that these effects are associated with bonding in the material, and thus no T-dependence is expected.
It is also important to determine whether the polarization of the scattered radiation is unrotated, i. e. σ-σ or rotated, i. e. σ-π, which is measured in standard fashion by using an Au (111) crystal as an analyzer before the detector. Since the results reported here are of weak intensities, and the use of an analyzer reduces the observed signal, we have only performed limited polarization scans.
A major further difficulty is that there are domains in all of the films. These have been studied and characterized at Bristol before the synchrotron experiments. Multiple scattering is also a possibility.
§ THEORY
Our studies of resonant X-ray scattering in U_2N_3 and U_3O_8 at the incident radiation energy close to the M_4 absorption edge demonstrated strong anisotropy of resonant atomic factors of uranium corresponding to the E1 transitions between the 3d_3/2 and virtual 5f states. The study of the spectral shape of both the forbidden reflection (105), and several weak allowed reflections in U_2N_3 and U_3O_8, has shown that their spectral shape has the form of a peak close to the M_4 absorption edge, implying that the resonant contribution to the atomic factor is sufficiently strong in comparison to the charge scattering, in contrast to the situation at the K-edge <cit.>. The pronounced azimuthal dependence confirms this, as well as the existence of a scattering channel with a change in polarization. In both studied crystals, uranium atoms occupy two crystal sites with different local symmetry, hence the spectral and angular properties of reflections are determined by the interference of the waves scattered by non-equivalent atoms and by the electronic density. This makes the azimuthal dependence of reflections dependent on energy. Such a phenomenon was also observed at the Fe K-edge <cit.>.
In U_2N_3 the local symmetry of the U_1 atom is 3̄, hence the non-magnetic dipole resonant atomic factor is uniaxial with two independent components, whereas the atomic factor of the U_2 atom with 2 local symmetry is not uniaxial and possesses 3 independent components. In U_3O_8 the atomic tensor factors of both U_1 and U_2 are not uniaxial, but their symmetry differs from one other. All tensor components have their specific spectral shapes, providing a variety of spectral and azimuthal properties of resonant reflections, which are determined by their combinations.
We will not describe in detail all the features of the tensor factors of uranium, but we will demonstrate some statements using the example of calculations performed with the FDMNES program <cit.>. It allows us to make a variety of calculations, including calculating energy spectra and azimuthal dependences of reflections, and makes it possible to vary many physical parameters that describe the system under study for comparison with experimental data.
Fig. <ref> shows calculations of the azimuthal dependence of various reflections in U_2N_3. There is, of course, also a dependence of the intensity on the energy displacement from the edge, but the azimuthal symmetry is largely independent of this factor.
The large variety of shapes of the azimuthal dependences is due to the difference in the spectral shape of the components of the tensor atomic factor for each uranium atom, which contributes to individual reflections, as well as to the type of interference of waves scattered by atoms of positions U_1 and U_2. The (004) reflection, which is the strongest in the structure, has, of course, no azimuthal dependence and is all σ-σ.
Fig. <ref> gives further details of the (015) reflection. The upper panel shows the azimuthal dependence of the square of the modulus of the structural amplitude for the σ-σ and σ-π channels, together with their sum. It is worth noting here that the σ-σ intensity is zero at the azimuth where the total intensity has a maximum, so there should be a strong σ-π contribution at this point, which was found experimentally. The lower panel of Fig. <ref> shows the same quantity taking into account the contribution only from atoms of position U_1 (magenta line) and only atoms of position U_2 (orange line), as well as when taking into account both positions of uranium (cyan line). Note that the cyan curve is not the sum of the other two, since it is the square of the modulus of the sum of the scattering amplitudes from the two uranium positions, taking into account the phase difference.
The situation is even more complicated for non-forbidden reflections, because it is necessary to take into account the charge scattering, which participates in the interference of the waves. There is a good chance to separate the resonant and charge scattering using polarization analysis, because the latter forbids the σ-π scattering channel <cit.>. Calculations demonstrate a strong difference between the azimuthal dependences of the σ-π and σ-σ scattering. In particular, strong σ-π scattering is expected for the forbidden (103) and (105) reflections, and this has been confirmed experimentally (Fig. <ref>), but for allowed reflections σ-σ is stronger than σ-π.
§ RESULTS AND DISCUSSION
§.§ U_2N_3
This material has the body-centered cubic bixbyite structure common to materials such as Mn_2O_3, which has an inversion center at (000); the space group is no. 206, Ia3̄. Because the film (200 nm) is deposited on a CaF_2 substrate, there is some small strain (1.9%): the c axis is 10.80 Å in the growth direction, and the basal plane axes are 10.60 Å. We have performed DFT simulations to see whether this small strain, which results in an orthorhombic structure, changes significantly the symmetry conditions of the uranium atoms, but they show that the effects are very small. We therefore keep the cubic bcc structure as a good approximation to the symmetry in the film. The orientation is [001] vertical, with a and b in plane.
As discussed in Sec. <ref>, U_2N_3 also orders magnetically at ∼ 75 K, see Ref. <cit.>. The AF order gives rise to new reflections at positions h + k + ℓ = odd, whereas all measurements reported here have been made at true bcc positions, i. e. h + k + ℓ = even, and are at room temperature.
There are two types of uranium in the unit cell: U_1 sits at 8b position, point symmetry (.3̄.) with coordinates ( ¼ ¼ ¼) and this atom is at an inversion center. The second uranium U_2 sits at position 24d with coordinates (x 0 ¼) with x ∼ - 0.02 and there is no inversion center at this site, the point symmetry is (2 . .).
There is no 4-fold symmetry element in this space group. This implies that the [100] and [010] axes are different. In turn, this implies that there are two domains in the film with an [001] axis as the growth direction. To compare theory and experiment, we need to average over the two domains. In practice, a theoretical curve for the (002) with a repeat of 180 deg in the azimuthal angle will result in two patterns displaced by 90 deg, so the overall repeat appears to be 90 deg in the azimuthal angle. For other reflections the domain averaging is more complex.
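The domain averaging can be illustrated with a short numerical sketch. The single-domain curve below is hypothetical (in a real comparison it would be taken from an FDMNES calculation), and equal domain populations are assumed; averaging the curve with its 90-deg-rotated copy produces the apparent 90-deg repeat described above.

import numpy as np

# Azimuthal grid (deg) and a hypothetical single-domain curve with a 180-deg repeat;
# in practice this array would come from an FDMNES calculation.
psi = np.arange(0.0, 360.0, 0.5)
single = np.abs(np.cos(np.radians(psi)))**3

# The two in-plane domains are related by a 90-deg rotation about [001].
# Assuming equal domain populations, the observable curve is their average.
shift = int(90.0 / 0.5)                       # 90 deg expressed in grid steps
both = 0.5 * (single + np.roll(single, shift))

# The averaged curve repeats every 90 deg even though each domain repeats every 180 deg.
assert np.allclose(both, np.roll(both, shift))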
Results for azimuthal scans for the (002) allowed (but weak) reflection are shown in Fig. <ref>. Polarization scans showed the majority scattering was in the σ-σ (unrotated) channel, but since the reflection is also allowed this is not surprising. We did find a small signal in σ-π, consistent with theory.
We now turn to the forbidden reflections (105) and (015). We have already shown in Fig. <ref> the energy dependence of the intensity found at (013), which like the (015) is forbidden in this space group due to the presence of a glide plane. For azimuthal scattering we have chosen the (105) as the angle to the specular (11.3 deg) is smaller than for the (103). We also have similar theoretical curves for the (105) and (015), see Fig. <ref> (lower panel).
Recall that the domains will result in a summing of these two reflections before we can compare experiment with theory.
Figure <ref> shows the experimental results compared to theory for the (105) + (015) domains.
Experimentally we find a minimum intensity at Ψ = 0 (when the [100] is along the beam direction), but it is not zero. Despite these scans all being η scans (i. e. the film is rocked through the reciprocal lattice point) some small intensity (∼ 0.3 on scale of Fig. <ref>) remains and this we ascribe to background multiple scattering. Notice here that the only position where we have found appreciable rotated (i. e. σ-π) scattering is at the position of the maximum. This is predicted by theory (see Fig. <ref>, lower panel) and the agreement with experimental results is clearly acceptable. A further test was made by rotating the sample 90 deg and the minimum in scattering rotated by the same angle.
Approximately, the intensity of the ARS scattering is between two and three orders of magnitude lower than the strong Bragg reflections from the structure, which is also in agreement with theory.
§.§ U_3O_8
U_3O_8 is an important product of the oxidation of UO_2. The structure of the α-form is orthorhombic. Although there is a tendency to give the space group as no. 38 with symmetry Amm2, this loses the connection to the hexagonal high-temperature form with Space Group no. 189 and P6̄2m symmetry. We have therefore found it easier to retain this connection by defining the orthorhombic form with the symmetry C2mm and lattice parameters (at RT) of a = 6.715 Å, b = 11.96 Å, and c = 4.15 Å. When this converts to the hexagonal form, the c axis remains the same, and there is simply a shift of the atoms in the ab plane. This is consistent with the early work on the crystal structures reported by Loopstra <cit.>. More recent work has tended to use the description in terms of the Amm2 notation <cit.>. At 25 K this material orders antiferromagnetically <cit.>, but we have examined the thin film sample only at room temperature.
The symmetry of this system is low, there are two different U positions in the unit cell. U_1 is at the position (x 0 0), with x = 0.962 on a 2-fold axis. This atom is supposed to have a U^6+ valence state, so there should be no 5f electrons associated with U_1, as the 5f shell is empty. However, transitions from the core 3d states into the empty 5f shell are still possible. This interpretation is consistent with a recent study with resonant inelastic X-ray scattering <cit.>.
The 4 U_2 atoms, with valency U^5+, i. e. 5f^1, are at positions (x y 0) with x = 0, y = 0.324, and they sit on a mirror plane. The space group is non-centrosymmetric, and neither U atom is at a position of inversion symmetry. There are no forbidden reflections in this system, except those with h + k = odd, which are absent due to the C-face centering. No extra scattering was found on reflections with h + k = odd.
It proved difficult to measure azimuthal dependencies, because of the need to make large absorption corrections as a function of Ψ. We established that the weak (241) reflection, with an angle of 14.5 deg to the specular, has the azimuthal dependence shown in Fig. <ref>, with a repeat of 180 deg. The allowed (241) reflection has a calculated intensity 3.0% of the strongest reflection (001). So, the forbidden intensity is ∼ 1% of the strongest reflection.
Extra energy dependent contributions were found for a number of other weak reflections, but their azimuthal dependence was not readily established. Only a very small contribution (< 5%) was found in the σ-π channel, so the σ-σ dominates. There are a number of domains in this system, but they do not align with the principal domain we have chosen, so there is no overlap in comparing with theory. The latter gives a repeat of 180 deg with only small σ-π cross section, which is consistent with the experiment.
§ CONCLUSIONS
The experiments and theory presented here show clearly that the uranium atoms in the investigated structures exhibit aspherical 5f charge distributions.
The results obtained show that in U_2N_3 the contribution to resonant scattering from atoms U_1 is significantly less than from atoms U_2, but it cannot be neglected. This conclusion was deduced from intensity considerations in Ref. <cit.>, but lacked quantitative evidence from azimuthal scans. The present theory confirms that this is the case. The tensor atomic factor of atoms U_1 has uniaxial symmetry, which is not the case for U_2.
In the case of U_3O_8 both uranium sites contribute to the ARS of the reflections. The U_1 site in this material is U^6+, so it has no occupied 5f states; however, the ARS cross section depends on the status and asphericity of the unoccupied 5f states.
An interesting paper by Lovesey <cit.> has suggested that we may be observing uranium octupoles in U_2N_3, which would require an E2 transition at the M_4 edge. Given our discussion in the Appendix about E2 transitions, together with the strong evidence for a dipole (E1) transition in the energy dependence (shown in Fig. <ref>), we believe this interpretation <cit.> is unlikely, and too small to be observed even if present.
Based on a rough estimate of the ARS diffraction intensity from these two systems, the ratio to the strongest reflections from the crystal structures is between 0.1 and 1%. This is not a particularly difficult limit with synchrotron sources, although measuring accurately the azimuthal dependencies is more challenging due to multiple scattering, as well as the large absorption at the resonant energy.
Quite possibly, many more such systems can be found and measured to give further evidence for these effects. The bulk of the data should be able to be modelled to examine the orbital occupation of the 5f states around the U nuclei; thus, giving a more quantitative understanding of the covalency in these materials. For example, the crystal truncation rod experiments of Stubbs et al on UO_2 <cit.> could be combined by measuring at the U M_4 edge with dissolution experiments (Springell et al. <cit.>) to search for complexes involving uranyl-based (U^6+) deposits on the surface. Ab initio calculations, such as those by Arts et al. <cit.> could then model the molecular structure to understand better what happens at the atomic level during dissolution.
The presence of aspherical 5f orbitals largely depends on the symmetry of the lattice and covalent interactions. However, our current experiment does not allow us to determine the specific orbitals involved in covalency or the extent of it. Advanced ab initio electronic structure calculations, such as those combining density functional theory and its time-dependent extension, or dynamical mean field theory, are required <cit.>. Nevertheless, it is only by gathering more experimental information, as done in the current work by elastic resonant X-ray scattering, or by spectroscopy techniques <cit.>, that a precise model of the covalency in actinide bonding can be established.
§ ACKNOWLEDGMENT
We would like to thank Steve Collins, Alessandro Bombardi, and Gerrit van der Laan for discussions. This work was carried out with the support of Diamond Light Source, instrument I16 (proposals MM27807, MM34651).
These experiments were started in 2018, with a report of the first observation of ARS from U_2N_3 published in 2019 <cit.>. More experiments were then performed in 2021, and the theoretical calculations were completed by the end of 2021. A further experiment to test the theoretical predictions was performed in 2023.
§ APPENDIX. DISCUSSION OF POSSIBLE HIGHER-ORDER TRANSITIONS IN RXES AT THE URANIUM M EDGES.
The E2 contributions at the M_4,5 edges of actinides indirectly probe 5f-shell higher order multipoles (up to rank 4), through 3d → (6d, 6g, 7s) transitions, or intermultiplet processes. Up to now, there has been no definitive identification of the E2 term at the actinide M edges due to the involvement of intermediate states that are delocalized orbitals, resulting in weak overlap integrals and intensities much weaker than the E1 transition. We should emphasize here that the absence of an observable E2 transition at the actinide M_4,5 edges contrasts with the transitions at the K edges of transition metals, where the E1 and E2 transitions are of similar magnitude.
This is because the E2 transition at the K-edge connects two relatively localized orbitals (1s → 3d) and is comparable in intensity, or even stronger, than the E1 transition, which connects the 1s state with the more delocalized 4s and 4p states.
A quantitative estimate can be made by referencing a paper on URu_2Si_2<cit.> and examining the terms included in the cross sections of such transitions. Using the formulae given as Equations (32) and (33) in Ref. <cit.>, we can estimate the relative strength of the (M_4, E1) transition compared to that of the (M_4, E2) transition, i. e.
I(M_4, E2)/I(M_4, E1)∝( ⟨ 3d|r^2|6d, 6g, 7s ⟩/⟨ 3d|r|5f ⟩)^4
where the wave-vector (k) and core-hole lifetimes (Γ) given in Ref. <cit.>, Eq. (32, 33), drop out, as we are examining the same M edges in both cases. The 4th power emerges because the process involves a photon in/photon out, and the amplitude encompasses the ground and intermediate state, followed by the reverse process. This results in the square of the matrix element. For intensity, this requires the 4th power. We use the values
⟨ 3d | r | 5f ⟩ = - 0.04452, ⟨ 3d | r^2 | 6d ⟩ = 0.00147, and
⟨ 3d | r^2 | 7s ⟩ = -0.00047 <cit.>, so the ratio for the 6d transition is down by a factor of 30, and that for the 7s by a factor of 95. It is the 4th power of these numbers that results in factors of less than 10^-5, so E2 transitions at the M_4,5 edges will be difficult to observe, consistent with the fact that none have been observed so far. It is the relatively large value of the overlap between the 3d and 5f wavefunctions in the actinides that gives rise to the large E1 term, and the enhancements reported in such measurements <cit.>.
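The arithmetic above can be checked directly; the only inputs are the radial matrix elements quoted above, and the intensity ratio is taken to scale as the 4th power of the matrix-element ratio, as in the expression given earlier.

# Radial matrix elements as quoted above.
r_5f  = -0.04452    # <3d| r   |5f>
r2_6d =  0.00147    # <3d| r^2 |6d>
r2_7s = -0.00047    # <3d| r^2 |7s>

for label, elem in [("6d", r2_6d), ("7s", r2_7s)]:
    ratio = abs(r_5f / elem)                        # ~30 for 6d, ~95 for 7s
    print(label, round(ratio), f"{ratio**-4:.1e}")  # intensity ratios well below 1e-5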
|
http://arxiv.org/abs/2409.02515v1 | 20240904082524 | Nonequilibrium dynamics of coupled oscillators under the shear-velocity boundary condition | [
"Hidetsugu Sakaguchi"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"nlin.CD"
] |
Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Kasuga, Fukuoka 816-8580, Japan
§ ABSTRACT
Deterministic and stochastic coupled oscillators with inertia are studied on the rectangular lattice under the shear-velocity boundary condition. Our coupled oscillator model exhibits various nontrivial phenomena and there are various relationships with wide research areas such as the coupled limit-cycle oscillators, the dislocation theory, a block-spring model of earthquakes, and the nonequilibrium molecular dynamics. We show numerically several unique nonequilibrium properties of the coupled oscillators. We find that the spatial profiles of the average value and variance of the velocity become non-uniform when the dissipation rate is large. The probability distribution of the velocity sometimes deviates from the Gaussian distribution. The time evolution of kinetic energy becomes intermittent when the shear rate is small and the temperature is small but not zero. The intermittent jumps of the kinetic energy cause a long tail in the velocity distribution.
Nonequilibrium dynamics of coupled oscillators under the shear-velocity boundary condition
Hidetsugu Sakaguchi
September 9, 2024
==========================================================================================
§ INTRODUCTION
Coupled phase oscillators called the Kuramoto model have been intensively studied by many authors as a simple solvable model of collective synchronization <cit.>. There are many generalized models for the Kuramoto model. One model includes the inertia term or the second derivative of the phase variable <cit.>. The coupled phase oscillators on square or cubic lattices was called oscillator lattices and the phase transition via the collective synchronization was studied in the finite-dimensional systems <cit.>.
In the previous paper, we demonstrated that the vortex motion in the oscillator lattices is closely related to the dislocation motion in solids <cit.>. The dislocation motion is important to understand the mechanical properties of the solid under the shear stress <cit.>.
The correspondence between the coupled phase oscillators and lattice dynamics is as follows. The phase variable in the oscillator lattice with inertia is interpreted as the one-dimensional displacement in the z direction in a three-dimensional crystal, and the displacement is assumed to be uniform in the z direction. That is, the displacement z_i,j,k at the (i,j,k) site takes the same value for all k's, and is expressed as z_i,j. The vortex in the oscillator lattice corresponds to the screw dislocation in the crystal. The shear stress on the boundaries corresponds to the external force at the boundaries of the oscillator lattices. We found that the vortex begins to move if the shear stress is beyond a critical force, which corresponds to the Peierls force in the dislocation theory. The spontaneous generation of vortices and the complex motion with nonzero frequencies correspond to the slip motion or the plastic flow in solids. We further studied the oscillator lattices under external noises, and found that the vortex motion occurs under the critical force owing to the fluctuation effect, which corresponds to the dislocation motion at a finite temperature <cit.>.
The coupled oscillators under the shear boundary conditions are also related to the mechanics of earthquakes and faulting <cit.>. The block-spring models of earthquakes have been studied by many authors <cit.>. In the block-spring model, each block has a mass and is coupled with the neighboring blocks through a linear spring. The shear force is applied from the top plate moving with a small velocity. A large slip motion in the block-spring model corresponds to an earthquake.
The deterministic chaos and fluctuation in the oscillator lattice under the shear stress is related to the nonequilibrium statistical mechanics. The nonequilibrium statistical mechanics has been numerically studied with the molecular dynamics by many authors <cit.>. Various nonequilibrium properties such as the shear viscosity <cit.> and heat conductivity <cit.> have been investigated by numerical simulations of Newton's equation of motion of many particles. The relationship between the reversible equations and irreversible behavior was discussed by several authors <cit.>.
In the molecular dynamics of fluids, interacting pairs of molecules change with time. In our oscillator lattice models, each oscillator interacts only with the nearest neighbors, but a large displacement occurs by the phase slip between the neighboring oscillators. In this paper, we will discuss several nonequilibrium properties and velocity distributions in the coupled oscillators under the shear velocity boundary conditions.
§ DETERMINISTIC COUPLED OSCILLATORS UNDER THE SHEAR-VELOCITY BOUNDARY CONDITION
In this section, we consider deterministic coupled oscillators with inertia on the rectangular lattice:
d^2 z_i,j/dt^2=K_x∑_i^'=i-1,i+1(z_i^',j-z_i,j)+K_y∑_j^'=j-1,j+1sin (z_i,j^'-z_i,j)-d dz_i,j/dt,
where z is the displacement in the z direction at the lattice point of (i,j), d is the coefficient of resistance force in proportion to the velocity, and (i^',j^')'s are the nearest neighbor sites of the (i,j) site on the rectangular lattice of L_x× L_y. The linear and sinusoidal couplings are assumed respectively in the x- and y-directions. The coupling strength K_x and K_y are set to be 1 in this paper.
Periodic boundary conditions are imposed at i=1 and i=L_x. In the y direction, the boundary conditions corresponding to the shear velocity are imposed, that is, z_i,j=0 at j=0 and z_i,j=v_0 t at j=L_y+1.
The numerical simulation was performed with the Runge-Kutta method with timestep Δ t=0.005. For d=0 and v_0=0, the coupled oscillator system has the Hamiltonian:
H=∑_i,j1/2 (dz_i,j/dt )^2+1/2∑_i,j∑_i^'=i-1,i+1(z_i^',j-z_i,j)^2+∑_i,j∑_j^'=j-1,j+1{1-cos(z_i,j^'-z_i,j)}.
For L_x=L_y=1, a single oscillator obeys the equation
dz/dt = v,
dv/dt = sin(v_0t-z)-sin z-dv.
Equation (3) is a conservative (dissipative) system at d=0 (d≠ 0), since the phase space volume decays as e^-dt. Figure 1(a) shows the Poincaré map (stroboscopic mapping) in the phase space of ( mod(z,2π),v) at t=2π n/v_0 (n: integer) for v_0=0.1 and d=0. The initial condition is z(0)=0 and v(0)=0.05001. Chaotic dynamics is observed owing to the periodic forcing term sin(v_0t-z). Figure 1(b) shows the velocity distribution of v. The average velocity is 0.05. The velocity is confined between -2.78 and 2.88.
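A minimal sketch of this stroboscopic map can be written directly from Eq. (3). The RK4 integrator with Δt=0.005, the parameters v_0=0.1 and d=0, and the initial condition are those quoted above; the number of drive periods and the strobe-to-nearest-step approximation are our own choices.

import numpy as np

def rhs(t, y, v0=0.1, d=0.0):
    z, v = y
    return np.array([v, np.sin(v0 * t - z) - np.sin(z) - d * v])

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

v0, dt = 0.1, 0.005
period = 2 * np.pi / v0                   # strobe times t = 2*pi*n/v0
steps = int(round(period / dt))           # strobe to the nearest integration step
y, t = np.array([0.0, 0.05001]), 0.0      # initial condition of Fig. 1
strobe = []
for n in range(500):                      # number of drive periods (increase for a denser map)
    for _ in range(steps):
        y = rk4_step(t, y, dt)
        t += dt
    strobe.append((np.mod(y[0], 2 * np.pi), y[1]))
strobe = np.array(strobe)                 # points of the Poincare (stroboscopic) map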
For L_x=300 and L_y=1, the one-dimensional coupled oscillators obey
dz_i/dt = v_i,
dv_i/dt = K{sin(v_0t-z_i)-sin z_i}-dv_i+K(z_i+1-2z_i+z_i-1).
Figure 2(a) shows the time evolution of the average value of the kinetic energy: E=(1/L_x)∑_i=1^L_x(1/2)v_i^2 at v_0=0.01, 0.1, and 1 for d=0. The kinetic energy increases with time. The time evolution can be approximated as a power law E∝ t^α with α≃ 0.27, 0.35, and 0.41 respectively for v_0=0.01, 0.1, and 1. The increase of kinetic energy or heating occurs under the shear-velocity boundary conditions. The heating is not observed at L_x=1 as shown in Fig. 1. We have checked that the heating occurs for L_x≥ 4 at v_0=0.1. The critical size for the heating depends on v_0; that is, the critical size L_xc was evaluated as 3, 8, and 13 respectively for v_0=1, 0.05, and 0.03. The critical size decreases as v_0 becomes larger.
Although the heating occurs due to the nonequilibrium boundary conditions, the mechanism is not clear. Figure 2(b) shows the probability distributions of v at v_0=0.1 near t=50000 and 100000, The probability distributions are well approximated at the Gaussian distribution, and the variance increases with time. The Gaussian distribution of the velocity corresponds to the Maxwell distribution of the ideal gas in the statistical mechanics.
Figure 3(a) shows the time evolution of E at v_0=0.1 and d=0.001 and 0.05. The increase of the kinetic energy stops owing to the dissipation term -dv_i with d=0.001.
The kinetic energy is fluctuating at d=0.001 and 0.05. The average value of E and the fluctuation amplitude are of the same order at d=0.05.
Figure 3(b) shows the relationship between d and the average kinetic energy ⟨ E⟩ at v_0=0.1. The dashed line is ⟨ E⟩∝ d^-α with α=0.35. The numerical result of the time evolution of E as E∝ t^α at d=0 suggests that E satisfies an approximate equation dE/dt=β E^1-1/α at d=0.
By adding the dissipation term of -2dE, the time evolution of the kinetic energy E is assumed to be
dE/dt=β E^1-1/α-2dE.
The stationary value of E is expressed with
E∝ d^-α.
Figure 3(b) shows that the power law ⟨ E⟩∝ d^-α is satisfied for d<0.001. ⟨ E⟩ deviates from the power law for d>0.001.
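This phenomenological balance can be checked numerically. In the sketch below α is taken from the fit above, while β, the initial energy, and the integration parameters are illustrative choices.

import numpy as np

def stationary_E(d, alpha=0.35, beta=1.0, E0=1.0, dt=0.1, n_steps=200000):
    # Integrate dE/dt = beta*E**(1 - 1/alpha) - 2*d*E with forward Euler
    # and return the late-time (stationary) value.
    E = E0
    for _ in range(n_steps):
        E += dt * (beta * E ** (1.0 - 1.0 / alpha) - 2.0 * d * E)
    return E

ds = np.array([1e-4, 3e-4, 1e-3])
Es = np.array([stationary_E(d) for d in ds])
slope = np.polyfit(np.log(ds), np.log(Es), 1)[0]
print(slope)     # close to -alpha, i.e. E ~ d**(-alpha)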
Figure 3(c) shows the probability distributions P(v) at d=0.001 and 0.05.
The dashed lines are P(v)=1/√(2πσ^2)exp{-(v-v_0/2)^2/(2σ^2)} with σ^2=2.35 and P(v)=1/(2γ)exp(-|v-v_0/2|/γ) with γ=0.39. (The dashed line P(v)=1/√(2πσ^2)exp{-(v-v_0/2)^2/(2σ^2)} is hardly seen because it overlaps with the numerically obtained P(v).) For small d, P(v) is well approximated by the Gaussian distribution; for larger d, however, P(v) deviates from the Gaussian distribution and is closer to the exponential distribution. Figure 3(d) shows the numerically obtained kurtosis q=⟨(v-v_0)^4⟩/(⟨ (v-v_0)^2⟩)^2-3 as a function of d. The kurtosis is 0 for the Gaussian distribution, so q is a quantity expressing the deviation from the Gaussian distribution. Figure 3(d) shows that the deviation from the Gaussian distribution appears for d>0.002.
The deviation of P(v) from the Gaussian distribution at d=0.05 might be due to the large fluctuation of the kinetic energy compared to the average value, although the mechanism is not well understood.
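As a reference for the q values in Fig. 3(d), the excess-kurtosis estimator can be evaluated on synthetic samples drawn from the two limiting shapes quoted above (Gaussian with σ^2=2.35 and two-sided exponential with γ=0.39); the sample size and the centering on v_0/2 are our choices.

import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(v, center):
    dv = v - center
    return np.mean(dv**4) / np.mean(dv**2)**2 - 3.0

v_half = 0.05                                        # v_0/2 for v_0 = 0.1
gauss   = rng.normal(v_half, np.sqrt(2.35), 10**6)   # Gaussian limit: q ~ 0
laplace = rng.laplace(v_half, 0.39, 10**6)           # two-sided exponential limit: q ~ 3
print(excess_kurtosis(gauss, v_half), excess_kurtosis(laplace, v_half))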
Next, we show numerical results for a rectangular system of L_x× L_y=300× 50.
Figure 4(a) shows the time evolutions of the variance of v_j for i=1,2,⋯,L_x at j=5,30, and 45 for d=0 and v_0=0.5. The variance increases with time and does not depend on j. The dashed line denotes the power law t^α with α=0.35. The variance increases faster than a power law, but it is confirmed that the heating occurs also in this rectangular system. Figure 4(b) shows the profile of the average velocity ⟨ v_j⟩ at d=0 and v_0=0.5. The average velocity changes linearly in j. The linear shear flow is characteristic of a Newtonian fluid with linear viscosity.
Figure 5(a) shows the profiles of the average velocity ⟨ v_j⟩ at d=0.0005, 0.002, 0.005, and 0.01 for v_0=0.5. The average velocity changes almost linearly in j at d=0.0005 and the difference ⟨ v_j+1⟩-⟨ v_j⟩, or the shear rate between the jth and (j+1)th layers, is nearly constant. However, the shear rate increases with j at d=0.002. The shear rate takes a small value for j<33 and a large value for j>33 at d=0.005. The average velocity changes rapidly from 0 to v_0=0.5 near j=L_y at d=0.01. That is, the shear rate is almost zero for j<46 and takes a very large value for j>46 at d=0.01. The localization of the shear rate near the boundary j=L_y is a phenomenon similar to the one observed in the coupled noisy oscillators without inertia found in the previous paper <cit.>. We showed that the shear localization occurs when v_0 is relatively large, and the vortex density is high in the boundary region where the shear rate of the average velocity takes a large value, while the vortex density takes a lower constant value in the bulk region where the average velocity is almost zero. The phenomena of shear or strain localization are observed in various complex fluids and plastic materials such as the plug flow in the Bingham fluid <cit.>, shear banding <cit.>, and the formation of shear zones in rocks <cit.>, although the mechanism might be different. The reason for the shear localization in our model is not yet well understood.
Figure 5(b) shows the profiles of the variance of the velocity v_j at d=0.0005, 0.002, 0.005, and 0.01. The variance of the velocity corresponds to the temperature. The temperature is almost uniform for d=0.0005, but the temperature is higher near j=L_y for d=0.01. This is the excess heating due to the shear of the velocity profile near j=L_y. Even if the average velocity is zero, the variance of the velocity takes a nonzero value and changes slowly for j<46 at d=0.01, that is, the oscillators are not stationary.
Figure 6(a) shows the time evolution of the kinetic energy per oscillator E=(1/L_xL_y)∑ (1/2)v_i,j^2 at d=0.01. Intermittently, E takes a large value, which is related to the excess heating. Figure 6(b) shows the probability distribution of the velocity v_j at j=20 and j=48 for d=0.01. The velocity distribution at j=20 is well approximated by the Gaussian distribution exp{-v^2/(2σ^2)}/√(2πσ^2) with σ^2=0.22. Figure 6(c) shows the velocity distribution P(v) on a semi-logarithmic scale at j=48 for d=0.01. The dashed line is 1/√(2πσ^2)exp{-(v-0.35)^2/(2σ^2)} with σ^2=0.785. The velocity distribution at j=48 is not mirror symmetric around the peak value and deviates from the Gaussian distribution, probably due to the shear velocity.
§ COUPLED NOISY OSCILLATORS UNDER THE SHEAR-VELOCITY BOUNDARY CONDITION
The coupled oscillator model at a finite temperature is assumed to be
d^2 z_i,j/dt^2=K_x∑_i^'=i+1,i-1(z_i^',j-z_i,j)+K_y∑_j^'=j+1,j-1sin (z_i,j^'-z_i,j)-ddz_i,j/dt+ξ_i,j(t),
where ξ_i,j(t) is the Gaussian white noise satisfying ⟨ξ_i,j(t)ξ_i^',j^'(t^')⟩=2dTδ_i,i^'δ_j,j^'δ(t-t^'). Here, T is a quantity corresponding to the temperature. The coupling constants are assumed to be K_x=K_y=1. If the periodic boundary conditions are imposed for all the four boundaries: i=1, i=L_x, j=1, and j=L_y, the thermal equilibrium distribution is realized. The probability distribution of the velocity v_i,j is the Gaussian distribution
P∝ e^-H/T∝ e^-∑_i,jv_i,j^2/(2T),
where H is the total energy of Eq. (2).
The shear-velocity boundary conditions induce a nonequilibrium state.
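Before presenting the results, we sketch one possible discrete-time update for Eq. (5). The paper quotes a Runge-Kutta scheme only for the deterministic case and does not specify the stochastic integrator, so the Euler-Maruyama noise step and the semi-implicit velocity/position update below are our assumptions, and the run length is illustrative.

import numpy as np

def step(z, v, t, dt, v0, d, T, rng, Kx=1.0, Ky=1.0):
    # One update of Eq. (5) on an Lx-by-Ly lattice: periodic in i (axis 0),
    # fixed plates z=0 at j=0 and z=v0*t at j=Ly+1 (axis 1).
    lower = np.zeros((z.shape[0], 1))
    upper = np.full((z.shape[0], 1), v0 * t)
    zp = np.concatenate([lower, z, upper], axis=1)

    fx = Kx * (np.roll(z, 1, axis=0) - 2.0 * z + np.roll(z, -1, axis=0))
    fy = Ky * (np.sin(zp[:, 2:] - z) + np.sin(zp[:, :-2] - z))
    noise = rng.normal(0.0, np.sqrt(2.0 * d * T * dt), z.shape)   # <xi xi> = 2 d T delta

    v_new = v + dt * (fx + fy - d * v) + noise
    z_new = z + dt * v_new
    return z_new, v_new

rng = np.random.default_rng(1)
Lx, Ly, dt, v0, d, T = 300, 50, 0.005, 0.5, 0.01, 2.0
z, v, t = np.zeros((Lx, Ly)), np.zeros((Lx, Ly)), 0.0
for _ in range(10000):                    # short illustrative run
    z, v = step(z, v, t, dt, v0, d, T, rng)
    t += dt
mean_profile = v.mean(axis=0)             # <v_j>, cf. Fig. 7(a)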
Figure 7(a) shows the average velocity ⟨ v_j⟩ as a function of j at T=2, d=0.01 and v_0=0.5. By the effect of finite temperature, the velocity profile is more delocalized than the one shown in Fig. 5(a). The dashed blue line is an approximate line 0.5exp{-0.185(50-j)} for the velocity profile. Figure 7(b) shows the variance profile of the velocity v_j.
The variance is almost T=2 for j<35, which is almost equal to the strength of the noise; however, the variance is slightly larger than 2 near the boundary j=51. Figure 7(c) shows the probability distributions at j=20 and 48. The probability distributions can be well approximated by the Gaussian distribution in this finite-temperature system. When T is large, the stochastic motion is dominant, as shown in Fig. 7.
The length scale of the shear localization is evaluated by a quantity λ=∑_j=1^L_y(L_y+1-j)⟨ v_j⟩ /∑_j=1^L_y⟨ v_j⟩.
Figure 8(a) shows λ for several d's at T=2 and v_0=0.5, and Fig. 8(b) shows λ for several v_0's at T=0.5 and d=0.01. The length scale λ decreases with d and v_0.
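The quantity λ is straightforward to evaluate from a measured or simulated profile; the exponential profile used below is only the illustrative fit quoted for Fig. 7(a).

import numpy as np

def localization_length(v_profile):
    # lambda = sum_j (Ly + 1 - j) <v_j> / sum_j <v_j>, j = 1, ..., Ly:
    # the mean distance of the shear-carrying layers from the moving plate.
    Ly = len(v_profile)
    j = np.arange(1, Ly + 1)
    return np.sum((Ly + 1 - j) * v_profile) / np.sum(v_profile)

Ly = 50
j = np.arange(1, Ly + 1)
v_prof = 0.5 * np.exp(-0.185 * (Ly - j))   # fit shown in Fig. 7(a)
print(localization_length(v_prof))          # a few layers, roughly 1/0.185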
When v_0 and T are small, a more regular motion appears. Figure 9(a) shows the profile of the average velocity ⟨ v_j⟩ at d=0.01, T=0 and v_0=0.01. Figure 9(b) shows the time evolutions of z_i,j between t=390000 and 430000 at (i,j)=(n/2,1), (n/2,2), (n/2,3), and (n/2,4) for d=0.01, T=0 and v_0=0.01. z_i,j increases stepwise except for j=1. That is, the phase slip of 2nπ occurs almost simultaneously.
The period of the stepwise increase is around 2500, and the phase slip of 4π occurs at j=2 and the phase slip of 8π occurs for j≥ 3. The average velocity is zero at j=1 and takes the same value for j≥ 3, and
the velocity profile has a jump from 0 to 0.01 at j=2 as shown in Fig. 9(a). Figure 9(c) shows the time evolution of the kinetic energy E for T=0, v_0=0.01, and d=0.01. The kinetic energy increases periodically due to the periodic phase slips, whose period is around 2500.
Figure 10(a) shows the profile of the average velocity ⟨ v_j⟩ at d=0.01, T=0.05 and v_0=0.01. The average velocity changes continuously with j owing to the finite-temperature effect, and the profile might be approximated by a linear one.
Figure 10(b) shows the time evolution of Δ z=√(∑_j∑_i(z_i,j-∑_i z_i,j/L_x)^2/(L_xL_y)) at d=0.01, T=0.05 and v_0=0.01. Here, ∑_i(z_i,j-∑_i z_i,j/L_x)^2/L_x is the variance ⟨ (Δ z_i,j)^2⟩ of the displacement z_i,j within the jth layer, and Δ z is the root mean square of the variance for j=1,2,⋯, L_y: Δ z=√(∑_j ⟨ (Δ z_i,j)^2⟩/L_y). Δ z is fluctuating by the noise effect, but sometimes jumps to a fairly large value and then decays stepwise. Figure 10(c) is the time evolution of the kinetic energy E. The kinetic energy E jumps when Δ z jumps. When Δ z and E jump, phase slips occur between some neighboring layers, which will be shown in Fig. 11(a). Figure 10(d) shows the probability distribution of v_j at j=20 in the semi-logarithmic scale. The dashed line is the Gaussian approximation 1/√(2πσ^2)exp{-(v-0.00345)^2/(2σ^2)} with σ^2=0.05025. The Gaussian approximation is good near v=0, however, the deviation from the Gaussian distribution is observed for large |v|. The acceleration of v_j at the jumps of E might be related with the velocity distribution with the long tail.
Nontrivial spatio-temporal dynamics is observed when Δ z and E jumps. Figure 11(a) shows the profiles of z_i,j as a function of j for i=25 at t=12000+25n (n=0,1,⋯,30), which corresponds to the second peak region of Δ z shown in Fig. 10(b). For example, the lowest line represents a profile of the displacement z_i,j at i=25 and t=12000, and the second lowest one represents z_i,j at i=25 and t=12025. There is a discontinuity at j=25 at the profiles of z_i,j for t≤ 12375, which was generated by the phase slip event near t=5900 which corresponds to the first peak in Fig. 10(b). Figure 11(a) shows that a phase slip occurs and the discontinuity of z is created at j=9 near t=12400. And then phase slips occur at j=34, and 38 near t=12800. The jump of Δ z near t=12000 shown in Fig. 10(b) is caused by these successive phase slips.
The spatial variation Δ z of the displacement z_i,j decreases stepwise after the jump as shown in Fig. 10(b). The stepwise decrease of Δ z is understood through the spatio-temporal dynamics of z_i,j as follows.
Figure 11(b) shows the time evolution of z_i,j at six points (i,j)=(1,36), (51,36), (101,36), (151,36), (201,36), and (300,36) in the same layer of j=36 from t=13000 to 15000. It is observed that z_i,j's tend to be aligned with time. The stepwise merging of z_i,j makes the spatial variation of z_i,j decrease in time, which corresponds to the stepwise decrease of Δ z shown in Fig. 10(b). Figure 11(c) shows the time evolution of z_i,j at the same layer j=36. The lowest line is the profile z_i,j as a function of i at j=36 and t=14500, and the second lowest one shows the profile of z_i,j at j=36 and t=14525. There are vortex and antivortex solutions in Eq. (1) because of the sinusoidal coupling in the y direction. The kink and antikink structures seen in Fig. 11(c) represent the vortex or antivortex.
Figure 11(c) expresses that z_i,j tends to be uniform through the collisions of vortex and antivortex, which corresponds to the stepwise merging of z_i,j shown in Fig. 11(b) and the stepwise decrease of Δ z shown in Fig. 10(b).
The intermittent time evolution of the coupled oscillators at v_0=0.01 and T=0.05 as shown in Figs. 10 and 11 can be qualitatively explained as follows. The small perturbations δ z_i,j in Eq. (1) obeys
d^2 δ z_i,j/dt^2=K_x∑_i^'=i+1,i-1(δ z_i^',j-δ z_i,j)+K_y∑_j^'=j+1,j-1cos (z_i,j^'-z_i,j)(δ z_i,j^'-δ z_i,j)-d dδ z_i,j/dt.
If the difference z_i,j+1-z_i,j between the neighboring layers increases with time and is larger than π/2, cos(z_i,j+1-z_i,j) becomes negative and the perturbations grow exponentially owing to the linear instability, which induces the phase slips. If the phase slips occur, the kinetic energy jumps and the spatial variation Δ z in the x direction grows. After the phase slip, the spatial fluctuations tend to decay through the collisions of vortex and antivortex. After Δ z and E decay to the thermal fluctuation level at the finite temperature T=0.05, the differences z_i,j+1-z_i,j between the neighboring layers increase further again, and phase slips occur at points (i,j) where cos(z_i,j+1-z_i,j)<0, which are generally different from the previous phase-slip points. This process repeats many times. These intermittent time evolutions are observed at small values of v_0 and T. If the temperature or the shear rate v_0/L_y is large, the spatial irregularity owing to the temperature or deterministic chaos increases and the intermittent behavior of E becomes unclear.
The three parameters v_0, T, and d determine the nonequilibrium dynamics in the coupled noisy oscillators Eq. (5). Although the whole parameter range is not yet investigated and the nonequilibrium dynamics is not theoretically understood, we investigate numerically the excess kinetic energy characterizing the nonequilibrium state by changing some control parameters. The kinetic energy jumps intermittently from the thermal equilibrium level T/2 as shown in Fig. 10(c). The excess kinetic energy is the temporal average of the difference of the kinetic energy from T/2: ⟨ E⟩ -T/2. Figure 12(a) shows ⟨ E⟩ -T/2 for several T's at d=0.01 and v_0=0.01. The excess kinetic energy weakly decreases with T for T>0.1. Figure 12(b) shows ⟨ E⟩-T/2 for several d's in the double-logarithmic scale at v_0=0.01 and T=0.01. The dashed line is 0.00005/d. The excess kinetic energy decreases as 1/d for small d.
§ SUMMARY
We have studied coupled oscillators with inertia under the shear velocity boundary conditions on the rectangular lattice. We have found that the kinetic energy increases with time when d=0 or the dissipation is absent. We have calculated the probability distribution of the velocity. When d=0 or sufficiently small, the velocity distribution is well approximated at the Gaussian distribution or the Maxwell distribution. We have observed the deviation from the Gaussian distribution when d is large. The deviation from the Gaussian distribution is characteristic of the nonequilibrium state far from equilibrium. We have found that the velocity profile changes from the linear one to the localized one near the boundary as d is larger. In the coupled oscillators under a finite temperature, the localization becomes weaker, and the probability distribution of the velocity becomes closer to the Gaussian distribution. When v_0 is small and T is small but not zero, the coupled oscillators are a weakly nonequilibrium stochastic system. We have observed intermittent jumps of the kinetic energy. The jumps correspond to the phase slips in the y direction, and an annealing process of spatial fluctuations of the displacement z_i,j in the x direction occurs through the vortex-antivortex collisions.
Our coupled oscillator model under the shear velocity boundary condition is an interesting system, since the model exhibits various nonlinear-nonequilibrium phenomena and is related to various research areas such as the coupled limit-cycle oscillators, the dislocation theory in solids, a block-spring model of earthquakes, and the nonequilibrium molecular dynamics. We have shown several numerical results in this paper, however, the theoretical understanding is not sufficient, which is left to future study.
99
Kuramoto Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Springer, New York, 1984).
Bonilla J. A. Acebrón, L. L. Bonilla, C. J. Pérez Vicente, F. Ritort, and R. Spigler, Rev. Mod. Phys. 77, 137 (2005).
Strogatz S. Strogatz, Physica D 143, 1 (2000).
Tanaka H. Tanaka, A. J. Lichtenberg, and S. Oishi, Phys. Rev. Lett. 78, 2104 (1997).
Sakaguchi H. Sakaguchi and T. Matsuo, J. Phys. Soc. Jpn. 81, 074005 (2012).
Sakaguchi2 H. Sakaguchi, S. Shinomoto, and Y. Kuramoto, Prog. Theor. Phys. 77, 1005 (1987).
Hong H. Hong, H. Chate, H. Park, and L. H. Tang, Phys. Rev. Lett. 99, 184101 (2007).
Sakaguchi3 H. Sakaguchi, Phys. Rev. E 105. 054211 (2022).
Read W. T. Read, Dislocations in crystals (McGraw Hill, 1953).
Cottrell A. H. Cottrell Dislocations and plastic flow in crystals (Clarendon Press, 1953).
Sakaguchi4 H. Sakaguchi, Phys. Rev. E 106, 054154 (2022).
Scholz C. H. Scholz, The Mechanics of Earthquakes and Faulting (Cambridge University Press, Cambridge, 2002).
Langer J. M. Carlson and J. S. Langer, Phys. Rev. Lett. 62, 2632 (1989).
Sakaguchi5 H. Sakaguchi and S. Kadowaki, J. Phys. Soc. Jpn. 86, 074001 (2017).
Evans D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids (Academic Press, San Diego, 1990).
Ashurst W. T. Ashurst and W. G. Hoover, Phys. Rev. A 11, 658 (1975).
Liem S. Y. Liem, D. Brown, and J. H. R. Clarke, Phys. Rev A 45, 3706 (1992).
Lepri S. Lepri, R. Livi, and A. Politi, Phys. Rev. Lett. 78, 1896 (1997).
Holian B. L. Holian, W. G. Hoover, and H. A. Posch, Phys. Rev. Lett. 59, 10 (1987).
Bingham E. C. Bingham, Fluidity and Plasticity (MacGraw-Hill, New York, 1922).
Salmon J. B. Salmon, A. Colin, S. Manneville, and F. Molino, Phys, Rev. Lett. 90, 228303 (2003).
Rutter E. H. Rutter, Tectonophysics 303, 147 (1999).
|
http://arxiv.org/abs/2409.02785v1 | 20240904150128 | A Novel Interference Minimizing Waveform for Wireless Channels with Fractional Delay: Inter-block Interference Analysis | [
"Karim A. Said",
"A. A.",
"Beex",
"Elizabeth Bentley",
"Lingjia Liu"
] | eess.SP | [
"eess.SP"
] |
A Novel Interference Minimizing Waveform for Wireless Channels with Fractional Delay: Inter-block Interference Analysis
Karim A. Said, A. A. (Louis) Beex, Elizabeth Bentley, and Lingjia Liu
K. Said, A. A. Beex and L. Liu are with Wireless@Virginia Tech, the Bradley Department of ECE at Virginia Tech, Blacksburg, VA. E. Bentley is with the Information Directorate of Air Force Research Laboratory, Rome NY.
=====================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In the physical layer (PHY) of modern cellular systems, information is transmitted as a sequence of resource blocks (RBs) across various domains with each resource block limited to a certain time and frequency duration. In the PHY of 4G/5G systems, data is transmitted in the unit of transport block (TB) across a fixed number of physical RBs based on resource allocation decisions. This simultaneous time and frequency localized structure of resource allocation is at odds with the perennial time-frequency compactness limits. Specifically, the band-limiting operation will disrupt the time localization and lead to inter-block interference (IBI). The IBI extent, i.e., the number of neighboring blocks that contribute to the interference, depends mainly on the spectral concentration properties of the signaling waveforms.
Deviating from the standard Gabor-frame based multi-carrier approaches which use time-frequency shifted versions of a single prototype pulse, the use of a set of multiple mutually orthogonal pulse shapes-that are not related by a time-frequency shift relationship-is proposed. We hypothesize that using discrete prolate spheroidal sequences (DPSS) as the set of waveform pulse shapes reduces IBI. Analytical expressions for upper bounds on IBI are derived as well as simulation results provided that support our hypothesis.
§ INTRODUCTION
Orthogonal frequency division multiplexing (OFDM) has been selected as the physical layer waveform for the 5G NR standard, a choice influenced mainly by considerations of maturity and backwards compatibility <cit.>. However, there are many technical concerns regarding OFDM’s long-term sustainability mainly due to its inadequacy in high mobility scenarios <cit.>. In addition, OFDM's spectrum has high out-of-band (OOB) emissions which can cause significant severe interference to systems operating in adjacent frequency bands <cit.>.
This has motivated many efforts to investigate novel waveforms to supplant OFDM <cit.>. A candidate waveform rising in popularity is Orthogonal Time Frequency Signaling (OTFS) where information is encoded in the delay-Doppler (DD) domain <cit.>.
Other DD modulation waveforms have been proposed in the literature <cit.>. However, in some works it is argued that OTFS is a precoded version of OFDM <cit.>.
OTFS has a number of benefits, including power uniformity across symbols and channel invariance <cit.>.
Nevertheless, OTFS has its own challenges such as its susceptibility to fractional Doppler <cit.> and potentially fractional delay which makes cyclic prefixes corresponding to integer channel tap lengths invalid.
One of OTFS's most celebrated advantages is the sparse structure of its equivalent channel matrix, which helps in reducing the equalization complexity <cit.>. However, this sparsity rests on the assumption that the delay and Doppler of the channel paths are integers when measured in units of samples and cycles/frame (normalized units), which is an unrealistic assumption. The fractional Doppler limitation, and its impact on channel estimation accuracy and equalization complexity, is widely acknowledged in the existing OTFS literature. From the point of view of channel estimation, works such as <cit.> and <cit.> study the impact of fractional Doppler on channel estimation accuracy. As a consequence, wide guard overhead regions are required to mitigate data-to-pilot interference and maintain channel estimation integrity. Machine learning based approaches have been used as an attempt to circumvent channel estimation altogether <cit.>. From the point of view of equalization, the channel matrix sparsity advantage is lost in the presence of fractional Doppler, leading to higher equalization computational complexity <cit.>.
By analogy, fractional delay presents similar problems for single carrier (SC) waveforms. SC waveforms have been considered recently in 5G application scenarios such as massive machine type of communication (mMTC) <cit.> and ultra-reliable low latency communications (URRLC) <cit.>. SC waveforms rely on time-domain equalizers to combat inter-symbol interference (ISI), which can only handle multi-path delay taps that are integer multiples of the sample period <cit.>. The effect of fractional delay on OFDM systems has also been discussed in works such as <cit.>.
Most aforementioned works concern intra-frame effects of fractional delay and Doppler. On the other hand, not much attention is paid to inter-frame effects. For example, fractional delay can cause leakage between OTFS frames that extends beyond the nominal CP length. In a more general setting where information is transmitted as a sequence of blocks (OTFSs frame, OFDM symbol or single carrier block of samples) in time, fractional delay can cause Inter-block Interference (IBI) of considerable magnitude that can have an impact on symbol error rate performance as we show in our work.
Given this context, our work analyzes the effect of fractional delay on existing waveforms in terms of IBI and presents a novel waveform that can minimize IBI for a minimum sacrifice in resource utilization. Most existing waveforms such as OFDM, FBMC and OTFS can be classified under the category known as Gabor frames <cit.>. Gabor frames consist of a set of waveforms that are time and frequency shifted versions of a single prototype pulse shape. In this work, we propose using (a set of) multiple mutually orthogonal pulse shapes that are not related by a time-frequency shift relationship. The pulse shape set is comprised of discrete prolate spheroidal sequences (DPSS) <cit.> for which we demonstrate its merit in terms of very low IBI.
The main contributions of this work can be divided into the following:
* A mathematical framework for quantifying the effect of fractional time or frequency shifts on the energy spread for arbitrary waveforms. In this work we focus mainly on fractional time shift but the framework is applicable to fractional shifts in frequency as well.
* Upper bounds on inter-block interference for arbitrary signaling waveform.
* A DPSS-based signaling waveform and theoretical justification for its significantly lower IBI compared to other domains.
§ SYSTEM MODEL
We adhere to a matrix framework for representing the discrete time input-output relations of operations at the transmitter, receiver and channel effects.
Without loss of generality, at the transmitter a frame of information symbols 𝐢∈ℂ^I × 1 modulates a set of waveforms to generate samples in the time domain represented by vector 𝐱∈ℂ^N where 𝐱=𝐎𝐢 and 𝐎∈ℂ^N × I.
After undergoing the channel effects represented by a time-varying impulse response matrix 𝐇∈ℂ^N× N, vector 𝐲 is acquired at the receiver:
𝐲=𝐇𝐱+𝐧
where 𝐧 is the noise vector.
A matched filtering operation is applied by correlating with the transmit waveform set (or its co-set) for bi-orthogonal schemes:
𝐳 =𝐎^H𝐇𝐎𝐢+𝐎^H𝐧
=𝐇_eq𝐢+𝐎^H𝐧
Discretizing in time, for a finite stream of LK symbols, x(t) in (<ref>) can be written in terms of a matrix-vector product:
𝐱=(𝐈_L⊗𝐐)𝐢
where 𝐢=[i_0,..,i_LK-1]^T, 𝐐=[𝐪_0,𝐪_1,..,𝐪_K-1], 𝐪_i ∈ℂ^M'× 1 where M'≥ K, and 𝐈_L is an identity matrix of size L× L.
In OFDM and similar modulation schemes such as generalized OFDM (G-OFDM) <cit.> and filter-bank multi-carrier (FBMC) <cit.>, the domain with coordinates 0,..,K-1 is frequency. Vector 𝐪_k represents a modulated version of a single prototype pulse shape 𝐪; this is a category of modulation schemes known as Gabor frames <cit.>, where 𝐪_k=diag(𝐟_k)𝐪 and [𝐟_k]_m=e^j2π km/M', m=0,..,M'-1.
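As a concrete illustration, the sketch below builds the modulated set 𝐪_k=diag(𝐟_k)𝐪 for a rectangular prototype (which reduces to plain OFDM without a prefix) and forms 𝐱=(𝐈_L⊗𝐐)𝐢; the sizes and the BPSK symbols are arbitrary choices.

import numpy as np

def gabor_waveforms(q, K):
    # Columns q_k = diag(f_k) q with [f_k]_m = exp(j*2*pi*k*m/M'), M' = len(q).
    Mp = len(q)
    m = np.arange(Mp)
    return np.column_stack([np.exp(2j * np.pi * k * m / Mp) * q for k in range(K)])

Mp, K, L = 64, 64, 4
q = np.ones(Mp) / np.sqrt(Mp)          # rectangular prototype pulse
Q = gabor_waveforms(q, K)              # M' x K matrix of waveforms

rng = np.random.default_rng(0)
i_sym = (2 * rng.integers(0, 2, L * K) - 1).astype(complex)   # BPSK symbols, one frame
x = np.kron(np.eye(L), Q) @ i_sym      # x = (I_L kron Q) i, as in the expression above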
Fig. <ref> shows the interaction between a channel of maximum delay spread τ_max, and a pulse shape consisting of two sub-blocks, 𝐩= [𝐠^T,𝐪^T]^T ∈ℂ^M × 1, 𝐠∈ℂ^τ_max× 1, M = M'+τ_max.
Now (<ref>) changes to:
𝐱=(𝐈_L⊗𝐏)𝐢
where 𝐏=[𝐩_0,𝐩_1,..,𝐩_K-1].
A common strategy to eliminate this form of IBI is to set sub-block 𝐠 to zero, where 𝐠 is called a zero prefix (ZP) and ignore the corresponding sub-block in the output. Another strategy is to set 𝐠= [p_N-τ_max+1:p_N]^T where 𝐠 is called a cyclic prefix (CP) where the submatrix represented by the blue triangle effectively translates to the upper right corner of yellow border matrices as depicted by the faded blue triangles in Fig. <ref>.
Substituting (<ref>) into (<ref>) and referring back to the aforementioned objective of shaping 𝐇_eq to be close to a diagonal structure, we can see that using a ZP makes it possible to obtain a block-diagonal structure.
𝐳 =(𝐈_L⊗𝐏)^H𝐇(𝐈_L⊗𝐏)𝐢+(𝐈_L⊗𝐏)^H𝐧
=𝐇_eq𝐢+(𝐈_L⊗𝐏)^H𝐧
where 𝐇_eq is block diagonal matrix as in (<ref>), and 𝐧∈ℂ^N × 1 is a AWGN noise vector.
𝐇_eq= blkdiag(𝐇_0,0,𝐇_1,1,..,𝐇_L-1,L-1)
As a result, (<ref>) can be separated into smaller sets of equations:
𝐳_l=𝐏^H𝐇_l,l𝐏𝐢_l+𝐏^H𝐧_l, l=0,..,L-1
where [𝐇_l,l']_m,m'=[𝐇]_lM+m,l'M+m', m,m'=0,..,M-1, [𝐧_l]_m=[𝐧]_lM+m, m=0,..,M-1, [𝐢_l]_m=[𝐢]_lM+m, m=0,..,M-1 and τ_max is the maximum delay.
The significant advantage of such a block channel structure is that, through proper choice of 𝐠, equalization can be done on a block-by-block level and that greatly reduces complexity. This inspires our strategy to design a waveform where the block length can be made as small as possible. In doing so, we must address the consequences of using small block lengths on the manifestation of channel effects related to delay spread.
§.§ Discrete Doubly Dispersive Channel Model
In a typical communication system, time and bandwidth constraints are simultaneously enforced: transmit filters strictly limit the signal bandwidth, while at the receiver side the received signal is effectively time-limited when its inner product is evaluated against a finite-extent reference block.
The limit on signaling bandwidth and time extent of a signaling block induces a discrete time channel matrix representation of the time-varying impulse response (TV-IR). For a channel with P discrete specular paths:
𝐇 =∑_p=0^P-1𝐇_ντ_p=∑_p=0^P-1h_p𝐃_ν_p𝐇_τ_p
where [𝐇_ν_p]_l,k=e^j2π lν_pδ[l-k] represents the Doppler modulation effect for normalized Doppler frequency ν_p, [𝐇_τ_p]_l,k=sinπ(l-k-τ_p)/π(l-k-τ_p) is the delay effect for normalized delay τ_p for the p-th path respectively and h_p is the path gain.
Now we analyze the structure of the matrix 𝐇 by looking at the structure of the individual summand matrices 𝐇_ντ_p. Each summand matrix is the product of a (main) diagonal matrix 𝐃_ν_p and Toeplitz matrix 𝐇_τ_p. Matrix 𝐇_τ_p will have exactly one (sub) diagonal if τ_p is an integer, otherwise it will be a full matrix. As a result, the product matrix inherits the diagonal extent of 𝐇_τ_p (spanning the full matrix while decaying in the anti-diagonal direction for non-integer delays) but loses the property of being Toeplitz. An illustration is shown in Fig. <ref>.
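The contrast between integer and fractional delay can be made explicit with a small numerical example; the block size and delay values below are arbitrary.

import numpy as np

def delay_matrix(N, tau):
    # [H_tau]_{l,k} = sinc(l - k - tau); np.sinc(x) = sin(pi x)/(pi x) matches the definition above.
    l, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.sinc(l - k - tau)

N = 32
H_int  = delay_matrix(N, 3.0)    # integer delay: a single sub-diagonal
H_frac = delay_matrix(N, 3.5)    # fractional delay: fully populated, decaying off the diagonal

print(np.count_nonzero(np.abs(H_int)  > 1e-12))   # N - 3 nonzero entries
print(np.count_nonzero(np.abs(H_frac) > 1e-12))   # N * N nonzero entries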
Thus, the thin parallelogram depiction in Fig. <ref> is true only if all (normalized) path delays are integers. As a consequence, CP or ZP approaches will not completely eliminate inter-block interference and (<ref>) is modified as follows to include an IBI term β_l:
𝐳_l =𝐏^H𝐇_l,l𝐏𝐢_l+β_l+𝐏^H𝐧_l
where
β_l = ∑_j=0,j≠ l^L-1𝐏^H𝐇_l,j𝐏𝐢_j= ∑_p=0^P-1h_p∑_j=0,j≠ l^L-1Λ_lj(p)𝐢_j
where Λ_lj(p)=𝐏^H𝐃_l,l(ν_p)𝐇_l,j(τ_p)𝐏.
§.§ Impact of waveform choice on IBI in a purely delay-dispersive channel
Let 𝐏̃∈ℂ^N × N be a unitary matrix such that 𝐏∈ℂ^N × L comprises the first L columns of 𝐏̃ where L≤ N. Substituting 𝐏̃𝐏̃^H into Λ_lj(p) enables the separation of the Doppler effect and delay effect into one distinct matrix for each.
Λ_lj(p) =𝐏^H𝐃_l,l(ν_p)𝐏̃𝐏̃^H𝐇_l,j(τ_p)𝐏=Λ̃^ν_l(p)Λ̃^τ_lj(p)
Since the span of IBI is only dependent on the delay dispersion, i.e., Λ̃^τ_lj(p), we focus our analysis on purely delay-dispersive channels. In a purely delay-dispersive channel, the contribution of the j-th input to the IBI affecting block l due to interaction with the p-th path becomes:
β_l(p)=h_p∑_j=0,j≠ l^L-1Λ_lj^τ(p)𝐢_j
where Λ^τ_lj(p)=𝐏^H𝐇_l,j(τ_p)𝐏∈ℂ^K× K, and
[β_l(p)]_r=h_p∑_j=0,j≠ l∑_k=0^K-1 [Λ_lj^τ(p)]_r,k[𝐢_j]_k
The r,s element of the delay factor matrix, i.e., [Λ_lj^τ(p)]_r,s, is given by (<ref>)
[Λ_lj^τ(p)]_r,s =∑_n,m=0^N-1𝐩_r^*[n] 𝐩_s[m]sinc(n-m-(j-l)N-τ_p)
=∑_q=-N+1^N-1𝐜_rs[q]sinc(q-(τ_p+(j-l)N))
where 𝐜_rs[q]=∑_n=max(-N/2+q,-N/2)^min(N/2+q,N/2)𝐩_r^*[n]𝐩_s[n-q], sinc(x)=sinπ x/π x.
The second line in (<ref>) is the r-s-th cross-correlation sequence, 𝐜_rs, shifted by τ_p+(j-l)N. 𝐜_rs is an index limited sequence which when shifted is convolved with a sequence that is infinite in extent for fractional values of τ_p. This elongation effect is the underlying cause of IBI that extends past the cyclic/zero prefix. To the best of our knowledge, no works exist which simplify the last line in (<ref>) to an analytical closed form expression for fractional shifts τ_p. Finding such an analytical expression would enable us to quantify the IBI energy which can provide us some measure of the expected degradation in SER performance.
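The following sketch numerically checks the equivalence of the two forms in the last equation, evaluating [Λ^τ_lj(p)]_r,s once as the double sum over n and m and once via the cross-correlation sequence 𝐜_rs evaluated against the shifted sinc; the DFT-column waveforms and the fractional delay value are illustrative assumptions.

```python
# Sketch (NumPy): evaluate [Lambda^tau_{lj}(p)]_{r,s} both as the double sum over
# n, m and via the cross-correlation sequence c_rs[q], for an illustrative
# waveform pair (DFT columns) and a fractional delay. Values are assumptions.
import numpy as np

N, r, s = 32, 2, 5
F = np.fft.ifft(np.eye(N)) * np.sqrt(N)      # unitary DFT basis (illustrative waveforms)
p_r, p_s = F[:, r], F[:, s]
tau, shift = 3.7, 0.0                        # fractional path delay; (j-l)N = 0 here

# Form 1: direct double sum over n and m
n = np.arange(N)
direct = np.sum(p_r.conj()[:, None] * p_s[None, :]
                * np.sinc(n[:, None] - n[None, :] - shift - tau))

# Form 2: cross-correlation sequence c_rs[q] = sum_n p_r*[n] p_s[n-q], then the sinc sum
q = np.arange(-N + 1, N)
c_rs = np.array([np.sum(p_r.conj()[max(0, k):N + min(0, k)]
                        * p_s[max(0, -k):N - max(0, k)]) for k in q])
via_corr = np.sum(c_rs * np.sinc(q - (tau + shift)))

print(np.allclose(direct, via_corr))   # the two forms agree
```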
In what follows we pursue analytical expressions for upper bounding IBI for any waveform set choice. Towards this end, we rely on the mathematical framework that was developed in <cit.>.
§.§ Upper bounding Energy of Cross-correlation Tail due to Fractional Shift
Rewriting (<ref>) in terms of {ℬ_W^τ} r[n], defined by (1) in <cit.> as the operation on a sequence r[n] that outputs a sequence limited in frequency to half-bandwidth W (normalized), scaled by 1/W, and shifted by 0<τ≤ 0.5, results in
[Λ_lj^τ(p)]_r,s ={ℬ_W^τ_p+(j-l)N𝐜_rs} ={ℬ_W^τ_p𝐜_rs}[(j-l)N]
In order to quantify IBI energy, we are interested in the quantity given by the LHS of (<ref>), i.e., energy of the sub-sampled tail (by a factor of N) of the r-sth cross-correlation sequence. The first line of the RHS of (<ref>) consists of an upper bound in terms of the cross-correlation tail energy, where E̅_-N,N
denotes the tail energy of the correlation sequence 𝐜_rs, according to Definition-1 in <cit.>
∑_i=-∞≠ j^∞{ℬ_W^τ_p𝐜_rs}^2[(i-j)N] ≤E̅_-N,N({ℬ_W^τ_p𝐜_rs})
≤E̅_-N,N({ℬ_W^0.5𝐜_rs})
≤∑_l=0^4N|c_r,s(l)/W|^2λ_l(1-λ_l)
where c_r,s(l)=∑_n=-2N^2N𝐜_rs[n]s_l^(0.5W,4N+1)[2n]. The inequality in the second line is based on our conjecture that: out of all fractional sample shifts, a half sample shift results in the largest tail energy. In the third line, the result of Theorem-1 in <cit.> (equation (15)) is applied.
We note that the LHS of (<ref>) involves a computation potentially involving an infinite number of terms; when a general infinite stream of blocks across time is considered. The given bound requires only a finite number of computations that does not depend on the number of blocks transmitted across time.
§ QUANTIFYING IBI POWER IN DELAY DISPERSIVE CHANNELS
In a delay-dispersive channel consisting of P paths, we can find the IBI energy affecting the r-th waveform by averaging (<ref>) over information symbols which are assumed to be unit variance i.i.d. Without loss of generality, we start by setting l=0 in (<ref>) and evaluating the following:
E_r^IBI = 𝔼{|∑_p=0^P-1 [β_0(p)]_r|^2}
= 𝔼{|∑_p=0^P-1 h_p ∑_j=-∞, j≠ 0^∞∑_s=0^K-1 [Λ_0j^τ(p)]_r,s[𝐢_j]_s|^2}
= ∑_p=0^P-1|h_p|^2∑_j=-∞, j≠ 0^∞∑_s=0^K-1|{ℬ_W^τ_p𝐜_rs}[jN']|^2
= ∑_p=0^P-1|h_p|^2∑_j=-∞, j≠ 0^∞∑_s=0^K-1|{ℬ_W^Δτ_p𝐜_rs}[jN'+⌊τ_p⌋]|^2
The simplification from the second to third line is due to 𝔼{ [𝐢_j]_s[𝐢_j']_s'^*}=δ(j-j',s-s') , 𝔼{ h_ph_p'^*}=|h_p|^2δ(p-p'), and Δτ_p≜(τ_p-⌊τ_p⌋).
Appending waveforms with a guard prefix of length g≥⌊τ_p ⌋, ∀ p, changes (<ref>) as follows:
E_r^IBI
=∑_p=0^P-1|h_p|^2∑_s=0^K-1∑_j=-∞, j≠ 0^∞|{ℬ_W^Δτ_p𝐜_rs}[j(N'+g)+⌊τ_p⌋]|^2
≤∑_p=0^P-1|h_p|^2∑_s=0^K-1∑_j∉[-(N'+g),(N'+g)]|{ℬ_W^Δτ_p𝐜_rs}[j+⌊τ_p⌋]|^2
= ∑_p=0, Δτ_p>0^P-1|h_p|^2∑_s=0^K-1E̅_-(N'+g-⌊τ_p⌋),(N'+g-⌊τ_p⌋)({ℬ_W^Δτ_p𝐜_rs})
We note that the guard prefix results in E̅_-N',N'({ℬ_W^0.5𝐜_rs})=0 when Δτ_p=0, hence the restriction in the last line of the sum indices for fractional delay paths.
To bound the total IBI for a subset of the waveforms across the range 0,..,η K-1,
E^IBI = ∑_r=0^η K-1 E_r^IBI
≤∑_p=0, Δτ_p>0^P-1|h_p|^2∑_r=0^η K-1∑_s=0^η K-1∑_l=0^4N_p|c_r,s(l;N_p)/W|^2λ_l(1-λ_l)
where N_p = N'+g-⌊τ_p⌋, c_r,s(l;N_p)=∑_n=-2N_p^2N_p𝐜_rs[n]s_l^(0.5W,4N_p+1)[2n].
Finally, for unit symbol energy, we can obtain a signal-to-inter-block interference (S2IBI) lower bound by taking the reciprocal of (<ref>).
§ RESULTS
We numerically evaluate performance in terms of BER vs. SNR across three waveforms comprised of orthonormal bases (ONB) in the following domains: time domain (TD), frequency domain (FD), and Prolate spheroidal domain (PS). The BER curves are generated across different values for the resource utilization percentage η%. In addition, we evaluate signal-to-IBI (S2IBI) to ascertain its role as the underlying differentiating factor in BER performance between the waveforms in the different domains. Our hypothesis is that IBI due to fractional delay taps is a significant factor which is grossly ignored in models assuming integer delay taps.
Our results are based on 100 frame realizations, each frame consisting of 21 blocks, each of length N=129 time domain samples. Each N-sample block comprises M=η N sub-waveforms modulated by QPSK symbols, where η varies across the range [0.9, 1]; the complementary fraction of signaling dimensions is nulled, as explained in Section II.
A delay-dispersive channel with delay spread spanning [0,τ_max=16] samples, i.e., 1/8th the block size and following an exponential delay profile is considered to act on the frame. We consider two channels of varying severity, as controlled by the rate of decay of the exponential delay profile: our mild channel has tap gains decaying according to e^-0.5 n (with uniformly random phase), and our severe channel has tap gains decaying according to e^-0.05n where n=0,..,15 for the integer tap case, and n=0,0.1,0.2,..,15 for the fractional tap case.
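A minimal sketch of the tap generation described above is given below; the exponential decay rates and delay grids follow the text, while the absence of any power normalization is an assumption.

```python
# Sketch (NumPy): generate the delay-dispersive channels described above.
# Tap magnitudes follow the stated exponential profiles with uniformly random
# phases; whether/how the profile is power-normalized is an assumption here.
import numpy as np

rng = np.random.default_rng(0)

def make_channel(decay, fractional, tau_max=15):
    delays = np.arange(0, tau_max + 0.1, 0.1) if fractional else np.arange(tau_max + 1)
    gains = np.exp(-decay * delays) * np.exp(1j * 2 * np.pi * rng.random(delays.size))
    return delays, gains

delays_mild, gains_mild = make_channel(0.5, fractional=True)    # mild channel
delays_sev, gains_sev = make_channel(0.05, fractional=True)     # severe channel
delays_int, gains_int = make_channel(0.5, fractional=False)     # integer-tap variant
```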
The frame structure is given by (<ref>)
𝐟=[0_D^T,𝐩_-10^T,..,0_D^T,𝐩_0^T,0_D^T,..,0_D^T,𝐩_10^T]
where 𝐩 = 𝐎𝐝, 𝐎∈ℂ^N× M is the signaling basis, 𝐝∈ℂ^M × 1 is a vector of QPSK symbols, and 0_D ∈ℝ^D× 1 is an all zero vector where D=τ_max=16 .
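The frame assembly can be sketched as follows for the three bases considered; a unitary DFT matrix stands in for the FD basis, scipy's dpss routine for the PS basis, and the chosen time-bandwidth product NW is an illustrative assumption rather than a value specified in the paper.

```python
# Sketch (NumPy/SciPy): assemble the zero-prefixed frame f = [0_D, p_-10, ..., 0_D, p_10]
# for one frame realization. The DPSS time-bandwidth product NW below is an
# illustrative assumption, not a value taken from the paper.
import numpy as np
from scipy.signal.windows import dpss

N, D, n_blocks, eta = 129, 16, 21, 0.95
M = int(round(eta * N))
rng = np.random.default_rng(1)

# Example signaling bases: time domain, frequency domain, prolate spheroidal domain
O_td = np.eye(N)[:, :M]
O_fd = (np.fft.ifft(np.eye(N)) * np.sqrt(N))[:, :M]
O_ps = dpss(N, NW=M / 2, Kmax=M).T            # columns = first M DPSS sequences

def make_frame(O):
    blocks = []
    for _ in range(n_blocks):
        d = (rng.integers(0, 2, M) * 2 - 1 + 1j * (rng.integers(0, 2, M) * 2 - 1)) / np.sqrt(2)  # QPSK
        blocks.append(np.concatenate([np.zeros(D), O @ d]))
    return np.concatenate(blocks)

frame_ps = make_frame(O_ps)
print(frame_ps.shape)   # (21 * (129 + 16),) = (3045,)
```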
Figure <ref> shows the variation of S2IBI (dB) with η for the mild channel case. For the integer tap case shown in sub-figure <ref>-(b), S2IBI is very high, which is a result of the fact that the ZP length is equal to τ_max and thus the delay spread is fully encompassed leading to zero IBI.
In sub-figure <ref>-(a), the fractional tap channel case is shown. At η=1 all three signaling domains have the same S2IBI 28.7 dB which can be explained by the fact that the three waveforms are complete ONBs and thus are equivalent when η=1.
For η<1, S2IBI rises rapidly for the PS domain waveform at a rate of 12 dB per 0.02 reduction in η, reaching up to 110 dB. On the other hand, TD and FD S2IBI rise at a much slower rate, reaching up to 30 dB for TD and 31 dB for FD at η=0.9. Lower bounds for S2IBI based on the IBI upper bound given in (<ref>) are shown by black markers on top of dashed lines with the same color as the bounded S2IBI for a given domain. In general the bound is not very tight; however, it closely follows the general trend of the true S2IBI shown in solid lines. We note that IBI is effectively the energy of a sampled version of the cross-correlation sequence tail, as indicated by (<ref>). This explains why the IBI upper bound is not expected to be very tight, and as a consequence the S2IBI lower bound will also not be very tight.
Figure <ref> shows the variation of S2IBI (dB) with η for the severe channel case. For the integer tap case shown in sub-figure <ref>-(b) the result is nearly the same as for the mild channel since IBI is identically 0.
For the fractional case shown in sub-figure <ref>-(a), at η=1 all three signaling domains have the same S2IBI of ≈26.7 dB. For η<1, S2IBI increases rapidly for the PS domain waveform, but at a lower rate compared to the mild channel case shown in Fig <ref>-(a), reflecting the severity of the channel induced IBI. For TD and FD, S2IBI rises to ≈27.4 dB and ≈30.6 dB, respectively, at η=0.9. The theoretical lower bounds are looser than the ones in Fig. <ref> but fairly in line with the trend of the true S2IBI.
Figure <ref> shows the BER performance vs. SNR across the set of resource utilization percentages η=[0.9,0.92,0.93,0.95,0.96,0.98,1] for the mild channel case. For the fractional tap case shown in sub-figure <ref>-(a), TD has error floors ≈ 3× 10^-3, 10^-3, 5× 10^-4 for η=1,0.98,0.96 respectively. For values of η<0.95 TD has no visible error floor, however the performance is different (slightly worse) than in the integer tap case. For FD, the error floor persists for all η values ranging within [10^-3,2× 10^-3]. On the other hand, PS has an error floor 2× 10^-3 for η=1 since the IBI is high; the same level as FD and TD. For η<1, there are no visible error floors. For the integer tap case shown in sub-figure<ref>-(b), the performance curves do not change significantly for different values of η except for TD where error floors occur at BER 2× 10^-3 starting at 15 dB SNR for η=1, and BER 2× 10^-4 starting at 30 dB SNR for η=0.98. For η≤ 0.98 TD BER drops monotonically reaching down to 5× 10^-5 at ≈20 dB. For FD and PS, BER drops monotonically reaching down to 5× 10^-5 at ≈30 dB for PS, and between 2× 10^-4 and 5× 10^-4 at 35 dB for FD.
For the fractional tap channel in Fig. <ref>-(a), TD has error floors ranging between slightly less than 10^-3 for η=0.9 up to slightly less than 10^-2 for η=0.98. We note that for TD, BER curves behave in a convex manner with a minimum at ≈20 dB. This can be explained by the fact that LMMSE equalization is based on a regularization factor that accounts for noise but not for IBI. From Fig. <ref>, the S2IBI level being at ≈ 27.4 dB makes it somewhat on the order of the noise level. As a result, for SNRs higher than 20 dB, LMMSE is in "zero-forcing mode" leading to IBI amplification. For FD, the error floor does not go below 4× 10^-3. The convex behavior can also be seen in this case, with the minimum happening at a different point compared to TD, in agreement with the higher IBI for FD as indicated in Fig. <ref>. For PS, an error floor is present at η=1 but not for values η<1. We also note that for PS, the performance curves for η<1 for both the integer and fractional tap model are almost identical.
In Fig. <ref>-(b), just as in Fig. <ref>-(b), for all three signaling domains the BER is monotonically decreasing with SNR; however, the improvement rates are much slower compared to the mild channel. We note that this is unlike an error floor, where the BER curves are non-decreasing, i.e., increasing SNR will not help. FD shows almost no dependence on the choice of η, similar to the mild channel case. However, for TD and PS, lowering η produces significant improvements in the rate of reduction of BER vs. SNR. Note that this improvement cannot be attributed to IBI, since IBI is already 0 by virtue of the fact that the channel consists of integer taps.
We note that our proposed DPSS based waveform shows its clear advantage in severe channel scenarios where IBI is prominent. However, for shorter block lengths compared to what is used in our simulation, such as in the case of ultra-reliable low-latency communications (URLLC) <cit.>, the effect of IBI is expected to be more dominant even in mild channels. In addition, the modulation order (QPSK in our simulation) is expected to be a factor in amplifying the effect of IBI.
§ CONCLUSION
Inter-block interference is a problem that has its origins going back to the time-frequency concentration dichotomy. Limiting IBI can only be done at a cost in either time or bandwidth resources or in some other dimension.
In this work, we provide strong evidence that waveforms using discrete prolate spheroidal sequences are optimal in minimizing IBI. The issue addressed has relevance beyond IBI spread as it also concerns other forms of intra-block interference. Many existing waveform designs can be thought of as consisting of micro-blocks, and thus the present analysis can be extended to address inter-waveform interference.
Such a treatment can be key to addressing a number of pressing practical problems affecting prominent waveforms, namely fractional Doppler and fractional delay simultaneously. We plan to address such problems in our future works.
|
http://arxiv.org/abs/2409.02355v1 | 20240904005812 | Algebraic Structures on Graphs Joined by Edges | [
"Daniel Pinzon",
"Daniel Pragel",
"Joshua Roberts"
] | math.CO | [
"math.CO",
"05C50, 05C25"
] |
Acknowledgements: We are grateful for the work of undergraduate students Shahriyar Roshan Zamir & Hope Doherty who assisted with example calculations.
Algebraic Structures on Graphs Joined by Edges
Daniel Pinzon, Daniel Pragel, & Joshua Roberts
===================================================
§ ABSTRACT
Let G_1 j≍ G_2, the j-join of two graphs, be the union of two disjoint graphs connected by j edges in a one-to-one manner. In previous work by Gyurov and Pinzon <cit.>, which generalized the work of Badura <cit.> and Rara <cit.>, the determinant of the two joined graphs was decomposed into sums of determinants of these graphs with vertex deletions or directed graph handles. In this paper, we define a homomorphism from a quotient of graphs with the join operation to the monoid of integer matrices. We find the necessary and sufficient properties of a graph so that joining it to any other graph will not change the other graph's determinant. We also demonstrate through examples that this decomposition allows us to more easily calculate determinants of chains of joined graphs. This paper begins the process of finding the algebraic structure of the monoid.
§ INTRODUCTION
Let 𝔾_j be the set of labeled finite simple directed graphs with at least 2j vertices such that, for G ∈𝔾_j, the set of vertices is V(G)={1, 2, …, m}, where m is the number of vertices of G. The set of edges is E(G) ⊆{(v,w) | v,w ∈ V, v≠ w}. We will at times refer to the “last" vertices of G using the convention -1,-2,-3,… for |V(G)|,|V(G)|-1,|V(G)|-2,…. Throughout this paper we will use the term graph in general to be a directed graph.
Let G, H ∈𝔾_j where V(G)={1,…,m} and V(H)={1,…,n}. Following <cit.>, we define the j-join of G and H as the graph formed by joining each of the “last" j distinct vertices of G with the corresponding “first" j distinct vertices of H, in both directions. See <Ref> below.
There is a choice as to whether to put the instructions of the joining operation on the graph elements or on the operation. The labeling contains the information on where to join one graph with another so that we have one well-defined operation rather than many operations. This motivates the use of labeled graphs. It will be clear from the definition below that this operation is associative.
Let G, H ∈𝔾_j. The j-join of G and H, denoted G j≍ H, has the vertex and edge sets below, where |V(G)|=m and each i∈ V(H) is relabeled as m+i in V(G j≍ H).
V(G j≍ H) = V(G) ∪{m+i| i∈ V(H)}, and
E(G j≍ H) = E(G) ∪{(m+1 -i, m+i),(m+i, m+1 -i) | 1 ≤ i ≤ j}
∪{(m+u,m+v)| (u,v)∈ E(H)}.
From <cit.>, we can write the determinant of the adjacency matrix of the j-join of two graphs G and H as a sum of determinants of the adjacency matrices of modifications of G and H. The two modifications are vertex deletions and directed graph handles. We describe them below.
(Vertex deletion) For a graph G and a vertex v∈ V(G) we denote by G ∖{v} the subgraph of G
obtained by removing the vertex v from V(G) and all edges that are incident with v from E(G).
Further, if R is a subset of vertices of G, we denote by G ∖ R the subgraph of G
obtained by deleting all vertices in R from G.
The operation of attaching a directed graph handle is attaching a copy of a directed path on 3 vertices, P_3, as described below.
(Directed Graph Handle)
For a graph G and vertices u,v∈ V(G), we denote G_[u,v] to be the graph where a new vertex
w=|V(G)|+1 is added to V(G) and a directed edge from vertex u to vertex w and a directed edge from vertex w to vertex v are added to E(G).
Vertex w is called the directed graph handle vertex of the directed graph handle [u,v].
Further, if B is a set of ordered pairs of elements of V(G), we denote G_B to be the graph
where for each [u,v]∈ B a new directed graph handle is attached to G.
We denote by |R| the number of elements of R and by |B| the number of handles in B. In this paper, we will regularly attach handles on graphs where a set of vertices R has been removed. If B is the set of handles, then we denote this graph as (G ∖ R)_B.
We will be using the main result from <cit.>, given below, which describes how to express the determinant of G j≍ H as a sum of the determinants of modifications of graphs. We sum over all possible vertex removals R from the set J={i| 1≤ i≤ j}, which consists of the “first" j vertices of H. We also modify G in a similar, conjugated way. That is, if i∈ R is removed from H, then the conjugated vertex -i∈ R^*={-i| i∈ R} is removed from G.
Given B, the set of appended directed graph handles on H, the conjugate set is defined as
B^*={[-r,-c] | -r,-c ∈ V(G),[c,r]∈ B}.
Note that the direction of the conjugated handles are reversed. Below we give an example of a term in the sum.
Let G, H ∈𝔾_10 and m=|V(G)|. Then, J={1,…,10}. Let R={4} and let B={[1,3],[2,5]} be handles made from J R. Then, the conjugate removal set and handle set for G are R^*={-4}={m-3} and B^*={[-3,-1],[-5,-2]}={[m-2,m],[m-4,m-1]}. The figure below shows (G R^*)_B^* and (H R)_B. Note that the open circle at m-3∈ V(G) and 4∈ V(H) correspond to vertex deletions at those vertices. Also, notice that the handles on G are in the opposite direction of those on H.
[Figure: the modified graphs (G ∖ R^*)_B^* (left) and (H ∖ R)_B (right) for this example. The open circles at vertex m-3 of G and vertex 4 of H mark the deleted vertices, and the handles [m-2,m] and [m-4,m-1] on G point in the opposite direction to the handles [1,3] and [2,5] on H.]
The determinant of the disjoint union of the two graphs above represents (up to sign) one of the terms in the summation below. The summation does not sum over all possible handles, but rather a subset of them. An allowable handle set is a set of handles B such that a vertex appears only once in the set and, for any two handles [a,b], [c,d], either [a,b]<[c,d] or [a,b]>[c,d], where the inequality is component-wise.
The theorem below is a full generalization of ideas first introduced by Rara <cit.> (j=1) and continued in <cit.> and <cit.> (j=2). We adopt the notation |G| to mean the determinant of the adjacency matrix of G.
<cit.>
Let G and H be graphs of order m and n, respectively. Let J be the set of vertices of H that are joined to G. Then
|G j≍ H| = ∑_R ⊂ J∑_B (-1)^|R|+|B||(G ∖ R^*)_B^*| |(H ∖ R)_B|
where the summation is over all allowable handle sets B.
The following corollary, which is equivalent to Theorem 1 in <cit.>, is immediate by the preceding theorem.
|G 1≍ H| = |G||H| - |G ∖{-1}| · |H ∖{1}|.
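A quick numerical sanity check of this corollary on random labeled graphs can be sketched as follows (illustrative only; the graph sizes and the random seed are arbitrary).

```python
# Sketch (NumPy): numerically check |G 1-join H| = |G||H| - |G \ {-1}|·|H \ {1}|
# on random undirected graphs, using adjacency-matrix determinants.
import numpy as np

rng = np.random.default_rng(2)

def random_adj(n):
    A = np.triu(rng.integers(0, 2, (n, n)), 1)
    return A + A.T

def join1(AG, AH):
    m, n = AG.shape[0], AH.shape[0]
    A = np.zeros((m + n, m + n), dtype=int)
    A[:m, :m], A[m:, m:] = AG, AH
    A[m - 1, m] = A[m, m - 1] = 1        # join last vertex of G to first vertex of H
    return A

d = lambda A: round(np.linalg.det(A))
for _ in range(100):
    AG, AH = random_adj(6), random_adj(7)
    lhs = d(join1(AG, AH))
    rhs = d(AG) * d(AH) - d(AG[:-1, :-1]) * d(AH[1:, 1:])
    assert lhs == rhs
```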
The next corollary and proceeding figure below illustrate how to express the 2-join of two graphs using the Theorem <ref>.
|G 2≍ H| = |G||H| - |G ∖{-1}| · |H ∖{1}| - |G ∖{-2}| · |H ∖{2}|
+ |G ∖{-1,-2}| · |H ∖{1,2}|
- |G_[-1,-2]| · |H_[2,1]| - |G_[-2,-1]| · |H_[1,2]|
§ AN EQUIVALENCE RELATION
For a given positive integer j, we can define an equivalence relation ∼_j. We use the terms of the summation in Theorem <ref> as a motivation for this definition.
For G, H ∈𝔾_j, we write G ∼_j H if and only if, for any vertex deletion sets R_1,R_2 and corresponding allowable handle sets B_1, B_2,
|(G ∖ R_1∪ R_2^*)_B_1∪ B_2^*| = |(H ∖ R_1∪ R_2^*)_B_1∪ B_2^*|.
Clearly this is an equivalence relation on 𝔾_j. We define the quotient 𝒢_j = 𝔾_j/∼_j. The following theorem shows that the j-join is a well-defined operation on the quotient set. As a consequence of the following theorem, we can define an induced j-join product on the equivalence classes,
[G] j≍ [H]=[G j≍ H].
For a fixed integer j>0, the j-join is a well-defined binary operation with respect to the equivalence relation ∼_j.
Let G_1, G_2, H_1, H_2 ∈𝔾_j, where we recall that their orders are at least 2j. Assume that
G_1 ∼_j G_2 and H_1 ∼_j H_2. We need to show that
(G_1 j≍ H_1) ∼_j (G_2 j≍ H_2).
Using <Ref>, we consider
|((G_1 H_1) ∖ R_1∪ R_2^*)_B_1∪ B_2^*|. We note that since since G_1 has at least 2j vertices and R_1, B_1 act on the first j vertices of G_i H_i, which are not the vertices involved in the j-join. Similarly, the conjugate sets only act on H_i. Using this fact and <Ref>, we have
|((G_1 H_1) ∖ R_1∪ R_2^*)_B_1∪ B_2^*|
=
|(G_1∖ R_1)_B_1) (H_1 ∖ R_2^*)_B_2^*|
= ∑_R ⊂ J∑_B (-1)^|R|+|B||(G_1 ∖ R_1∪ R^* )_ B_1∪ B^*| |(H_1 ∖ R ∪ R_2^*)_ B∪ B_2^*)|
= ∑_R ⊂ J∑_B (-1)^|R|+|B||(G_2 ∖ R_1∪ R^*)_B_1∪ B^* | |(H_2 ∖ R∪ R_2^*)_ B∪ B_2^*)|
= |((G_2 H_2) ∖ R_1∪ R_2^*)_B_1∪ B_2^*|,
where the third equality follows from the fact that G_1 G_2 and H_1 H_2.
§.§ Equivalence Classes
In this section, we will explore the necessary conditions for the existence of an identity element, a zero, and other equivalence classes. For simplicity, we shall use [G] instead of [G]_j when the j is clear from the context.
§.§.§ The Identity Class
We say that I∈𝔾_j is an identity graph if and only if for any G∈𝔾_j, [I j≍ G] =[G j≍ I] = [G]. That is, joining any graph with an identity graph does not change its equivalence class and thus, in particular, preserves its determinant.
Let j∈ and I∈𝔾_j. If for any vertex deletion sets R_1,R_2 and corresponding handle sets B_1, B_2,
|(I ∖ R_1∪ R_2^*)_B_1∪ B_2^*|=
(-1)^|R_1|+|B_1| if R_1=R_2, B_1=B_2
0 otherwise,
then I is an identity graph.
Let j ∈ℕ, G ∈𝔾_j, and I be as above. Then, using a similar argument as in the proof of Theorem <ref>,
|((I j≍ G) ∖ (R_1∪ R_2^*))_B_1∪ B_2^*|
=|(I ∖ R_1)_B_1 j≍ (G ∖ R_2^*)_B_2^*|
=∑_B∑_R ⊂ J (-1)^|R|+|B||(I ∖ (R_1∪ R^*))_B_1∪ B^*| |(G ∖ (R∪ R_2^*))_B∪ B_2^*|
=(-1)^|R_1|+|B_1||(I ∖ (R_1∪ R_1^*))_B_1∪ B_1^*| |(G ∖ (R_1∪ R_2^*))_B_1∪ B_2^*|
= |(G ∖ (R_1∪ R_2^*))_B_1∪ B_2^*|.
The third equality results from the properties of I causing all terms of the summation to vanish except where R=R_1 and B = B_1. Thus we have shown that if I satisfies the conditions of the theorem, then [I j≍ G] = [G]. Similarly, [G] = [G j≍ I].
For the 1-join, any path graph of order 4k is a member of the identity class.
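This can be checked directly for k=1: the sketch below verifies that P_4 satisfies the identity conditions of Theorem <ref> for the 1-join and that joining it to a random graph on either side leaves the determinant unchanged (the test graph and seed are arbitrary).

```python
# Sketch (NumPy): check that P_4 satisfies the identity-class conditions for the
# 1-join (|P_4| = 1, |P_4 \ {1}| = |P_4 \ {-1}| = 0, |P_4 \ {1,-1}| = -1) and that
# joining it to a random graph leaves the determinant unchanged.
import numpy as np

rng = np.random.default_rng(3)
d = lambda A: round(np.linalg.det(A))

P4 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])     # path 1-2-3-4
assert (d(P4), d(P4[1:,1:]), d(P4[:-1,:-1]), d(P4[1:-1,1:-1])) == (1, 0, 0, -1)

def join1(AG, AH):
    m, n = AG.shape[0], AH.shape[0]
    A = np.zeros((m+n, m+n), dtype=int)
    A[:m,:m], A[m:,m:] = AG, AH
    A[m-1, m] = A[m, m-1] = 1
    return A

A = np.triu(rng.integers(0, 2, (7, 7)), 1); A = A + A.T       # random test graph
assert d(join1(A, P4)) == d(A) and d(join1(P4, A)) == d(A)
```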
The following figure demonstrates the existence of representatives of the identity classes for the j-join operation for j≥1. The graph G has m vertices labeled j+1 through m+j.
j-Join Identity
[Figure: a graph G with |G|=(-1)^j, whose m vertices are labeled j+1 through m+j, together with j disjoint copies of P_2; the i-th copy joins vertex i to vertex m+2j+1-i, so that each vertex i∈{1,…,j} is paired with its conjugate vertex -i. Directed edges from the copies of P_2 to G are not necessary unless connectivity is desired.]
The graph in <Ref> is a representative of the identity class for any j.
Consider any graph G, with |G|=(-1)^j and order m. Let I_j be a disjoint union of G with j copies of P_2. We label the vertices of I_j as shown in <Ref>. Since |P_2|=-1, then it is clear that |I_j|=1.
Next, note that in (I_j∖ R_1 ∪ R_2^*)_B_1 ∪ B_2^*, if R_1 ≠ R_2, then there exists at least one copy of P_2 where one vertex is removed and the other is either isolated or part of a handle. In the former case, the determinant is zero. In the latter case, we use Harary's definition from <cit.> of the determinant of a graph and note that there are no spanning directed cycle decompositions of the modified graph. Thus, in order for |(I_j∖ R_1 ∪ R_2^*)_B_1 ∪ B_2^*| to be nonzero it must be the case that R_1 = R_2.
Now, assume R_1 = R_2. Then, each vertex in R_1 results in the deletion of a copy of P_2 which results in a change in the determinant by a factor of -1. Thus there will be a total change of (-1)^|R_1| to the determinant.
Suppose that we have (I_j∖ R_1 ∪ R_1^*)_B_1 ∪ B_2^* with B_1 ≠ B_2 and |(I_j∖ R_1 ∪ R_1^*)_B_1 ∪ B_2^*| ≠ 0. Then, there exists at least one handle [a,b] in B_1 or B_2 that is not in the other handle set. Since |(I_j∖ R_1 ∪ R_1^*)_B_1 ∪ B_2^*| ≠ 0, the handle vertex of that handle must be part of a directed cycle C that is a subgraph of (I_j∖ R_1 ∪ R_1^*)_B_1 ∪ B_2^*. This implies that C contains at least 4 copies of P_2 from I_j. Let n be the least positive integer such that n ∈ V(C). We observe that vertex n must be contained in one of the copies of P_2. So, either (n, -n) or (-n, n) ∈ E(C). Suppose (n, -n) ∈ E(C).
We observe that, beginning with vertex n, to travel along the directed cycle C, we move through a copy of P_2, then a handle, then followed by another copy of P_2 (in the opposite direction), and so on until arriving back at n. Formally, let a_1, a_2, …, a_k ∈{1, 2, …, j} be the positive vertices in P=I_j∩ C of the directed cycle starting and ending at n, in the order that they appear in C, so that a_1 = a_k = n. It follows that, for odd i ∈{1, …, k-1}, (a_i, -a_i) ∈ E(C) and the handle on P from -a_i to -a_i+1 is traversed in C. Additionally, for even i ∈{1, …, k-1}, (-a_i, a_i) ∈ E(C) and the handle on P from a_i to a_i+1 is traversed in C. Note that k≥ 4.
Furthermore, since n is the minimum of {a_1, a_2, …, a_k-1}, a_1 < a_3 and a_k-2 > a_k. This implies that there must be some i ∈{ 1, …, k-3} with a_i < a_i+2 and a_i+1 > a_i+3. But then the handles [a_i, a_i+1] and [a_i+2, a_i+3] must both be in B_1 or B_2. But this implies that B_1 or B_2 is not an allowable handle set.
The case when (-n,n) ∈ E(C) is handled analogously. Thus, we see that, if B_1 and B_2 are allowable handle sets and |(I_j∖ R_1 ∪ R_1^*)_B_1 ∪ B_2^*| ≠ 0, it must be the case that B_1 = B_2.
Thus, we see that (𝒢_j, j≍) is a monoid with identity as given above.
§ ALGEBRAIC STRUCTURE OF THE JOIN OPERATION
Let S be a semigroup and x,y∈ S. A sandwich operation ∙ on S is defined as x∙ y=xay, where a is an element of S.
Hickey showed that S under this sandwich operation is also a semigroup denoted (S,a) which we will call the sandwich semigroup on a.
Let G be a representative of any equivalence class of graphs [G]∈𝒢_j=𝔾_j/∼_j for some j∈ℕ. From the above, the equivalence class under the j-join is determined by all the modifications of G given by (G ∖ R_1∪ R_2^*)_B_1∪ B_2^*, where R_i⊆ J = {1,...,j} and B_i are handle sets on the vertices J ∖ R_i satisfying the conditions discussed in <Ref>. This also defines the conjugate sets R_2^*, B_2^*.
Now, consider the set of all allowable removal set and handle set pairs {(R,B) | R⊆ J, B an allowable handle set on J ∖ R}. From <cit.>, looking at the Laplace expansion, we see that there are k = \binom{2j}{j} such pairs. Let us enumerate them as (R_1,B_1)=(∅,∅), (R_2,B_2), ..., (R_k,B_k). The choice of how these are numbered is arbitrary, but we need to fix a convention here for the rest of the paper. Now, define the modification maps r_i(G)=(G ∖ R_i)_B_i for 1≤ i≤ k, and, equivalently, the conjugate modification maps c_l(G)=(G ∖ R_l^*)_B_l^* for 1≤ l≤ k. These are well defined on equivalence classes in the sense that any representative of a class yields the same determinants below.
Now, we can define a map from any graph equivalence class to a k × k integer matrix ϕ:𝒢_j→ M(k,ℤ) as
ϕ([G])=[m_il]=[ |(r_i∘ c_l) (G)| ].
We can now prove the main results of our paper that the algebraic structure of the j-join under the j-join equivalence class is a sub-semigroup of the sandwich semigroup isomorphic to the k × k integer matrices under matrix multiplication. We begin with the following theorem that shows that our function is a homomorphism.
Let [G],[H]∈𝒢_j and ϕ as in the discussion above. Then,
ϕ([G] j≍ [H])=ϕ([G j≍ H])=ϕ([G]) E_j ϕ([H]),
where E_j=[e_il] with
e_il = (-1)^|R_i|+|B_i| if i=l,
e_il = 0 otherwise.
Let [m_il] = ϕ([G j≍ H]) = [ |(r_i∘ c_l) (G j≍ H)| ] as defined above. Then, since |V(G)|, |V(H)|≥ 2j, we have r_i(G j≍ H)=r_i(G) j≍ H and c_l(G j≍ H) = G j≍ c_l(H), and so (r_i∘ c_l) (G j≍ H) = r_i(G) j≍ c_l(H) = (G ∖ R_i)_B_i j≍ (H ∖ R_l^*)_B_l^* for the vertex removal sets R_i, R_l and handle sets B_i, B_l.
Then, calculating the determinant using <Ref>, we have
m_il=|(r_i∘ c_l) (G j≍ H)| = |(G ∖ R_i)_B_i j≍ (H ∖ R_l^*)_B_l^*|=
∑_B∑_R ⊂ J (-1)^|R|+|B||(G ∖ R_i∪ R^*)_B_i∪ B^*| |(H ∖ R∪ R_l^*)_B∪ B_l^*|.
Note that the vertices of R_i, B_i are never the same as those of R^*, B^*, since the number of vertices of G is at least 2j; the same holds for H.
Notice that the |(G ∖ R_i∪ R^*)_B_i∪ B^*| are the elements of the i^th row of ϕ([G]) and, similarly, the |(H ∖ R∪ R_l^*)_B∪ B_l^*| are the elements of the l^th column of ϕ([H]).
Then, we see that
m_il=∑_p ϕ([G])_ip (E_j)_pp ϕ([H])_pl,
which shows that ϕ is a homomorphism of monoids from the j-equivalent graphs under the j-join operation to the k× k matrices under the sandwich operation with sandwich element E_j. Therefore we can define
ϕ([G])∙ϕ([H])=ϕ([G]) E_j ϕ([H]) =ϕ([G] j≍ [H])=ϕ([G j≍ H]).
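For j=1 the construction is small enough to verify directly: k=2, the only modifications are deletion of the first or the last vertex (no handles occur for j=1), and E_1 = diag(1,-1). The sketch below builds ϕ for this case and checks the sandwich homomorphism on random graphs; graph sizes and seeds are arbitrary.

```python
# Sketch (NumPy): for j = 1 the pairs are (R,B) = (∅,∅) and ({1},∅), so k = 2 and
# phi([G]) is the 2x2 matrix of determinants below; E_1 = diag(1, -1).
# The check verifies phi(G 1-join H) = phi(G) E_1 phi(H) on random graphs.
import numpy as np

rng = np.random.default_rng(4)
d = lambda A: round(np.linalg.det(A))

def phi1(A):
    return np.array([[d(A),         d(A[:-1, :-1])],
                     [d(A[1:, 1:]), d(A[1:-1, 1:-1])]])

E1 = np.diag([1, -1])

def join1(AG, AH):
    m, n = AG.shape[0], AH.shape[0]
    A = np.zeros((m+n, m+n), dtype=int)
    A[:m,:m], A[m:,m:] = AG, AH
    A[m-1, m] = A[m, m-1] = 1
    return A

def random_adj(n):
    A = np.triu(rng.integers(0, 2, (n, n)), 1)
    return A + A.T

for _ in range(50):
    AG, AH = random_adj(5), random_adj(6)
    assert np.array_equal(phi1(join1(AG, AH)), phi1(AG) @ E1 @ phi1(AH))
```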
The homomorphism ϕ:(𝒢_j, j≍)→ (M(k,ℤ),E_j) is one-to-one, where (M(k,ℤ),E_j) is the sandwich monoid with sandwich element E_j as described above.
This proof follows straight from the definitions. If [G_1], [G_2]∈𝒢_j are such that [G_1]≠ [G_2], then there exist i,l such that |(r_i∘ c_l) (G_1)|≠|(r_i∘ c_l) (G_2)|. But this implies that ϕ([G_1])≠ϕ([G_2]) as matrices.
The following lemma gives us a matrix representation for this monoid.
[Hickey]
Let S be a semigroup and let a∈ S. If the semigroup (S, a) has identity element 1 then
* S has identity element
* the elements a, 1 lie in the unit group of S and are inverse to each other,
* (S,a)≅ S
Let γ:(M(k,ℤ),E_j)→ M(k,ℤ) be the isomorphism from part iii of the lemma above. Then, define Φ = γ∘ϕ. <Ref> above implies that (Φ(𝒢_j), ·) is a submonoid of (M(k,ℤ),·).
The monoid (𝒢_j, j≍) is isomorphic to a submonoid of (M(k,ℤ),·).
It remains an open question what the structure of this submonoid is. The next two sections will begin to explore the structure.
§.§ The [0] and [n] classes
We generalize the identity class by defining [n]_j, n∈{0,1,2,...}, to have the property that joining a representative to any graph on the left or right multiplies the determinant of that graph by n. That is, for any N∈ [n]_j and any G, H∈𝔾_j, ϕ([G j≍ N j≍ H])=nϕ([G j≍ H])=nϕ([G])E_jϕ([H]). Hence, N must have the property that ϕ([N])=nE_j. This gives us the following necessary and sufficient conditions on the graph N:
Let j∈ℕ, n∈ℕ∪{0}, and N∈𝔾_j. Then, N∈ [n]_j if and only if N satisfies the condition that for any vertex deletion sets R_1,R_2 and corresponding allowable handle sets B_1, B_2 and their conjugate sets,
|(N ∖ R_1∪ R_2^*)_B_1∪ B_2^*|=
n(-1)^|R_1|+|B_1| if R_1=R_2, B_1=B_2
0 otherwise.
We observe that this implies that
* |N|=n
* For any vertex deletion set R and handle set B and their conjugate sets R^*, B^* where at least one set is nonempty,
|(N ∖ R)_B|=|(N ∖ R^*)_B^*|=0
An example of a graph with the properties above is similar to the example of the identity graph given in <Ref>. We use the fact that the determinant of the complete graph is |K_n+1| = (-1)^n n. We need to include an extra copy of P_2 if n+j is odd so that the resulting determinant is n.
N=
K_n+1 ⊔ (∐_i=1^j P_2 ) if n+j is even
K_n+1 ⊔ (∐_i=1^j+1 P_2 ) if n+j is odd
The fact that these are the only graphs is proven in the same way that the uniqueness of the identity was proven.
§.§ Group Structure in the Semigroup
Consider [G]∈𝒢_j such that there exists [G^-1] ∈𝒢_j so that [G j≍ G^-1]=Id_j. Then ϕ([G j≍ G^-1])=ϕ([G])E_jϕ([G^-1])=E_j. If we take the determinant of both sides, this gives us
det(ϕ([G])) det(ϕ([G^-1]))=1.
This means that all of these graphs map to invertible matrices with determinant ±1. So, let L be the set of all classes of graphs in 𝒢_j that have inverses, that is, the largest group in the semigroup 𝒢_j. Then Φ(L)⊂ GL(k,ℤ). We know that GL(n,ℤ) is finitely generated, so if we can find graphs that map to the generators then we will have a complete representation of this group.
§ APPLICATIONS
In this section we will show how we can use the homomorphism to easily calculate the n-fold join of various graphs.
§.§ Complete Graphs
§.§.§ 1-join
By work in <cit.>, the following theorem holds.
Let m and n be positive integers and K_m the complete graph on m vertices. Then
|K_m j≍ K_n| = (-1)^m+n(m+n-3) for j=1, and |K_m j≍ K_n| = 0 for j ≥ 2.
We will denote the n-fold j-join of a graph G by nj G. That is,
nj G = G j≍ G j≍ ⋯ j≍ G, with n j-joins.
For example, the 0-fold j-join of a graph G is just the graph itself, the 1-fold j-join of G is G j≍ G, and the 2-fold j-join is G j≍ G j≍ G.
For integers n,m where n ≥0 and m≥3,
ϕ(n1 K_m) = (-1)^mn+m+n[ -[ (m-2)n+(m-1) ] (n+1)(m-2); (n+1)(m-2) -[(m-2)n+(m-3)] ]
Note that this will immediately imply the following result by taking
the (1,1) element of the matrix given by ϕ( n1 K_m).
For integers n,m where n ≥0 and m≥3, the determinant of the n-fold 1-join of the complete graph on m vertices, with any labeling, is given by
| n1 K_m | = (-1)^(m+1)(n+1)[ (m-2)n+m-1 ].
The proof is by induction on n.
For n=0 we have,
ϕ( 01 K_m) = ϕ( K_m ) =
[ |K_m| |K_m ∖{m}|; |K_m ∖{1}| |K_m ∖{1,m}| ]
= (-1)^m-1[ m-1 -(m-2); -(m-2) m-3 ].
Assuming the result for n, we now examine n+1, noting that for the 1-join, E_1 from Theorem <ref> is the matrix [ 1 0; 0 -1 ]. Then
ϕ( n+11 K_m) = ϕ( (n1 K_m) 1≍ K_m ) = ϕ( n1 K_m )· E_1 ·ϕ(K_m)
can be expressed as
[ -[ (m-2)n+(m-1) ] (n+1)(m-2); (n+1)(m-2) -[(m-2)n+(m-3)] ]·[ 1 0; 0 -1 ]·[ m-1 -(m-2); -(m-2) m-3 ]
scaled by (-1)^mn+n+1. A straightforward calculation gives the result.
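The induction can also be checked numerically by iterating the sandwich product, as in the following sketch; the ranges of m and n are arbitrary, and the last assertion checks the (1,1) entry against the corollary.

```python
# Sketch (NumPy): check the closed form for phi of the n-fold 1-join of K_m by
# iterating the sandwich product phi(K_m) E_1 phi(K_m) E_1 ... phi(K_m).
import numpy as np

E1 = np.diag([1, -1])

def phi_Km(m):
    return (-1) ** (m - 1) * np.array([[m - 1, -(m - 2)], [-(m - 2), m - 3]])

def phi_formula(m, n):
    return (-1) ** (m * n + m + n) * np.array(
        [[-((m - 2) * n + (m - 1)), (n + 1) * (m - 2)],
         [(n + 1) * (m - 2), -((m - 2) * n + (m - 3))]])

for m in range(3, 8):
    P = phi_Km(m)
    for n in range(0, 6):
        assert np.array_equal(P, phi_formula(m, n))
        # determinant of the n-fold 1-join = (1,1) entry, matching the corollary
        assert P[0, 0] == (-1) ** ((m + 1) * (n + 1)) * ((m - 2) * n + m - 1)
        P = P @ E1 @ phi_Km(m)
```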
§.§.§ 2-join
For integers n,m where n ≥1 and m≥3,
| n2 K_m | = 0.
Note that ϕ(K_m) and E_2 are, respectively the matrices
(-1)^m[ - (-1+m) (-2+m) (-2+m) - (-3+m) 1 1; (-2+m) -(-3+m) - (-3+m) (-4+m) -1 -1; (-2+m) - (-3+m) -(-3+m) (-4+m) -1 -1; -(-3+m) (-4+m) (-4+m) -(-5+m) 1 1; 1 -1 -1 1 0 0; 1 -1 -1 1 0 0; ],
and
E_2=[ 1 0 0 0 0 0; 0 -1 0 0 0 0; 0 0 -1 0 0 0; 0 0 0 1 0 0; 0 0 0 0 -1 0; 0 0 0 0 0 -1; ]
Then by direct calculation we see that
ϕ( K_m 2≍ K_m ) = ϕ(K_m) · E_2 ·ϕ(K_m)
is the zero matrix. Since every further sandwich product with the zero matrix remains zero, this implies the result for all n ≥ 1.
We note that the above theorem has significance despite the previously known result that |K_m j≍ K_m| = 0 for j ≥ 2. It can be the case that |G|=0 with |G j≍ H| ≠ 0, as can be seen by taking G = P_3, H=P_5, with the canonical labeling and j = 1.
As we have discussed in this paper, the labeling of the graph contains the information on where to join the graph.
Let P_4 be a path graph on 4 vertices where we label them starting at one end as 1,3,2,4.
For integer n ≥0,
|n2P_4| = 1
Using Harary's definition of the determinant, we calculate
ϕ(P_4)=
[ 1 0 0 0 0 0; 0 -1 -1 0 -1 -1; 0 -1 0 0 0 0; 0 0 0 1 0 0; 0 -1 0 0 0 -1; 0 -1 0 0 -1 0; ]
Then, since the first row and first column of both ϕ(P_4) and E_2 have a 1 in the (1,1) entry and zeros elsewhere, the (1,1) element of the product will always be 1.
§ FUTURE WORK
Much work remains to be done in this area. The structure of the semigroup is unknown as it depends on the surjectivity of ϕ. That is, will any integer matrix be the image of a graph under ϕ? A possible first step is to explore the generators and relators of the relevant submonoids.
Parallel work can be done using a different matrix other than the adjacency such as the Laplacian, Hermitian, etc. The main obstacle here is to recreate a similar sum decomposition for the determinant of join of graphs as done in <cit.>. The authors have preliminary work done in this direction for the Laplacian matrix.
From the point of view of calculations, the techniques shown above can be applied to find generating functions for the determinant of any chain of graphs joined in various ways, as done in <Ref>.
20
Bar Bar-Noy, A. and Naor J., Sorting, Minimal Feedback Sets and Hamilton Paths in Tournaments, SIAM Journal of Discrete Mathematics, 3(1) (1990), 7 - 20, url: <https://epubs.siam.org/doi/10.1137/0403002>.
Bad Badura, L., Two graphs with a common edge, Discussiones Mathematicae. Graph Theory, 34 (2014) no. 3, 497-507, University of Zielona Góra Press, url: <https://doi.org/10.7151/dmgt.1745>.
Lap Goldberg, L., Matrix Theory with Applications, McGraw-Hill International Editions, Mathematics and Statistics Series, 1991.
BP Gyurov B. and Pinzon, D., Determinant of Graphs Joined by j Edges, Thai Journal of Mathematics, 14 (2016) no. 2: 353-367, url: <https://thaijmath2.in.cmu.ac.th/index.php/thaijmath/article/view/602>.
H Harary, F., The Determinant of the adjacency matrix of a graph, SIAM Rev., 4 (1961), 202 - 210, url: <https://www.jstor.org/stable/2027712>.
JBH Hickey, J.B., Semigroups Under A Sandwich Operation, Proceedings of The Edinburgh Mathematical Society, 26(3) (1983) 371-382.
R Rara, H.M., Reduction procedures for calculating the determinant of the adjacency matrix of some graphs and the singularity of square planar grids, Discrete Mathematics, 151 (1996), 213-219, url: <https://core.ac.uk/download/pdf/81194617.pdf>.
BG Sookyang, S., Arworn, S., and Gyurov, B., Determinant of Graphs joined by two edges, Thai Journal of Mathematics, 10 (2012) no. 1: 101-111, url: <https://thaijmath2.in.cmu.ac.th/index.php/thaijmath/article/view/304>.
|
http://arxiv.org/abs/2409.03401v1 | 20240905103736 | Grand-potential phase field simulations of droplet growth and sedimentation in a two-phase ternary fluid | [
"Werner Verdier",
"Alain Cartalade",
"Mathis Plapp"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"physics.flu-dyn"
] |
Grand-potential phase field simulations of droplet growth and sedimentation
in a two-phase ternary fluid
Werner Verdier^1,2, Alain Cartalade^1, Mathis Plapp^2[Corresponding author, [email protected]]
^1 Université Paris-Saclay, CEA, Service de Thermo-hydraulique
et de Mécanique des Fluides, 91191 Gif-sur-Yvette, France
^2 Laboratoire de Physique de la Matière Condensée, CNRS,
École Polytechnique,
Institut Polytechnique de Paris, 91120 Palaiseau, France
=======================================================================================================================================================================================================================================================================
§ ABSTRACT
A methodology is built to model and simulate the dynamics of domain coarsening
of a two-phase ternary liquid with an arbitrary phase diagram. High numerical
performance is obtained through the use of the phase-field method for interface
capturing, a lattice Boltzmann method numerical scheme for all the model equations,
and a portable, parallel simulation code running on multiple GPUs. The model is
benchmarked against an analytic solution for a ternary diffusion couple.
It also reproduces the well-known power law for droplet coarsening during
Ostwald ripening without fluid flow. Large-scale simulations with flow illustrate
the effects of momentum transport and buoyancy, as well as droplet coalescence
and sedimentation.
Keywords: Phase-field models, Phase separation, Multicomponent mixtures, Lattice-Boltzmann method
§ INTRODUCTION
When a homogeneous solid or liquid mixture is rapidly quenched into a thermodynamically
unstable state, phase separation occurs. After an initial stage of spinodal decomposition,
during which microscopic fluctuations are amplified with time, spatial domains of the new
equilibrium phases form and are separated by well-defined interfaces with a characteristic thickness.
At later times, domain coarsening occurs: under the driving force of capillarity, the total interface
area is progressively reduced by the elimination of geometric features with high
curvature, such as protrusions or small domains.
This phenomenon has been extensively studied because it provides an example of a system
that never reaches equilibrium, but instead exhibits scaling laws <cit.>.
Indeed, in a system of infinite size, domain coarsening continues indefinitely
and leads to patterns that are scale-invariant, that is, the structure and geometry
of the pattern is statistically invariant in time, and only its overall scale grows
with time as a power law. The growth exponent depends on the underlying transport
processes (diffusion, hydrodynamics <cit.>, or elasticity)
and on the nature of the order parameter that describes the domain (scalar, vector or tensor,
conserved or non-conserved) <cit.>.
Besides its theoretical interest, phase separation is also important for a large number
of industrially relevant processes, such as the formation of porous glasses and membranes.
One example that motivated the present work is the conditioning of nuclear waste in glass: the radioactive substances are mixed with a glass matrix, a mixture of glass formers
that is optimized for long-term resistance to environmental stresses <cit.>.
In order to increase the amount of waste, glass ceramics are being studied as an alternative
to conventional glass. For a simplified ternary system, a liquid-liquid phase separation can occur
when its global composition lies within a miscibility gap. The composition of the first liquid phase is representative of waste,
whereas the composition of the second one is representative of the glass matrix <cit.>.
For the study of generic features of phase separation, such as spinodal decomposition
dynamics and scaling laws in coarsening, the Cahn-Hilliard equation <cit.> and its
various generalizations have been extensively used. Its appealing feature is that it can be
simply derived from out-of-equilibrium thermodynamics, taking as a starting point
the description of heterogeneous systems by a free-energy functional. The Cahn-Hilliard
equation can describe the complete time evolution of a phase-separating system, from
the initial spinodal decomposition to the late-stage coarsening, if transport is
governed by diffusion. It can be coupled with the Navier-Stokes
equation <cit.> to model
coarsening that is mediated by hydrodynamics, as well as the crossover between different
coarsening regimes <cit.>.
While the Cahn-Hilliard equation is thus an excellent tool to explore generic features
of phase-separating systems, it is rather difficult to adapt it for the description
of specific materials. There are two reasons for this. Firstly, the Cahn-Hilliard
equation is formulated for a binary system, with a concentration as the only dynamic
variable, and a free-energy functional that consists of a double-well potential and a
gradient term. In this formulation, the interface free energy is controlled by the interplay
of these two terms. It is therefore difficult to modify the bulk thermodynamic properties,
which are set by the curvature of the free-energy function, while leaving the interface energy
invariant <cit.>. Secondly, for multi-component
mixtures, there are several independent chemical compositions, and the gradient energy
coefficient becomes a symmetric matrix, with entries that are in principle determined
by the pairwise interaction between the different components. However, since those are
not directly known, the system is underdetermined, that is, for a target interface
energy, there are many possible choices for the gradient coefficients. This choice
has non-trivial consequences for the interface structure: interface adsorption of
certain components can occur, which modifies the interface energy and the macroscopic
conservation laws for moving interfaces <cit.>.
An alternative way for describing interfaces in multicomponent mixtures has been
developed in the framework of phase-field theory <cit.>.
The interface is described by a
scalar phase field, which can be seen as a smoothed indicator function (with 1
and 0 corresponding to presence or absence of the phase). The equation of motion
for this field derives from a Ginzburg-Landau free energy functional and is
coupled to the concentration fields. The latter are then governed by their own
free energy functions, which do not come into play for the determination of the
interface properties, and which can therefore be chosen at will. The connection
between the phase-field model and the more traditional free-boundary problem can
be made using the well-established technique of matched asymptotic expansions
<cit.>.
For the quantitative description of moving interfaces in multi-component mixtures,
the grand-potential model
(which is a reformulation of the earlier Kim-Kim-Suzuki model <cit.>)
has been widely used and benchmarked <cit.>.
It corresponds to a grand-canonical formulation of the
mixture thermodynamics, with the chemical potentials as basic variables, which
is suitable because two phases that coexist in a mixture have different
compositions but equal chemical potentials.
Here, we couple a grand-canonical phase-field model for multicomponent mixtures
with a description of fluid flow using the lattice-Boltzmann method
(LBM) <cit.> to achieve
a model for domain coarsening that can be used for specific substances with given
thermodynamics (obtained, for example, from a CALPHAD database <cit.>). This model
cannot describe the initial stages of spinodal decomposition, because the grand-potential
formulation requires a monotonic relation between composition and chemical potentials, but it
provides a quantitative description of the late-stage domain coarsening regime. Since, in
real substances, the two phases generally do not have the same density, we include
buoyancy in the model, which makes it possible to describe sedimentation of droplets
of the heavier phase.
We develop an implementation in which all the equations, including the phase-field
model, are solved within the LBM formalism <cit.>.
Indeed, while LBM was initially developed for fluid flow, it was soon recognized
that it is a general method for the time integration of partial differential
equations, and we have previously used this formulation for solidification
problems <cit.>. The advantage is that the simulation
code is entirely formulated in the LBM framework, and therefore high-performance
algorithms developed for LBM can be used. In particular, LBM is well adapted
for GPU parallelization.
In the present manuscript, we will first describe the grand-potential phase-field
model we use, which is similar to previously published work. Specifically, we
model a two-phase three-component fluid. We will then detail our
numerical implementation, and present a detailed benchmark study on a ternary
diffusion couple, which shows that non-trivial interface conditions can be accurately
resolved by our formulation. Then, we present simulations of domain coarsening,
in two dimension without fluid flow as a benchmark, and in three dimensions
with fluid flow, in a large system which contains many droplets. These simulations
are carried out with the high-performance computing code
LBM_Saclay <cit.>. These illustrative simulations
demonstrate that our code can be used to perform large-scale studies, and thus is a
suitable tool for the investigation of practical problems.
§ MODEL
§.§ Thermodynamics
We consider a ternary mixture, and denote by c^A(x⃗,t), c^B(x⃗,t),
and c^C(x⃗,t) the local molar fractions of the A, B, and C components.
Since c^A+c^B+c^C=1, only two concentration fields are independent; we choose
to eliminate c^C by writing c^C=1-c^A-c^B, and write the (Helmholtz) free energy
density of a homogeneous system as f(c^A,c^B). For a specific substance, this
free energy function can be typically obtained from a CALPHAD <cit.> database.
Such a database usually gives the molar Gibbs free energy, but we will suppose that
the molar volume is constant and independent of the composition, so that Gibbs and
Helmholtz free energies are equivalent. The variables that are thermodynamically
conjugate to the compositions are the diffusion potentials, that is, the difference
of the chemical potentials of A and C and B and C, respectively,
μ^α = V_a ∂ f/∂ c^α (α=A,B)
where V_a is the atomic volume (the molar volume divided by Avogadro's number).
This factor has been included to give the diffusion potentials (which will also be
called chemical potentials in the following) their usual dimension of energy <cit.>.
For a phase-separating system, the “free energy landscape” f(c^A,c^B) has a
double-well structure, with two convex regions separated by a concave one; the latter
corresponds to the compositions for which a homogeneous system is thermodynamically
unstable. At equilibrium, the system is in a coexistence of two phases, called
“phase 0” and “phase 1” in the following, which have different compositions,
each one being located in one of the wells of the free energy function.
Minimization of the free energy under the constraint of global mass conservation
yields the conditions for phase coexistence:
V_a ∂ f_0/∂ c^α|_c^α,eq_0
= V_a ∂ f_1/∂ c^α|_c^α,eq_1 =
μ^α,eq,
f_0(c^A,eq_0, c^B,eq_0) - f_1(c^A,eq_1,
c^B,eq_1)
= 1/V_a∑_α=A,Bμ^α,eq
(c^α,eq_0 - c^α,eq_1) ,
where μ^α,eq, c_0^α,eq, and c_1^α,eq are the chemical
potentials and the compositions of phase 0 and 1 at two-phase equilibrium.
Geometrically, this corresponds to a “common tangent plane” to the free energy landscape.
From this geometric picture, or from a simple counting of degrees of freedom (three equations
for four unknown equilibrium compositions), it is clear that the solution to these
equations is not unique. In the space of compositions, which is usually visualized in
the Gibbs simplex, the various possible equilibria are indicated by tie lines
(see Figure <ref> below).
The global equilibrium corresponds to the unique tie line which contains the
composition inventory (the average composition) of the closed system.
The conditions for phase coexistence can be conveniently reformulated in the
grand-canonical framework for open system, in which the fundamental variables
are the diffusion potentials, and the relevant thermodynamic potential is the
grand potential,
ω (μ^A, μ^B) = f(c^A(μ^A, μ^B), c^B(μ^A,μ^B))
- 1/V_a∑_α=A,Bμ^α c^α(μ^A, μ^B),
which is the Legendre transform of the free energy. In writing down this formula,
we have supposed that the functions μ^α(c^A,c^B) can be inverted to yield
c^α(μ^A,μ^B). This is guaranteed only if these functions are monotonous,
which corresponds to a convex free energy landscape. In the case of a phase-separating
system, the Legendre transform must be taken separately for each convex region in
composition space (see below for details), which yields two different grand potential
functions for the two phases, ω_0 and ω_1. Phase coexistence is then
simply given by
ω_0(μ^A,μ^B)=ω_1(μ^A,μ^B).
This defines a coexistence line in the space of the intensive variables (the two diffusion
potentials), each point of which corresponds to a tie line in composition space.
In general, the calculation of the Legendre transform in Eq. (<ref>)
cannot be performed analytically, since free energy functions typically involve both polynomials and logarithms of the concentrations. It can always be performed numerically, but in
the case of interest here
we can exploit the fact that during the late stages of the phase separation
process, the concentrations will be close to the global equilibrium concentrations.
Therefore, only the vicinity in concentration space of the reference equilibrium
is relevant, and we can perform a second-order Taylor expansion of the free energy
around the equilibrium concentrations for each phase,
as was already done in previous works <cit.>,
f_π(c^A,c^B) = f_π(c^A,eq_π,c^B,eq_π)+∑_α (μ^α,eq/V_a) (c^α-c^α,eq_π)
+ 1/2∑_α,β K^αβ_π(c^α-c^α,eq_π)(c^β-c^β,eq_π).
Here and in the remainder of the text, exponents in Greek letters
and the index π identify components A or B, and phase 0 or 1, respectively.
In Eq. (<ref>), we have used Eq. (<ref>) in the first order terms, and
the second order coefficients are given by
K^αβ_π = ∂^2 f/∂ c^α∂ c^β|_c^A,eq_π,c^B,eq_π.
In this quadratic approximation, Eq. (<ref>) yields a linear relation between the
chemical potentials and the compositions,
μ^α_π(c^A,c^B)=μ^α,eq+K^α A_π(c^A-c^A,eq_π)
+K^α B_π(c^B-c^B,eq_π).
This equation can easily be inverted to obtain c^α(μ^A,μ^B), which, together with Eq. (<ref>), yields an analytic approximation for the grand potential.
The procedure outlined above is valid in the late stage of phase separation for arbitraty
free energy functions. Since, in the present contribution, we do not intend to model a
particular material, but are rather interested in benchmark calculations, we choose as
an example a simple model system, in which the matrix of second order coefficients is
diagonal, with equal values for the two phases,
K_π^αβ = K δ_αβ,
where δ_αβ is the Kronecker symbol. This corresponds to circular parabolic
free energy wells with equal curvatures for both phases.
Furthermore, we will switch to the
new variables μ̃^α=μ^α-μ^α, and
ω̃_π=ω_π(μ^A,μ^B)-ω_π(μ^A,,μ^B,), which amounts
to choosing the reference values for the chemical potentials and the grand potential.
In these variables, the grand potentials can be expressed as
ω̃_π (μ̃^A, μ̃^B)=- 1/(2 K V_a^2)∑_α=A,B(μ̃^α)^2- 1/V_a∑_α=A,Bμ̃^α c^α,eq_π.
The phase diagram obtained from this model with c^A,eq_0=c^B,eq_0=0.3 and
c^A,eq_1=c^B,eq_1=0.4 is displayed in Figure <ref>.
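The coexistence condition ω̃_0 = ω̃_1 and the associated tie lines can be made explicit for this parabolic model; the sketch below evaluates them for the equilibrium compositions quoted above, with K = V_a = 1 as purely illustrative values (not parameters of the paper).

```python
# Sketch (NumPy): coexistence and tie lines for the parabolic model above.
# With equal diagonal curvatures, omega~_0 = omega~_1 reduces to
# sum_alpha mu~^alpha (c0_eq^alpha - c1_eq^alpha) = 0; for the symmetric choice
# c0_eq = (0.3, 0.3), c1_eq = (0.4, 0.4) this is the line mu~^A + mu~^B = 0.
# K = Va = 1 are illustrative values only.
import numpy as np

K, Va = 1.0, 1.0
c0_eq = np.array([0.3, 0.3])
c1_eq = np.array([0.4, 0.4])

def omega_tilde(mu, c_eq):
    return -np.sum(mu**2) / (2 * K * Va**2) - np.sum(mu * c_eq) / Va

for muA in np.linspace(-0.05, 0.05, 5):
    mu = np.array([muA, -muA])                              # point on the coexistence line
    assert np.isclose(omega_tilde(mu, c0_eq), omega_tilde(mu, c1_eq))
    c0, c1 = c0_eq + mu / (K * Va), c1_eq + mu / (K * Va)   # tie-line endpoints in (c^A, c^B)
    print(mu, c0, c1)
```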
§.§ Grand potential phase field formalism
In a multi-component Cahn-Hilliard model, the free energy landscape defined above
would be supplemented by square gradient terms in the free energy functional
to capture the free energy cost of inhomogeneities, which then adds non-local terms
(containing Laplace operators) to the diffusion potentials. Instead, we choose to
describe phases and interfaces by an additional scalar phase field
φ(x⃗, t), with values in [0, 1], whose extrema identify one of
the two bulk liquid phase. The smooth variations of φ will locate the
diffuse interfaces.
The model is based on a phenomenological grand potential functional
<cit.> over the volume V of the system,
function of the phase field φ and of the fields of intensive thermodynamical
variables μ^A, μ^B,
Ω[φ, μ^A, μ^B] =
∫_V (ω_ int(φ)+ω_ th(φ,μ^A,μ^B)).
The two contributions describe the interfacial and bulk (thermodynamic) contributions to
the total grand potential, respectively.
The interfacial part,
ω_ int(φ) = (ζ/2) |∇φ|^2 + H f_dw(φ),
contains a square gradient energy term and the double-well function f_dw(φ) =
8φ^2(1-φ)^2. They are parametrized by the
characteristic energy scales H ([E] · L^-3, height of the double-well)
and ζ ([E] · L^-1, gradient energy coefficient). The system
evolves to minimize Ω: the double-well term, favoring a sharp profile
for φ at the interface, will then play against the gradient energy
term, which favors smooth variations of φ.
Assuming a plane interface at equilibrium (where the bulk term ω_ th is
absent), the interface solution is given by the hyperbolic tangent profile
φ_0(x) = (1/2)( 1 + tanh(2x / W)),
with
W = √(ζ / H)
the characteristic interface width. Reintroducing this profile in the integral
(<ref>) gives a characteristic value of the surface tension
due to the diffuse interface,
σ = (2/3) H W.
This relation also identifies H as the characteristic energy-per-volume
scale of the excess free energy stored in the diffuse interface.
More details about these calculations can be found in introductory
texts about the phase-field method, for example in Refs. <cit.>.
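The relation σ = (2/3) H W can also be verified by direct quadrature of the interfacial free energy evaluated on the tanh profile. A short Python sketch (an illustration added here; the values of W and H are arbitrary and ζ = H W² follows from the definition of the interface width):

    import numpy as np

    W, H = 1.0e-2, 2.0            # arbitrary illustrative values
    zeta = H * W**2               # gradient energy coefficient, from W = sqrt(zeta/H)

    x = np.linspace(-10 * W, 10 * W, 20001)
    phi = 0.5 * (1.0 + np.tanh(2.0 * x / W))            # equilibrium profile
    dphi = np.gradient(phi, x)
    omega_dw = 8.0 * phi**2 * (1.0 - phi)**2            # double-well
    omega_int = 0.5 * zeta * dphi**2 + H * omega_dw     # interfacial energy density

    sigma_numerical = np.trapz(omega_int, x)
    print(sigma_numerical, 2.0 * H * W / 3.0)           # both close to 1.333e-2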
The bulk contribution is an interpolation between the grand potentials of the
two phases,
ω_ th(φ, μ^A, μ^B) = [1-p(φ)] ω_0(μ^A, μ^B) + p(φ) ω_1(μ^A,μ^B) ,
with p(φ)=3φ^2-2φ^3 an interpolation function odd around φ = 1/2,
satisfying p({0, 1}) = {0, 1} and p'({0, 1}) = 0, that can be seen as a smoothed
step function <cit.>.
The interface-tracking equation, also known as the Allen-Cahn equation, is obtained
by relating linearly the time evolution of φ and the decrease in Ω,
expressed as the variational derivative,
∂_t φ = - M_φδΩδφ.
Here, M_φ is a kinetic coefficient (phase-field mobility). The terms in this
equations that are generated from the interfacial energy stabilize the interface
profile, whereas the bulk term creates a driving force for interface motion if
the grand potentials of the two phases differ.
It is convenient to remove the energy dimensions: define the dimensionless
chemical potentials μ^α = μ̃^α/ (K V_a), and
ω_π (μ^A, μ^B) = ω̃_π (μ̃^A,
μ̃^B)/K
= - (1/2)∑_α=A,B( (μ^α)^2 +
2 μ^α c^α,_π).
We can now write the interface tracking equation (<ref>)
in terms of the fields φ, μ^A and μ^B as
τ_φ∂_t φ = W^2 ∇^2φ - ω_dw^'(φ) + λ p'(φ) Δω(μ^A,
μ^B)
with the phase-field relaxation time τ_φ=1/(M_φ H) and the difference of the
grand potential densities
Δω = ω_0 - ω_1
= - ∑_α=A,Bμ^α( c^α,_0 -
c^α,_1 ) ,
where the thermodynamical coupling parameter is λ = K / H. We also define the scaled
phase-field mobility M̅_φ=M_φ H W^2=W^2/τ_φ, which has the dimension
of a diffusion coefficient.
§.§ Species diffusion and mixed formulation
In the absence of hydrodynamic flow, the redistribution of chemical species
occurs by diffusion. The concentration of each species is a locally conserved
quantity, hence
∂_t c^α(x⃗,t) = - ∇·j⃗^α(x⃗,t).
According to the thermodynamics of irreversible processes, the species currents write
j⃗^α(x⃗,t)=-∑_β=A,B M^αβ∇μ^β,
where M^αβ are the components of the atomic mobility matrix.
Furthermore, in the grand-canonical setting, where the natural variables are
the chemical potentials, the composition can be obtained from the grand-potential
functional by
c^α = - V_a δΩ/δμ^α = - V_a( p(1-φ)
∂ω_0/∂μ^α + p(φ)
∂ω_1/∂μ^α) .
In the original grand-potential method for a binary alloy <cit.>, the
concentration was eliminated from Eq. (<ref>) in favor of the chemical
potential; however, for multi-component alloys, this requires numerous matrix
inversions <cit.>. Therefore, we prefer
to use a mixed formulation, as in some previous works <cit.>: we keep
both the concentrations and the chemical potentials as dynamic variables.
This makes it possible to solve Eq. (<ref>) with the explicit algorithm of
one's choice to update c^α; μ^α is then updated by inverting
Eq. (<ref>). Remark that the inversion step has a simple
analytical expression with the quadratic free energies, namely
μ^α = c^α - p(1 - φ) c^α,_0 - p(φ) c^α,_1 .
Here, we will use a diagonal mobility matrix
and remove the dimension of energy by defining M^α=KM^αα;
M^α then has the dimension of a diffusion coefficient.
If the diffusion coefficient is different in the two phases, the mobility is
linearly interpolated, M^α(φ)=M_0^α(1-φ)+M_1^αφ for both components α=A,B.
Notice that in the bulk phases, Eq. (<ref>) reduces
to Fick's laws with fluxes M^α∇ c^α because of the quadratic
free energies. This will be helpful in preliminary tests since analytical
solutions are available in this case (see Sec. <ref>).
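To make the mixed formulation concrete, the following sketch (a schematic one-dimensional finite-difference illustration added here; it is not the lattice Boltzmann scheme used in the rest of the paper, the phase field is kept frozen, and all numerical values are arbitrary) alternates an explicit update of c^α with the analytical inversion giving μ^α.

    import numpy as np

    # 1D grid and a frozen phase-field profile
    nx, dx, dt = 200, 1.0e-2, 2.0e-5
    x = (np.arange(nx) - nx / 2) * dx
    W = 5 * dx
    phi = 0.5 * (1 + np.tanh(2 * x / W))

    p = lambda f: 3 * f**2 - 2 * f**3            # interpolation function p(phi)
    c0_eq = {"A": 0.3, "B": 0.3}                 # equilibrium compositions, phase 0
    c1_eq = {"A": 0.4, "B": 0.4}                 # equilibrium compositions, phase 1
    M = {"A": 1.0, "B": 0.8}                     # constant mobilities

    def laplacian(f):
        out = np.zeros_like(f)
        out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
        return out

    # start from bulk equilibrium, then supersaturate the matrix in component A
    c = {a: p(1 - phi) * c0_eq[a] + p(phi) * c1_eq[a] for a in "AB"}
    c["A"][phi < 0.5] += 0.02
    mu = {a: c[a] - p(1 - phi) * c0_eq[a] - p(phi) * c1_eq[a] for a in "AB"}

    for step in range(200):
        for a in "AB":
            c[a] += dt * M[a] * laplacian(mu[a])                      # explicit diffusion of c^alpha
            mu[a] = c[a] - p(1 - phi) * c0_eq[a] - p(phi) * c1_eq[a]  # inversion step for mu^alpha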
§.§ Equivalent sharp-interface model
The fact that phase field models only track an implicit, diffuse interface
greatly facilitates the numerical treatment of two-phase problems. However,
the behavior of the physical variables other than φ at the phase
interface is at first glance unspecified. This is in contrast to the
sharp-interface formulations of free-boundary problems, whose well-posedness
depends on the presence of explicit boundary condition at the interface
for values and fluxes of the relevant transport fields.
An important effort in the phase field literature is made to bridge both
formulations by extracting implicit interface conditions from the phase field
equations in the asymptotic limit of small W (“thin-interface limit"),
and to add corrections if necessary to obtain adjustable interface properties.
This is done using the formalism of matched asymptotic analysis in terms of an asymptotic
parameter ε expressing the scale separation between the interface thickness
and the physically relevant “outer” scales. As this process
is lengthy and has already been presented in much detail in other works,
we will only present its results here. The interested reader can refer to
<cit.> for a step-by-step detail of the calculations ; to
<cit.> and <cit.> for a more detailed
definition of the curvilinear coordinates and subsequent reexpression of
the differential operators ; and to <cit.> and then
<cit.> for a discussion of what can be considered
a “thin” interface.
The asymptotic analysis of the model without flow was realized
following the formalism of Almgren <cit.>. It was seen
that the case of a ternary system or the presence of a closure relation,
Eq. (<ref>), introduces little novelty in the calculations.
The result is a Gibbs-Thomson relation for the grand potential at the
interface,
Δω = -δκ - β V
with κ the local curvature of the interface, V its normal
velocity, and the associated coefficients are
δ = (2/3) W/λ
β = (2/3) W/(λM̅_φ) - (19/120)∑_α=A,B W(c_0^α,-c_1^α,)^2/M^α
The numerical factors are due to integrations across the diffuse interface
profile and depend on the choice of the double-well function and the interpolation
function p(φ). The asymptotics also yields the composition balance at the interface,
V[c^α]_-^+=-[M^α∂_nμ^α]_-^+
where [ · ]^+_- denotes the jump of a field across the interface,
and ∂_n the gradient projected on the interface normal.
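In the simulations below, λ is chosen so that the kinetic coefficient β vanishes. The following small computation (added for illustration; it uses the interface mobility, species mobilities and equilibrium compositions listed in the parameter tables of the next section) shows that this choice reproduces the value λ ≈ 155.95 quoted there; note that the interface width W drops out of the condition β = 0.

    # lambda* such that beta = (2/3) W/(lambda Mbar_phi)
    #                          - (19/120) sum_alpha W (c0_eq - c1_eq)^2 / M_alpha = 0
    Mbar_phi = 1.2                        # scaled phase-field mobility (table value)
    M = {"A": 1.0, "B": 0.8}              # species mobilities
    dc = {"A": 0.3 - 0.4, "B": 0.3 - 0.4} # c^alpha,eq_0 - c^alpha,eq_1

    S = sum(dc[a]**2 / M[a] for a in "AB")
    lambda_star = (2.0 / 3.0) / (Mbar_phi * (19.0 / 120.0) * S)
    print(lambda_star)                    # ~ 155.9454, the value used in the tables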
§.§ Flow coupling
To include hydrodynamic flow dynamics, we couple our phase-field model to the
incompressible Navier-Stokes equation,
∇·u =0
ρ_0[∂_tu+∇·(uu)] =-∇p_h+∇·[ρ_0ν(φ)(∇u+∇u^T)]+F_tot
∂_tφ+u·∇φ =W^2/τ_φ∇^2φ-1/τ_φω_dw^'(φ)+λ/τ_φp^'(φ)Δω(μ^A,μ^B)
∂_tc^α+u·∇c^α =∇·[M^α(φ)∇μ^α]
μ^α =c^α-p(1-φ)c_0^α,-p(φ)c_1^α,
where u⃗ is a phase-averaged velocity field <cit.>,
ρ_0 is the constant density, p_h is the hydrodynamic
pressure enforcing the condition ∇·u=0
(Lagrange multiplier of that condition), ν(φ) is the (phase-dependent) kinematic
viscosity, the component index α is A,B and the total force F_tot
is the sum of the gravity force F_g=φΔρg
in the Boussinesq approximation (with Δρ the density different between the
phases, phase 1 being denser than phase 0, and g⃗ the constant
acceleration vector due to gravity) and the surface tension
force F_σ defined by <cit.>:
F_σ=(3/2)σ W[ω_dw^'(φ)/W^2-∇^2φ]∇φ
For the equilibrium solution φ=φ_0, the term inside
the bracket is equivalent to κ|∇φ|
where κ is the curvature defined by κ=-∇·n.
The surface tension force can be expressed by F_σ=δ_dσκn,
i.e. the quantity σκn is spread over δ_d=(3/2)W|∇φ|^2
where W is the thickness of the diffuse interface.
In the phase-field equation Eq. (<ref>), for simplifying
the notations in the next Section, we define the following source
term:
𝒮_φ(φ,μ^A,μ^B)=-1/τ_φω_dw^'(φ)+λ/τ_φp^'(φ)Δω(μ^A,μ^B)
which contains the contributions of the double-well (first term) and
the thermodynamic imbalance Δω (second term)
responsible for the displacement of the interface. For u=0,
the interface is displaced by diffusion until the thermodynamic equilibrium is
reached, i.e. when the grand-potential densities of each phase are
equal ω_0(μ^A,μ^B)=ω_1(μ^A,μ^B).
§ LATTICE BOLTZMANN METHOD
The numerical resolution of the model is done through
the LBM, which consists of a discretization of the Boltzmann equation
in phase-space. This defines a regular lattice, and the time-explicit
resolution of the equation is assimilable to the distribution functions
undergoing a collision step on the nodes followed by a transport step
along the edges. According to the choice of collision operator (here
the simple BGK operator), of equilibrium functions, and of additional
source and force terms, the moments of the distribution function thus
solved can be shown to be solution of conservative time-evolution
PDEs such as the ones of the present model. This is coherent with
a discretized Chapman-Enskog expansion.
LBM is a very popular method to simulate various conservative PDEs and more particularly the Navier-Stokes equations. For those latter ones, more traditional methods use a prediction-correction algorithm that requires solving a time-consuming Poisson equation. The LBM benefits from the efficiency of the equivalent artificial compressibility algorithm. The collision operator is local and each discrete distribution function follows an identical evolution equation. The method is therefore very simple to implement and is parallel in nature by efficiently exploiting the shared memory. Moreover, coupled with a distributed memory parallelism (e.g. MPI), that algorithm is very efficient. The interested reader can refer to references such as <cit.> for more details on the LBM.
In this work, the standard D2Q9 and D3Q19 lattices are used for all 2D or 3D simulations,
respectively. Those lattices correspond to a discretization of the
velocity-space with 9 or 19 discrete velocities c_k
defined by c_k=(δ x/δ t)e_k
with δ x the space step, δ t the time step, and e_k
the direction vectors. Each set of directions is weighted by a scalar
value w_k. The directions e_k and their weights
are listed in Table <ref> for D2Q9 and Table <ref>
for D3Q19.
We also define a characteristic speed c_s defined by c_s=(1/√(3))δ x/δ t.
The distribution functions and the discrete Lattice
Boltzmann equations (LBE) with the BGK collision operator are detailed
next in Section <ref> for fluid flow, in section
<ref> for phase-field and in Section <ref>
for the composition equations. Many alternative collision operators exist in LBM literature. The most popular operators are the "Two-Relxation-Times" (TRT) and "Multiple-Relaxation-Times" (MRT) which define respectively two (for TRT) or more (for MRT) additional collision rates to tune for improving stability and accuracy. Here, we present a proof of concept of a thermodynamic model that is based on the grand-potential functional and coupled with fluid flow. The ratio of diffusion coefficients are almost 1 and the kinematic viscosity of each phase is identical. The BGK operator is sufficient in this work.
§.§ LBM for fluid flow
For simulating the fluid flow, the numerical scheme works on the distribution
function f_k(x,t) for which its evolution is given
by the lattice Boltzmann equation:
f_k(x+c_kδ t,t+δ t)=f_k(x,t)-1/(τ_f(φ)+0.5)[f_k(x,t)-f_k^eq(x,t)]+δ tℱ_k(x,t)
where the relaxation rate τ_f is related to the kinematic
viscosity by ν(φ)=τ_f(φ)c_s^2δ t and
the source term ℱ_k(x,t) contains the
total force term F_tot of Eq. (<ref>). Several
forcing schemes exist in the LBM literature, here we choose one of
the two most popular methods <cit.>:
ℱ_k(x,t) =Γ_k(x,t)(c_k-u)·F_tot
Γ_k(x,t) =w_k[1+c_k·u/c_s^2+(c_k·u)^2/2c_s^4-u^2/2c_s^2]
The moment of order zero of Eq. (<ref>) is ∑_kℱ_k=0
and its moment of first-order is equal to ∑_kℱ_kc_k=F_tot.
For recovering the incompressible Navier-Stokes equation with the
Chapman-Enskog expansion, the equilibrium distribution function f_k^eq
has to be defined by <cit.>:
f_k^eq(x,t)=w_k[p_h+ρ_0c_s^2(c_k·u/c_s^2+(c_k·u)^2/2c_s^4-u^2/2c_s^2)]-δ t/2ℱ_k
where the term ℱ_kδ t/2 has been subtracted to
capture second-order accuracy when the external force term is
taken into account in Eq. (<ref>). That trick is
equivalent to a change of variables in the distribution function to conserve
an explicit algorithm after the trapezoidal integration of the Boltzmann
equation. Because of that change of variables,
the term ∑_kℱ_kc_kδ t/2
must be added for updating the velocity for each time-step. After
collision and streaming, the hydrodynamic pressure is obtained by
the moment of order zero of f_k, and the velocity by the moment
of first order:
p_h =∑_kf_k
ρ_0u =1/c_s^2∑_kf_kc_k+δ t/2F_tot
When a high contrast of density exists between both phases, another
popular method is widely applied for defining the equilibrium distribution
function f_k^eq. In that case, the density becomes an interpolation
of the bulk density of each phase (e.g. ϱ(φ)), and
a dimensionless pressure p^⋆=p_h/ϱ(φ)c_s^2
is introduced in the equilibrium Eq. (<ref>) <cit.>. In that
case, two supplementary forces must be added in F_tot,
the pressure and viscosity forces <cit.>, in
order to recover the incompressible Navier-Stokes. Here, we assume
that the density is identical in both phases, so that the classical
equilibrium Eq. (<ref>) is sufficient for the simulations.
The surface tension force Eq. (<ref>) requires
computing a gradient term and a laplacian term. The gradient is evaluated
by the directional derivatives:
e_k·∇φ|_x=1/(2δ x)[φ(x+e_kδ x)-φ(x-e_kδ x)]
where the number of directional derivatives is equal to the number
of moving directions e_k on the lattice i.e. N_pop.
The gradient is obtained by:
∇φ|_x=3∑_k=0^N_popw_ke_k(e_k·∇φ|_x).
For the calculation of the ∇^2φ, all
directions of propagation are taken into account by
(e_k·∇)^2φ|_𝐱=1/δ x^2[φ(x+e_kδ x)-2φ(x)+φ(x-e_kδ x)]
which are used to compute the laplacian:
∇^2φ|_x=3∑_k≠0w_k(e_k·∇)^2φ|_x.
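These directional-derivative stencils are simple to implement. A compact NumPy version is sketched below (added for clarity, with periodic boundaries and the D2Q9 directions and weights; it is an illustration, not an excerpt of LBM_Saclay).

    import numpy as np

    # D2Q9 directions e_k and weights w_k
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def neighbor(f, ek):
        """f evaluated at x + e_k * dx on a periodic grid."""
        return np.roll(f, shift=(-ek[0], -ek[1]), axis=(0, 1))

    def grad(phi, dx):
        g = np.zeros((2,) + phi.shape)
        for ek, wk in zip(e[1:], w[1:]):
            ddir = (neighbor(phi, ek) - neighbor(phi, -ek)) / (2 * dx)   # e_k . grad(phi)
            g += 3 * wk * ek[:, None, None] * ddir
        return g

    def laplacian(phi, dx):
        lap = np.zeros_like(phi)
        for ek, wk in zip(e[1:], w[1:]):
            lap += 3 * wk * (neighbor(phi, ek) - 2 * phi + neighbor(phi, -ek)) / dx**2
        return lap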
§.§ LBM for phase-field
For the phase-field equation, we introduce a new distribution function
g_k(x,t) evolving with the LBE:
g_k(x+c_kδ t,t+δ t)=g_k(x,t)-1/(τ_g+0.5)[g_k(x,t)-g_k^eq(x,t)]+δ t𝒢_k(x,t)
Eq. (<ref>) is an Advection-Diffusion type Equation
(ADE). The equilibrium distribution function g_k^eq is designed
such as its moments of order zero, one, and two are respectively equal
to φ, uφ and Iφ
where I is the identity tensor
of second order:
g_k^eq(x,t)=w_kφ[1+c_k·u/c_s^2]-δ t/2𝒢_k
The first term in the right-hand side of Eq. (<ref>) is the
classical term to recover the ADE after the Chapman-Enskog procedure,
and the source term 𝒢_k is simply defined such as its
moment of order zero is equal to 𝒮_φ(φ,μ^A,μ^B):
𝒢_k=w_k𝒮_φ(φ,μ^A,μ^B)
where 𝒮_φ is defined by Eq. (<ref>).
The scaled interface mobility M̅_φ=W^2/τ_φ is a constant which is related
to the relaxation rate τ_g by M̅_φ=τ_gc_s^2δ t.
Finally, after collision and streaming, the phase-field φ
is updated at each time-step by:
φ=∑_kg_k+δ t/2𝒮_φ
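A minimal NumPy sketch of one collision–streaming step for g_k is given below (added for illustration; u = 0, periodic boundaries, lattice units δx = δt = 1, and the source 𝒮_φ is treated as a prescribed field, so the coupling to μ^A and μ^B is not shown).

    import numpy as np

    # D2Q9 lattice in lattice units (dx = dt = 1)
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def stream(fk, ek):
        """Move a post-collision population one cell along e_k (periodic)."""
        return np.roll(fk, shift=(ek[0], ek[1]), axis=(0, 1))

    def step_phase_field(g, phi, S_phi, tau_g, dt=1.0):
        """One BGK collision + streaming step of the phase-field LBE (u = 0)."""
        geq = w[:, None, None] * phi - 0.5 * dt * w[:, None, None] * S_phi   # g_k^eq
        Gk = w[:, None, None] * S_phi                                        # source term G_k
        g_post = g - (g - geq) / (tau_g + 0.5) + dt * Gk                     # collision
        g_new = np.array([stream(g_post[k], e[k]) for k in range(9)])        # streaming
        phi_new = g_new.sum(axis=0) + 0.5 * dt * S_phi                       # zeroth moment + correction
        return g_new, phi_new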
§.§ LBM for composition equations
For the treatment of the composition equations, we introduce two distribution functions
h_k^α(x,t) (for α=A,B) which evolve
with the LBE:
h_k^α(x+c_kδ t,t+δ t)=h_k^α(x,t)-1/(τ_h^α(φ)+0.5)[h_k^α(x,t)-h_k^eq,α(x,t)]
Eqs. (<ref>) look like advection-diffusion equations,
but the equilibrium functions h_k^eq,α must be defined
such that its moment of order two is equal to Iμ^α
where μ^α is related to the composition c^α
(moment of order zero) by Eq. (<ref>). In that relation,
c_0^α, and c_1^α, are two scalar input values which
are the two thermodynamic equilibrium compositions of each phase 0
and 1. Because the moment of order 2 differs from the moment of order
0, the equilibrium distribution function must be slightly modified
compared to the classical equilibrium for an ADE by:
h_k^eq,α(x,t)=
c^α-(1-w_0)μ^α for k=0
w_kμ^α+w_kc^αc_k·u/c_s^2 for k≠0
In that equilibrium distribution, the moment of order zero is given
by the first term of the first line and the moment of order one is
given by the second term of the second line. That method is extensively
used for simulating the advective Cahn-Hilliard equation (<cit.>).
The mobility coefficients M^α(φ) are related to the relaxation rate τ_h^α(φ) by M^α(φ)=τ_h^α(φ)c_s^2δ t.
After collision and streaming, the composition is obtained by the
moment of order zero of h_k^α:
c^α=∑_kh_k^α
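The construction of h_k^eq,α can be checked directly: its zeroth-order moment returns the composition and its first-order moment returns c^α u. A short verification sketch (added for illustration, in lattice units where c_k = e_k and c_s² = 1/3; the local values of c^α, μ^α and u are arbitrary):

    import numpy as np

    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    cs2 = 1.0 / 3.0

    c_alpha, mu_alpha = 0.35, 0.02        # arbitrary local composition and potential
    u = np.array([0.01, -0.005])          # local velocity

    heq = np.empty(9)
    heq[0] = c_alpha - (1.0 - w[0]) * mu_alpha
    for k in range(1, 9):
        heq[k] = w[k] * mu_alpha + w[k] * c_alpha * (e[k] @ u) / cs2

    print(heq.sum())       # zeroth moment -> c_alpha
    print(e.T @ heq)       # first moment  -> c_alpha * u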
§ NUMERICAL SIMULATIONS
The methods of the previous section have been implemented in a C++ code: LBM_Saclay <cit.>. Its main feature is its multi-architecture portability by simple modifications
of compilation options in the makefile. This is possible thanks to the Kokkos library <cit.>. Thus the same code can be run either on multi-CPU partition or on multi-GPU partition of a supercomputer. Time performances of LBM_Saclay have been compared between graphic cards and standard CPU in our previous paper <cit.>. Because of the efficiency of the former ones, the simulations of this work have run on the multi-GPU partitions of the supercomputers Jean-Zay (IDRIS, France) and Topaze (CCRT, France).
§.§ Ternary diffusion couple
We start by presenting a quantitative validation of the model and its
implicit Gibbs-Thomson condition for a plane interface without
flow. This demonstrates that the phase field model is able
to reproduce the known solution of a sharp-interface problem, the
symmetrical ternary diffusion couple. Phase field studies of this
problem already exist in the literature, and we use this as a test for our model <cit.>. An interface
splits the infinite one-dimensional space in two domains at x_I(t), with
phase 0 on its left and phase 1 on its right. The interface is displaced
at velocity ẋ_I(t) by the interdiffusion of the components. This
problem can be stated as the one-dimensional free boundary problem
∂_tc^α =M^α∂_xx^2μ^α, x<x_I(t) or x>x_I(t)
μ^α =μ_int^α, x=x_I(t)
ẋ_I(c^α|_x_I^--c^α|_x_I^+) =-M^α(∂_xc^α|_x_I^--∂_xc^α|_x_I^+) x=x_I(t)
μ^α =μ_-∞^α x=-∞
μ^α =μ_+∞^α x=+∞
with the initial conditions
x_I(0)=0; μ^α=μ_-∞^α (x<0); μ^α=μ_+∞^α (x>0).
The problem corresponds to evaluating our phase field model in 1D
with φ=0 or φ=1 and taking the interface conditions
(<ref>), (<ref>) with β=0 and κ=0.
In the sharp-interface view, the relation between μ^α and c^α
is linear, allowing the problem to be rewritten into the classical
Stefan problem in terms of either c^α or μ^α
only. It is then known <cit.> that a self-similar
solution exists with
x_I(t)=ξ√(t)
and
μ^α(x,t)=μ_-∞^α+(μ_int^α-μ_-∞^α)erfc(-x/2√(M^αt))/erfc(-ξ/2√(M^α)) -∞<x<x_I(t)
μ_+∞^α+(μ_int^α-μ_+∞^α)erfc(x/2√(M^αt))/erfc(ξ/2√(M^α)) x_I(t)<x<+∞
The statement and solution of the problem in terms of the
composition fields is analogous but with a discontinuity at the interface,
in coherence with chemical equilibrium. The couple of interface values
(μ_int^A,μ_int^B) are a solution
of the system, meaning
ω_0(μ_int^A,μ_int^B)-ω_1(μ_int^A,μ_int^B)=0.
A ternary system has access to a continuous set of such equilibrium
couples (phase diagram tie-lines). The additional dynamical constraints
select one particular equilibrium in this continuum: the coefficient ξ is determined
as the solution of a transcendental equation that couples the thermodynamic
equilibrium and dynamical parameters, see Eq. (22) in <cit.>.
Domain:
  [-L_x,L_x] = [-1,1] [L];  [-L_y,L_y] = [-0.002,0.002] [L];  N_x nodes = 3000;  N_y nodes = 6;
  δx = 1/1500 [L];  δt = 1.481481×10^-9 [T]
Phase-field parameters:
  W = 1.2×10^-3 [L];  λ = λ^⋆ = 155.94541910 ≅ 155.95 [–];  M̅_φ = 1.2 [L]^2/[T]
Transport parameters:
  (c_0^A,,c_0^B,) = (0.3,0.3) [–];  (c_1^A,,c_1^B,) = (0.4,0.4) [–];  (M^A,M^B) = (1,0.8) [L]^2/[T];
  (c^A,c^B)_-∞ = (0.4,0.175) [–];  (c^A,c^B)_+∞ = (0.225,0.6) [–]
Parameters for the simulation
of symmetrical diffusion couple. λ^⋆ is the value that
cancels the kinetic coefficient β (Eq. (<ref>)).
In all simulations, that coefficient will be simply noted λ
and approximated by 155.95. The values of (c^A,c^B)_-∞
and (c^A,c^B)_+∞ are respectively equivalent to (μ^A,μ^B)_-∞=(0.1,-0.125)
and (μ^A,μ^B)_+∞=(-0.175,0.2).
The values of collision rates corresponding respectively to M̅_φ, M^A and M^B are: τ_g=0.6333, τ_h^A=0.6111 and τ_h^B=0.5888.
To reproduce this problem numerically, the phase-field model is solved
on a thin 2D domain (3000 × 6 LBM lattice nodes). The phase
field is initialized with the equilibrium hyperbolic tangent profile.
The composition fields are initialized as step functions. The parameters used are
referenced in table <ref> and they produce an analytical
solution with ξ=-0.269824. Figure <ref>
compares the interface velocity and the concentration fields simulated with the ones
expected from the analytical solution. Both are in excellent agreement, numerically
confirming the interface condition (<ref>)–(<ref>)
reconstructed by the phase field model. Note that we have chosen the system's half-length
L and the diffusion time t_D=min_α(L^2/M^α) as
units in these comparisons.
§.§ Simulations of Ostwald ripening
Next, the phase-field model is exploited to simulate the Ostwald ripening
of a set of droplets (phase 1) inside a continuous matrix (phase 0).
As the Allen-Cahn model cannot describe the initial regime of phase separation
from homogeneous mixtures, the initial condition starts from already developed droplets,
as detailed below. In this problem, the characteristic
length scale is the average spacing between droplets. However, this
length is not static and increases as the growth proceeds and the
droplet number decreases. In the numerical simulations, the distance between
droplets is bounded by the domain size. For this reason, we once again
set our numerical units of length and time as the half-length of the
domain and the diffusion time, respectively, ℓ=L and
t_D=min_α(ℓ^2/M^α).
§.§.§ Initial condition
As already mentioned above, to study the growth kinetics,
we must start from pre-existing droplets. The initial
condition must be carefully constructed to satisfy the condition for
growth, namely that the global composition inventory lies in the miscibility
gap and that the phase fraction be sufficiently
high to measure a statistically relevant average. In addition, before
the growth, there will be a transient regime during which the droplets
reach local equilibrium with the surrounding matrix (analogous to
a diffusion couple). We want the droplets to always tend to grow during
this regime, and not to shrink and risk disappearance.
We therefore choose to parametrize the initialization of the composition
c(x,0) and the phase-field φ(x,0)
with three scalar values: the initial global composition of the system
c_g=(c_g^A,c_g^B) and the initial phase
fraction Φ of droplets. The phase-field φ(x,0)
can be easily initialized by creating spherical droplets with randomly
generated positions and radii. For the compositions, we could then
initialize them inside and outside those interfaces using the reference
tie-line’s end-points. However, specifying Φ,
c_g and the tie-line would overspecify the initialization
because of the lever rule. Besides, if the initial composition of
the matrix is at equilibrium c_0^eq, the droplets
can disappear during the transient regime. It is necessary to design
an initial condition on c^A(x,0) and c^B(x,0)
such that the droplets grow from the outset of the time evolution.
When the thermodynamic equilibrium is reached, because of the conservation
rule, the global composition c_g are related to
the equilibrium compositions and the equilibrium phase fraction Φ^eq
by:
c_g=(1-Φ^eq)c_0^eq+Φ^eqc_1^eq
That relation can be inverted and yields:
Φ^eq=|c_0^eq-c_g|/|c_0^eq-c_1^eq|
One way to avoid droplet disappearance is to initialize the
system with a phase fraction of droplets Φ which is lower than
the equilibrium Φ^eq. In that case, we need to initialize
the composition in the matrix and/or in the droplets out of equilibrium
(with an additional degree of freedom due to the ternary case). Here,
the droplet compositions are considered at equilibrium c_1^eq
and we choose to supersaturate the matrix by offsetting its composition
along the tie-line, as
c_0^ini=c_0^eq-δ(c_0^eq-c_1^eq)
where the coefficient δ is defined by:
δ = (Φ^eq-Φ)/(1-Φ)
where Φ^eq is given by Eq. (<ref>). When δ>0
(i.e. when Φ<Φ^eq), the matrix composition is offset
inside the miscibility gap. This ensures that the initial transient leading
towards local equilibrium always corresponds to an increase of the global
phase fraction of droplets Φ. Finally, the initial conditions of compositions write
c(x,0)=
(1-δ)c_0^eq+δc_1^eq if φ(x)<1/2
c_1^eq if φ(x)≥1/2
which represents an initial supersaturation in the matrix, whereas
the droplet composition is supposed to be at equilibrium.
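With the values used later for the 2D run (c_g = (0.31, 0.31), Φ = 0.08 and the tie-line end points (0.3, 0.3) and (0.4, 0.4)), these relations are easy to evaluate; the sketch below (added for illustration) computes Φ^eq, δ and the supersaturated matrix composition, and checks that the lever rule recovers the prescribed composition inventory.

    import numpy as np

    c0_eq = np.array([0.3, 0.3])      # matrix (phase 0) equilibrium composition
    c1_eq = np.array([0.4, 0.4])      # droplet (phase 1) equilibrium composition
    c_g   = np.array([0.31, 0.31])    # global composition inventory
    Phi   = 0.08                      # initial droplet phase fraction

    # equilibrium phase fraction from the lever rule
    Phi_eq = np.linalg.norm(c0_eq - c_g) / np.linalg.norm(c0_eq - c1_eq)   # = 0.1
    # offset putting the matrix composition inside the miscibility gap
    delta = (Phi_eq - Phi) / (1.0 - Phi)                                   # ~ 0.0217
    c0_ini = (1.0 - delta) * c0_eq + delta * c1_eq                         # ~ (0.302, 0.302)

    # supersaturated matrix plus equilibrium droplets restore the inventory c_g
    print((1.0 - Phi) * c0_ini + Phi * c1_eq)                              # -> [0.31 0.31]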
The details of the initialization routine we
used are given next. For a specified target phase fraction s,
we initialize the droplet geometry according to the procedure illustrated in
Figure <ref> which results in lists of positions and
volumes for each droplet. Then, to initialize φ(x) on each
lattice node, search for the closest droplet where we define the hyperbolic
tangent profile using its radius and center. Next, the composition fields
are initialized on each lattice node as Eq. (<ref>) for a given composition
inventory c_g, followed by the chemical potential fields
at equilibrium everywhere, μ^α(x)=0. Finally,
for simulations with flow, the flow field is initialized at rest and the pressure
field at zero.
§.§.§ Geometry measurements
To measure the droplet count N, we use an algorithm for the measure
of the Euler characteristic, to be understood here as the number of
connected regions of φ=1. The algorithm is adapted from reference
<cit.> and it only requires a single pass over the
lattice. The average radius ⟨ R⟩ is measured
using the integral estimate
∫4/Wφ(1-φ)d^2x ≈∑_i2π∫_0^+∞4/Wφ_0(1-φ_0)rdr
≈2π W∑_i[R_i/W+𝒪(e^-4R_i/W)]
≈2π N⟨ R⟩
where the index i stands for the droplet number. Indeed, the integrand is a function that
has a sharp peak in the interfaces, and the integral approximately gives the total length
of interface present in the system. Assuming the droplets are spherical and sufficiently far
from each other, the integrals can be calculated exactly and a Taylor
expansion gives the desired result, plus an error terms in 𝒪(e^-4R_i/W);
the numerical calculation of this integral is easy to implement
and very fast (linear in lattice points and no stencil involved).
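The estimate can be illustrated on a synthetic configuration of non-overlapping tanh-profile droplets (a small test added here; positions, radii and grid are arbitrary):

    import numpy as np

    L, n = 1.0, 512
    dx = 2 * L / n
    x = np.linspace(-L, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    W = 4 * dx

    # a few well separated droplets (centers and radii chosen arbitrarily)
    droplets = [((-0.5, -0.4), 0.12), ((0.3, 0.2), 0.20), ((0.45, -0.55), 0.15)]
    phi = np.zeros((n, n))
    for (cx, cy), R in droplets:
        r = np.sqrt((X - cx)**2 + (Y - cy)**2)
        phi = np.maximum(phi, 0.5 * (1 + np.tanh(2 * (R - r) / W)))

    N = len(droplets)
    integral = np.sum(4.0 / W * phi * (1.0 - phi)) * dx**2    # ~ total interface length
    R_mean = integral / (2 * np.pi * N)
    print(R_mean, np.mean([R for _, R in droplets]))          # both close to 0.157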
§.§.§ Ripening without flow
We first consider the ripening of a population of two-dimensional droplets without
flow to test our initialization procedure and to reproduce the known growth law
for ⟨ R⟩. The domain ranges between [-1,1]×[-1,1] and the boundary conditions are periodic. The parameters of the simulation are listed in Table <ref>.
The initialization produced 2737 droplets. Figure <ref>
shows the phase field and chemical potential fields in a small part of
the domain between [0,0.3]×[0,0.45] containing only a small fraction of the droplet count. The phase 1 domains follow the expected evolution: the smaller
droplets shrink and vanish while the larger ones grow. The chemical
potentials are initialized at zero (equilibrium). The Gibbs-Thomson
condition creates differences between droplets and their surrounding
matrix and neighbours of different size. As the grains grow and rarify,
the chemical potentials gradually homogeneize. Figure <ref>
presents the evolution of the mean grain radius compared to a power
law with exponent 1/3. With this, we confirm the adequacy of the
model to reproduce the growth kinetics.
Domain (D2Q9):
  [-L_x,L_x] = [-1,1] [L];  [-L_y,L_y] = [-1,1] [L];  N_x nodes = 4096;  N_y nodes = 4096;
  δx = 4.882×10^-4 [L];  δt = 2.5×10^-8 [T]
Phase-field parameters:
  W = 3δx = 1.464×10^-3 [L];  λ = 155.95 [–];  M̅_φ = 1.2 [L]^2/[T]
Droplet initialization:
  (c_g^A,c_g^B) = (0.31,0.31) [–];  Φ = 0.08 [–];  S_avg±ΔS = (1.02±0.957)×10^-4 [L]^2
Transport parameters:
  (c_0^A,,c_0^B,) = (0.3,0.3) [–];  (c_1^A,,c_1^B,) = (0.4,0.4) [–];
  (M_0^AA,M_0^BB) = (1,0.8) [L]^2/[T];  (M_1^AA,M_1^BB) = (1,0.8) [L]^2/[T]
Parameters for the simulation
of 2D Ostwald ripening. The initial volumes of the grains are uniformly
sampled in the range [S_avg-Δ S,S_avg+Δ S]. The values of collision rates corresponding respectively to M̅_φ, M^AA and M^BB are: τ_g=0.8776, τ_h^A=0.8146 and τ_h^B=0.7517.
§.§.§ Ripening and sedimentation
Next, we look at a three-dimensional ripening process including the effect of buoyancy.
Table <ref> lists the
parameters. The gravity acts along the x-axis and the bounce-back
LBM algorithm is used on the top and bottom walls, enforcing u=0
and ∇φ=∇c=∇μ=0.
The lateral boundaries are periodic.
Domain (D3Q19):
  [-L_x,L_x] = [-16,16] [L];  [-L_y,L_y] = [-4,4] [L];  [-L_z,L_z] = [-4,4] [L];
  N_x nodes = 2048;  N_y nodes = 512;  N_z nodes = 512;  δx = 1.5625×10^-2 [L];  δt = 4.2×10^-5 [T]
Phase-field parameters:
  W = 4δx = 6.25×10^-2 [L];  λ = 155.95 [–];  M̅_φ = 1.2 [L]^2/[T]
Droplet initialization:
  (c_g^A,c_g^B) = (0.32,0.32) [–];  Φ = 0.08 [–];  V_avg±ΔV = 0.08±0.072 [L]^3
Flow parameters:
  ρ_0 = Δρ = 1 [M]/[L]^3;  ν = 1 [L]^2/[T];  σ = 10^-3 [MLT^-2]/[L];  Δρ g_x = 1 [M]/[LT]^2
Transport parameters:
  (c_0^A,,c_0^B,) = (0.3,0.3) [–];  (c_1^A,,c_1^B,) = (0.4,0.4) [–];
  (M_0^AA,M_0^BB) = (1,0.8) [L]^2/[T];  (M_1^AA,M_1^BB) = (1,0.8) [L]^2/[T]
Parameters for the simulation
of 3D Ostwald ripening with a gravity-driven flow. The initial grains
volumes are uniformly sampled in the range [V_avg-Δ V,V_avg+Δ V]. The values of collision rates corresponding respectively to ν, M̅_φ, M^AA and M^BB are: τ_f=0.6111, τ_g=0.6333, τ_h^A=0.6111 and τ_h^B=0.5888.
Figure <ref> shows successive snapshot pictures of
the droplet geometry. The initial condition (Figure <ref>)
is composed of 2035 droplets. At the beginning, diffusive Ostwald ripening
takes place, and after some time (Figure <ref>), the smallest droplets
have vanished and the remaining ones start to sediment. The droplets then
accelerate (Figure <ref>), and some have already reached the bottom
wall. A droplet can be seen hanging from the top wall due to capillarity.
Many droplets lose their spherical shape due to coalescence events. Much
later (Figure <ref>), the top of the domain contains almost
no droplets anymore due to the combined effect of ripening, sedimentation
and coalescence. A large drop of the dense phase forms. This huge mass
continues to sediment, but the surrounding smaller droplets rise due to a recirculation
of the flow field. At the final time of the simulation (Figure <ref>),
only a few droplets are left at the bottom. They are still subject
to evaporation due to the very large drop that formed in the center.
The flow recirculation is clearly seen at the final time step on Figure
<ref>. At earlier times, small droplets may also
be seen rising around the large domain of phase 1 before they quickly
evaporate.
Figure <ref>(a) tracks the number of droplets in the simulation as a
function of time. Around t=4t_D, the very large drop
forms in the center and its size becomes comparable to the system size.
Hence, this single drop totally dominates the droplet distribution: the remaining droplets
are few in number, and are very small in comparison (as seen in Figure <ref>).
Because of the small system size and since all small droplets
evaporate in favor of the largest drop, the growth dynamics rapidly ceases to obey a power law.
The development of the sedimentation dynamics is illustrated in Figure <ref>(b),
which displays the maximum fluid velocity in the domain as a function of time. In the
beginning, this maximum velocity increases monotonically: the average droplet size
increases with time, which leads to higher sedimentation velocities. After the formation
of the large drop at t=4t_D, the velocity saturates and oscillates. In this regime,
the finite system size limits the sedimentation velocity of the central
drop, and the highest velocities occur in the upward recirculation.
§ CONCLUSION AND OUTLOOK
We have constructed and tested a model for the simulation of late-stage coarsening in
phase-separating liquids, taking into account buoyancy effects which lead to droplet
sedimentation. Two-phase coexistence and the motion of interfaces are described by
a grand-canonical phase-field model, which can be adapted to arbitrary thermodynamic
properties as described by a free-energy model. The coupled model is simulated
numerically by the lattice-Boltzmann method, which is used to integrate in time
all equations of the model. This makes the simulation code monolithic despite its
multiphysics nature. Therefore, it is particularly well adapted for high-performance
simulations on GPU architectures.
We have demonstrated the capabilities of our approach
by several test cases. In Section <ref> we started by verifying that this model reconstructs the correct coupling between interface kinetics and thermodynamics of a three-components system when compared with the equivalent sharp-interface problem. Next, simulations of Ostwald ripening have been carried out in two dimensions in a large system, and we have verified that the mean droplet radius follows the expected power law with time. Finally, one large-scale three-dimensional simulation of simultaneous coarsening and droplet sedimentation has been performed. It has demonstrated that our model can be
used to study the interplay between coarsening and fluid flow created by droplet sedimentation.
With respect to a multi-component Cahn-Hilliard model, our approach has at least
two advantages: as already stated, it can be adapted to simulate any desired substance
for which free-energy data are available, without restrictions on the choice of
the surface tension. Furthermore, the equations of the phase-field model are second
order in space, rather than the fourth order Cahn-Hilliard equation, which makes its
numerical integration more efficient. The price to pay is that the initial stages of
phase separation cannot be described by this model, which makes it necessary to construct
an initial state ad hoc by making hypotheses on the initial droplet size distribution.
However, since the memory of the initial state is rapidly lost during the coarsening of
a disordered droplet distribution, this should not be a severe problem.
We have written down a model for a ternary mixture here, but there is no difficulty
in generalizing the approach to mixtures with more than three components, as long
as the necessary thermodynamic and kinetic data (free energies and mobilities)
are available. An interesting question that could be studied in the future is the
interplay between coarsening and sedimentation in the macroscopic redistribution
of components: since the sedimentation of the droplets leads to an accumulation
of the denser phase at the bottom of the system, and the sedimentation velocity
is linked to the droplet size, there is a non-trivial coupling between the two
phenomena.
§ ACKNOWLEDGEMENTS
This work was granted access to the HPC resources of IDRIS (super-computer Jean-Zay, partition V100) and CCRT (Topaze, partition A100). Alain Cartalade wishes to thank the SIVIT project involving Orano and EDF for the financial support.
10
gunton1983phase
JD Gunton, M San Miguel, and PS Sahni.
The dynamics of first-order phase transitions.
In C. Domb and J. L. Lebowitz, editors, Phase Transitions and Critical Phenomena, Vol. 8, pages 267–466, New York, 1983. Academic Press.
siggia_1979
Eric D. Siggia.
Late stages of spinodal decomposition in binary mixtures.
Phys. Rev. A, 20:595–605, Aug 1979.
Bray94
A.J. Bray.
Theory of phase-ordering kinetics.
Advances in Physics, 43(3):357–459, 1994.
Bray02
A. J. Bray.
Theory of phase-ordering kinetics.
Advances in Physics, 51(2):481–587, 2002.
gin2017radionuclides
Stephane Gin, Patrick Jollivet, Magaly Tribet, Sylvain Peuget, and Sophie
Schuller.
Radionuclides containment in nuclear glasses: an overview.
Radiochimica Acta, 105(11):927–959, 2017.
Schuller_etal_JACS2011
Sophie Schuller, Olivier Pinet, and Bruno Penelon.
Liquid–liquid phase separation process in borosilicate liquids
enriched in molybdenum and phosphorus oxides.
Journal of the American Ceramic Society, 94(2):447–454, 2011.
Pinet_etal_JNM2019
O. Pinet, J.-F. Hollebecque, I. Hugon, V. Debono, L. Campayo, C. Vallat, and
V. Lemaitre.
Glass ceramic for the vitrification of high level waste with a high
molybdenum content.
Journal of Nuclear Materials, 519:121–127, 2019.
Cahn58
J. W. Cahn and J. E. Hilliard.
Free energy of a non-uniform system. 1. interfacial free energy.
J. Chem. Phys., 28:258–267, 1958.
brackbill_kothe_zemach_1992
J.U Brackbill, D.B Kothe, and C Zemach.
A continuum method for modeling surface tension.
Journal of Computational Physics, 100(2):335–354, 1992.
jacqmin_1999
David Jacqmin.
Calculation of two-phase navier–stokes flows using phase-field
modeling.
Journal of Computational Physics, 155(1):96–127, 1999.
henry_tegze_2018
Hervé Henry and György Tegze.
Self-similarity and coarsening rate of a convecting bicontinuous
phase separating mixture: Effect of the viscosity contrast.
Phys. Rev. Fluids, 3:074306, Jul 2018.
henry_tegze_2019
Hervé Henry and György Tegze.
Kinetics of coarsening have dramatic effects on the microstructure:
Self-similarity breakdown induced by viscosity contrast.
Phys. Rev. E, 100:013116, Jul 2019.
semprebon_ciro_kruger_2016
Ciro Semprebon, Timm Krüger, and Halim Kusumaatmaja.
Ternary free-energy lattice boltzmann model with tunable surface
tensions and contact angles.
Phys. Rev. E, 93:033305, Mar 2016.
rasolofomanana_et_al_2022
M.A. Rasolofomanana, C. Cardon, M. Plapp, T. Philippe, H. Henry, and R. Le
Tellier.
Diffuse-interface modelling of multicomponent diffusion and phase
separation in the u-o-zr ternary system.
Computational Materials Science, 214:111650, 2022.
ProvatasElder
N. Provatas and K. Elder.
Phase-field methods in materials science and engineering.
Wiley-VCH, Weinheim, 2010.
Steinbach09
I. Steinbach.
Phase-field models in materials science.
Model. Simul. Mater. Sci. Eng., 17(7):073001, 2009.
PlappHandbook
M. Plapp.
Phase-field models.
In T. Nishinaga, editor, The Handbook of Crystal Growth, 2nd
edition, Vol. 1B, pages 631–668, Amsterdam, 2015. Elsevier.
karma_rappel_1998
Alain Karma and Wouter-Jan Rappel.
Quantitative phase-field modeling of dendritic growth in two and
three dimensions.
Phys. Rev. E, 57:4323–4349, Apr 1998.
almgren_1999
Robert F. Almgren.
Second-order phase field asymptotics for unequal conductivities.
SIAM Journal on Applied Mathematics, 59(6):2086–2107, 1999.
echebarria_et_al_2004
Blas Echebarria, Roger Folch, Alain Karma, and Mathis Plapp.
Quantitative phase-field model of alloy solidification.
Phys. Rev. E, 70:061604, Dec 2004.
badillo_2012
Arnoldo Badillo.
Quantitative phase-field modeling for boiling phenomena.
Phys. Rev. E, 86:041603, Oct 2012.
kim_kim_suzuki_1999
Seong Gyoon Kim, Won Tae Kim, and Toshio Suzuki.
Phase-field model for binary alloys.
Phys. Rev. E, 60:7186–7197, Dec 1999.
plapp_2011_2
Mathis Plapp.
Unified derivation of phase-field models for alloy solidification
from a grand-potential functional.
Phys. Rev. E, 84:031601, Sep 2011.
Choudhury12
A. Choudhury and B. Nestler.
Grand-potential formulation for multicomponent phase transformations
combined with thin-interface asymptotics of the double-obstacle potential.
Phys. Rev. E, 85:021602, 2012.
Plapp16
M. Plapp.
Phase-field modelling of solidification microstructures.
Journal of the Indian Institute of Science, 96(3):179–198,
2016.
the_lattice_boltzmann_method
Timm Krüger, Halim Kusumaatmaja, Alexandr Kuzmin, Orest Shardt, Goncalo
Silva, and Erlend Magnus Viggen.
The Lattice Boltzmann Method: Principles and Practice.
Springer International Publishing, Cham, 2017.
calphad
Larry Kaufman and Henry L. Bernstein.
Computer calculation of phase diagrams with special reference to
refractory metals.
New York : Academic Press, 1970.
cartalade2016
Alain Cartalade, Amina Younsi, and Mathis Plapp.
Lattice boltzmann simulations of 3d crystal growth: Numerical schemes
for a phase-field model with anti-trapping current.
Computers & Mathematics with Applications, 71(9):1784–1798,
2016.
verdier_kestener_cartalade_2020
Werner Verdier, Pierre Kestener, and Alain Cartalade.
Performance portability of lattice boltzmann methods for two-phase
flows with phase change.
Computer Methods in Applied Mechanics and Engineering,
370:113266, 2020.
LangerBegRohu
J. S. Langer.
An introduction to the kinetics of first-order phase transitions.
In C. Godrèche, editor, Solids far from equilibrium, Edition
Aléa Saclay, pages 297–363, Cambridge, UK, 1991. Cambridge University
Press.
bayle_2020
R. Bayle, O. Cueto, S. Blonkowski, T. Philippe, H. Henry, and M. Plapp.
Phase-field modeling of the non-congruent crystallization of a
ternary Ge–Sb–Te alloy for phase-change memory applications.
Journal of Applied Physics, 128(18):185101, November 2020.
bayle_2020_phd
Raphael Bayle.
Simulation des mécanismes de changement de phase dans des
mémoires PCM avec la méthode multi-champ de phase.
PhD thesis, 2020.
Thèse de doctorat dirigée par Plapp, Mathis, Institut
polytechnique de Paris 2020.
folch_et_al_1999
R. Folch, J. Casademunt, A. Hernández-Machado, and L. Ramírez-Piscina.
Phase-field model for hele-shaw flows with arbitrary viscosity
contrast. ii. numerical study.
Phys. Rev. E, 60:1734–1740, Aug 1999.
sun_beckermann_2007
Y. Sun and C. Beckermann.
Sharp interface tracking using the phase-field equation.
Journal of Computational Physics, 220(2):626–653, jan 2007.
He-Shan-Doolen_PRE-Rapid1998
Xiaoyi He, Xiaowen Shan, and Gary D. Doolen.
Discrete boltzmann equation model for nonideal gases.
Phys. Rev. E, 57:R13–R16, Jan 1998.
He-Luo_Incompressible_JSP1997
X. He and L.-S. Luo.
Lattice boltzmann model for the incompressible navier-stokes
equation.
Journal of Statistical Physics, 88(3/4):pp. 927–944, 1997.
zu_he_2013
Y. Q. Zu and S. He.
Phase-field-based lattice boltzmann model for incompressible binary
fluid systems with density and viscosity contrasts.
Phys. Rev. E, 87:043301, Apr 2013.
Fakhari_etal_PRE2017
Abbas Fakhari, Travis Mitchell, Christopher Leonardi, and Diogo Bolster.
Improved locality of the phase-field lattice-boltzmann model for
immiscible fluids at high density ratios.
Phys. Rev. E, 96:053301, Nov 2017.
fakhari_rahimian_2010
Abbas Fakhari and Mohammad H. Rahimian.
Phase-field modeling by the method of lattice boltzmann equations.
Phys. Rev. E, 81:036707, Mar 2010.
LBMsaclaycode
LBM_Saclay code, 2018.
kokkos
H. Carter Edwards, Christian R. Trott, and Daniel Sunderland.
Kokkos: Enabling manycore performance portability through polymorphic
memory access patterns.
Journal of Parallel and Distributed Computing, 74(12):3202 –
3216, 2014.
Domain-Specific Languages and High-Level Frameworks for
High-Performance Computing.
heulens_blanpain_moelans_2011
J. Heulens, B. Blanpain, and N. Moelans.
Phase-field analysis of a ternary two-phase diffusion couple with
multiple analytical solutions.
Acta Materialia, 59(10):3946 – 3954, 2011.
lahiri_abinandanan_choudhury_2017
Arka Lahiri, T. A. Abinandanan, and Abhik Choudhury.
Theoretical and numerical study of growth in multi-component alloys.
Metallurgical and Materials Transactions A, 48:4463–4476,
October 2017.
maugis_et_al_1997
P. Maugis, W.D. Hopfe, J.E. Morral, and J.S. Kirkaldy.
Multiple interface velocity solutions for ternary biphase infinite
diffusion couples.
Acta Materialia, 45(5):1941 – 1954, 1997.
wiemker_2013
Rafael Wiemker.
Total Euler Characteristic as a Noise Measure to aid Transfer
Function Design.
In Mario Hlawitschka and Tino Weinkauf, editors, EuroVis - Short
Papers. The Eurographics Association, 2013.
|
http://arxiv.org/abs/2409.03752v1 | 20240905175912 | Attention Heads of Large Language Models: A Survey | [
"Zifan Zheng",
"Yezhaohui Wang",
"Yuxin Huang",
"Shichao Song",
"Bo Tang",
"Feiyu Xiong",
"Zhiyu Li"
] | cs.CL | [
"cs.CL"
] |
Foundation Model or Finetune? Evaluation of few-shot semantic segmentation for river pollution
Marga Don10009-0001-5435-8935
Stijn Pinson 2 Blanca Guillen Cebrian 2
Yuki M. Asano 1,3
September 9, 2024
===============================================================================================
§ ABSTRACT
Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in various tasks but remain largely as black-box systems. Consequently, their development relies heavily on data-driven approaches, limiting performance enhancement through changes in internal architecture and reasoning pathways. As a result, many researchers have begun exploring the potential internal mechanisms of LLMs, aiming to identify the essence of their reasoning bottlenecks, with most studies focusing on attention heads.
Our survey aims to shed light on the internal reasoning processes of LLMs by concentrating on the interpretability and underlying mechanisms of attention heads. We first distill the human thought process into a four-stage framework: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. Using this framework, we systematically review existing research to identify and categorize the functions of specific attention heads. Furthermore, we summarize the experimental methodologies used to discover these special heads, dividing them into two categories: Modeling-Free methods and Modeling-Required methods. Also, we outline relevant evaluation methods and benchmarks. Finally, we discuss the limitations of current research and propose several potential future directions. Our reference list is open-sourced at <https://github.com/IAAR-Shanghai/Awesome-Attention-Heads>.
§ INTRODUCTION
The Transformer architecture <cit.> has demonstrated outstanding performance across various tasks, such as Natural Language Inference and Natural Language Generation. However, it still retains the black-box nature inherent to Deep Neural Networks (DNNs) <cit.>. As a result, many researchers have dedicated efforts to understanding the internal reasoning processes within these models, aiming to uncover the underlying mechanisms <cit.>. This line of research provides a theoretical foundation for models like BERT <cit.> and GPT <cit.> to perform well in downstream applications. Additionally, in the current era where Large Language Models (LLMs) are widely applied, interpretability mechanisms can guide researchers in intervening in specific stages of LLM inference, thereby enhancing their problem-solving capabilities <cit.>.
Among the components of LLMs, attention heads play a crucial role in the reasoning process. Particularly in the recent years, attention heads within LLMs have garnered significant attention, as illustrated in Figure <ref>. Numerous studies have explored attention heads with specific functions. This paper consolidates these research efforts, organizing and analyzing the potential mechanisms of different types of attention heads. Additionally, we summarize the methodologies employed in these investigations.
§.§ Structure of Our Work
The logical structure and classification method of this paper are illustrated in Figure <ref>. We begin with the background of the problem in Section <ref>, where we present a simplified representation of the LLMs' structures (Section <ref>) and explain the related key terms (Section <ref>). In Section <ref>, we first summarize the four stages of human thought processes from a cognitive neuroscience perspective and apply this framework to analyze the reasoning mechanisms of LLMs. Using this as our classification criterion, we categorize existing work on attention heads, identifying commonalities among heads that contribute to similar reasoning processes (Sections <ref>–<ref>) and exploring the collaborative mechanisms of heads functioning at different stages (Section <ref>).
Investigating the internal mechanisms of models often requires extensive experiments to validate hypotheses. To provide a comprehensive understanding of these methods, we summarize the current experimental methodologies used to explore special attention heads in Section <ref>. We divide these methodologies into two main categories based on whether they require additional modeling: Modeling-Free (Section <ref>) and Modeling-Required (Section <ref>).
In addition to the core sections shown in Figure <ref>, we summarize the evaluation tasks and benchmarks used in relevant studies in Section <ref>.
Furthermore, in Section <ref>, we compile research on the mechanisms of Feed-Forward Networks (FFNs) and Mechanical Interpretability to help deepen our understanding of LLM structures from multiple perspectives.
Finally, in Section <ref>, we offer our insights on the current state of research in this field and outline several potential directions for future research.
§.§ Comparison with Related Surveys
To the best of our knowledge, there is no survey focused on the mechanisms of LLMs' attention heads. Specifically, <cit.> mainly discusses non-Transformer architectures, with little focus on attention heads. The surveys by <cit.> cover older content, primarily focusing on the various attention computation methods that emerged during the early development of the Transformer. However, current LLMs still use the original scaled-dot product attention, indicating that many of the derived attention forms have become outdated. Although <cit.> focuses on the internal structure of LLMs, it only summarizes experimental methodologies and overlooks research findings related to operational mechanisms.
Compared to the aforementioned surveys, the strengths of our work are:
* Focus on the latest research. Although earlier researchers explored the mechanisms of attention heads in models like BERT, many of these conclusions are now outdated. This paper primarily focuses on highly popular LLMs, such as LLaMA and GPT, consolidating the latest research findings.
* An innovative four-stage framework for LLM reasoning. We have distilled key stages of human thought processes by integrating knowledge from cognitive neuroscience, psychology, and related fields. And we have applied these stages as an analogy for LLM reasoning.
* Detailed categorization of attention heads. Based on the proposed four-stage framework, we classify different attention heads according to their functions within these stages, and we explain how heads operating at different stages collaborate to achieve alignment between humans and LLMs.
* Clear summarization of experimental methods. We provide a detailed categorization of the current methods used to explore attention head functions from the perspective of model dependency, laying a foundation for the improvement and innovation of experimental methods in future research.
§.§ Out-of-scope Topics
* This paper primarily targets the attention heads within current mainstream LLM architectures, specifically those with a decoder-only structure. As such, we do not discuss early studies related to the Transformer, such as those focusing on attention heads in BERT-based models.
* Some studies on mechanistic interpretability propose holistic operational principles that encompass embeddings, attention heads, and MLPs. However, this paper focuses exclusively on attention heads. Consequently, Sections <ref> through <ref> do not cover the roles of other components within the Transformer architecture; these are only briefly summarized in Section <ref>.
§ BACKGROUND
§.§ Mathematical Representation of LLMs
To facilitate the discussion in subsequent sections, we first define the relevant notations[Currently, there are two main layer normalization methods in LLMs: Pre-Norm and Post-Norm <cit.>. However, since these are not the focus of this paper, we will omit Layer Normalization in our discussion.].
As shown in Figure <ref>, a model ℳ consists of an embedding layer, L identical decoder blocks, and an unembedding layer. The input to ℳ is a sequence of one-hot token vectors with shape {0,1}^N × |𝒱|, where N is the length of the token sequence and |𝒱| represents the vocabulary size.
After passing through the embedding layer, which applies semantic embedding 𝐖_𝐄∈ℝ^|𝒱| × d and positional encoding 𝐏_𝐄 (e.g., RoPE <cit.>), the one-hot matrix is transformed into the input 𝐗_0,0∈ℝ^N × d for the first decoder, where d represents the dimension of the token embedding (latent vector).
In the ℓ-th (1≤ℓ≤ L) decoder block, there are two residual blocks. Each decoder block contains H attention heads. The first residual block combines the input matrix 𝐗_ℓ,0∈ℝ^N × d with the output 𝐗_ℓ^attn obtained from the multi-head attention operation, producing 𝐗_ℓ,1 (as shown in Equation <ref>). Subsequently, 𝐗_ℓ,1 serves as the input for the second residual block. Here, Attn_ℓ^h(·) (1≤ℓ≤ L, 1≤ h ≤ H) represents the computation function of the h-th attention head in the ℓ-th layer.
𝐗_ℓ^attn = ∑_h=1^HAttn_ℓ^h(𝐗_ℓ,0)
𝐗_ℓ,1 = 𝐗_ℓ,0 + 𝐗_ℓ^attn
Similarly, as shown in Equation <ref>, the second residual block combines 𝐗_ℓ,1 with the output 𝐗_ℓ^ffn obtained after passing through the FFN, yielding the final output 𝐗_ℓ+1,0 of the ℓ-th decoder block. This output also serves as the input for the ℓ+1-th decoder block. Here, FFN_ℓ(·) consists of linear layers (and activation functions) such as GLU (Gated Linear Units), SwiGLU <cit.>, or MoE <cit.>.
𝐗_ℓ^ffn = FFN_ℓ(𝐗_ℓ,1)
𝐗_ℓ+1,0 = 𝐗_ℓ,1 + 𝐗_ℓ^ffn
Here, we will concentrate on the details of Attn_ℓ^h(·). This function can be expressed using matrix operations.
Specifically, each layer's Attn_ℓ^h(·) function corresponds to four low-rank matrices: 𝐖_𝐐_ℓ^h, 𝐖_𝐊_ℓ^h, 𝐖_𝐕_ℓ^h∈ℝ^d ×d/H, 𝐎_ℓ^h∈ℝ^d/H× d. By multiplying 𝐗_ℓ,0 with 𝐖_𝐐_ℓ^h, the query matrix 𝐐_ℓ^h∈ℝ^N ×d/H is obtained. Similarly, the key matrix 𝐊_ℓ^h and the value matrix 𝐕_ℓ^h can be derived.
The function Attn_ℓ^h(·) can then be expressed as Equation <ref> <cit.>.
Attn_ℓ^h(𝐗_ℓ,0) = softmax(𝐐_ℓ^h·𝐊_ℓ^h⊤) ·𝐕_ℓ^h·𝐎_ℓ^h
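For concreteness, a single head of this form can be written out in a few lines of NumPy (an illustrative sketch added here with random weights; practical implementations additionally scale 𝐐𝐊^⊤ by 1/√(d/H) and apply a causal mask before the softmax).

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        ez = np.exp(z)
        return ez / ez.sum(axis=axis, keepdims=True)

    N, d, H = 6, 32, 4                   # sequence length, hidden size, number of heads
    rng = np.random.default_rng(0)
    X = rng.normal(size=(N, d))          # input of one decoder block, X_{l,0}

    # low-rank projection matrices of one head h in layer l
    W_Q, W_K, W_V = (rng.normal(size=(d, d // H)) for _ in range(3))
    O = rng.normal(size=(d // H, d))

    Q, K, V = X @ W_Q, X @ W_K, X @ W_V  # each of shape N x (d/H)
    A = softmax(Q @ K.T)                 # N x N attention pattern
    head_out = A @ V @ O                 # this head's contribution to the output
    print(head_out.shape)                # (6, 32), added into the residual stream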
§.§ Glossary of Key Terms
This paper introduces several specialized terms related to reasoning mechanism modeling and experimental exploration methods. Below, we provide explanations for the key terms [For more definitions of specialized terms, please refer to the work of <cit.>.].
* Circuits. Circuits are abstractions of the reasoning logic in deep models. The model ℳ is viewed as a computational graph. There are two main approaches to modeling circuits. One approach treats the features in the latent space of ℳ as nodes and the transitions between features as edges <cit.>. The other approach views different components of ℳ, such as attention heads and neurons, as nodes; and the interactions between these components, such as residual connections, as edges <cit.>. A circuit is a subgraph of ℳ. Current research on attention heads primarily uses the second definition.
* Residual stream. The residual stream after layer ℓ is the sum of the embedding and the outputs of all layers up to layer ℓ, and is the input to layer ℓ+1. <cit.> offers a perspective of the residual stream as a shared bandwidth. Different layers can transmit information through this shared bandwidth (with lower layers writing information and higher layers reading it), as illustrated in Figure <ref>.
* Knowledge circuit. As defined in <cit.>, a knowledge circuit is a critical subgraph in ℳ to view the knowledge mechanism of Transformers. Knowledge circuits focus on how different components collaborate to express internal knowledge.
* Activation patching & Ablation study. Activation patching involves replacing the activation values in certain layers of a model to analyze the contribution of these layers to the model's decision-making. Ablation study, on the other hand, involves removing a component from the LLM and observing the changes in the output <cit.>.
Both methods require calculating the effect towards output after the operation. Specifically, there are three types of effects: direct effect, indirect effect, and total effect, as shown in Figure <ref>.
The key difference between them is that Activation Patching does not remove components, whereas Ablation Study logically removes a component.
* Logit lens. When calculating effects like those shown in Figure <ref>, the logit lens can quantify this effect. Specifically, it uses the unembedding layer to map an intermediate representation vector to logit values over the vocabulary, allowing for the comparison of logit differences or other metrics <cit.>; a small sketch is given below.
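A minimal sketch of the projection step (added for illustration; the unembedding matrix and the intermediate state are random stand-ins rather than weights of an actual model):

    import numpy as np

    d, vocab = 32, 100
    rng = np.random.default_rng(1)
    W_U = rng.normal(size=(d, vocab))     # unembedding matrix
    h = rng.normal(size=(d,))             # residual-stream state at some layer

    logits = h @ W_U                      # logit lens: project onto the vocabulary
    top = np.argsort(logits)[::-1][:5]    # token ids favoured at this depth
    # interventions are often scored by the change in the logit difference
    # between a target token and a competing token:
    logit_diff = logits[top[0]] - logits[top[1]]
    print(top, logit_diff)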
§ OVERVIEW OF SPECIAL ATTENTION HEADS
Previous research has shown that the decoder-only architecture described in Section <ref> follows the Scaling Law, and it exhibits emergent abilities once the number of parameters reaches a certain threshold <cit.>. Many LLMs that have emerged subsequently demonstrate outstanding performance in numerous tasks, even close to human. However, researchers still do not fully understand why these models are able to achieve such remarkable results. To address this question, recent studies have begun to delve into the internal mechanisms of LLMs, focusing on their fundamental structure—a neural network composed of multi-attention heads and FFNs.
We have observed that many studies concentrate on the functions of attention heads, attempting to explain their reasoning processes. Additionally, several researchers have drawn parallels between the reasoning methods of LLMs and those of humans <cit.>. Therefore, in this section, we will use the framework of human cognitive paradigms as a guiding method to classify the functions of different attention heads.
§.§ How does Brain / Attention Head think?
By summarizing and analyzing existing research, we find that the role of an attention head, as its name suggests, is quite analogous to the functions of the human brain. From a behavioral neuroscience perspective, the process by which the human brain thinks about specific problems can be abstracted into a four-step process: Knowledge Recalling (KR), In-Context Identification (ICI), Latent Reasoning (LR), and Expression Preparation (EP). These four steps can interact and transition between each other, as illustrated in Figure <ref>.
When solving a problem, humans first need to recall the knowledge they have learned that is relevant to the issue at hand. This process is known as Knowledge Recalling (KR). During this stage, the hippocampus integrates memories into the brain's network <cit.> and activates different types of memories as needed <cit.>.
Confronted with the specific text of the problem, humans need to perform In-Context Identification (ICI). This means that the brain not only focuses on the overall structural content of the text <cit.> but also parses the syntactic <cit.> and semantic <cit.> information embedded within it.
Once the brain has acquired the aforementioned textual and memory information, it attempts to integrate this information to derive conclusions, a process known as Latent Reasoning (LR). This stage primarily includes arithmetic operations <cit.> and logical inference <cit.>.
Finally, the brain needs to translate the reasoning results into natural language, forming an answer that can be expressed verbally. This is the Expression Preparation (EP) stage. At this point, the brain bridges the gap between “knowing” and “saying” <cit.>.
As indicated by the arrows in Figure <ref>, these four stages are not executed in a strictly one-direction fashion when humans solve problems; rather, they can jump and switch between each other. For example, the brain may “cycle” through the identification of contextual content (the ICI stage) and then retrieve relevant knowledge based on the current context (the KR stage). Similarly, if latent reasoning cannot proceed further due to missing information, the brain may return to the Knowledge Recalling and In-Context Identification stages to gather more information.
We will now draw an analogy between these four steps and the mechanisms of attention heads, as depicted in Figure <ref>. Previous research has shown that LLMs possess strong contextual learning abilities and have many practical applications <cit.>. As a result, much of the work on interpretability has focused on the ability of LLMs to capture and reason about contextual information. Consequently, the functions of currently known special attention heads are primarily concentrated in the ICI and LR stages, while there are fewer attention heads that operate in the KR and EP stages.
§.§ Knowledge Recalling (KR)
For LLMs, most knowledge is learned during the training or fine-tuning phases, which is embedded in the model's parameters. This form of knowledge is often referred to as LLMs' “parametric knowledge”. Similar to humans, certain attention heads in LLMs recall this internally stored knowledge—such as common sense or domain-specific expertise—to be used in subsequent reasoning. These heads typically retrieve knowledge by making initial guesses or by focusing on specific content within the context, injecting the memory information into the residual stream as initial data.
In general tasks, <cit.> identified that some attention heads function as associative memories, gradually storing and retrieving knowledge during the model's training phase. The so-called Memory Head <cit.> can retrieve content related to the current problem from the parametric knowledge. This content could be knowledge learned during pre-training or experience accumulated during previous reasoning processes.
In specific task scenarios, such as when LLMs tackle Multiple Choice Question Answering (MCQA) problems, they may initially use Constant Head to evenly distribute attention scores across all options, or they might use Single Letter Head to assign a higher attention score to one option while giving lower scores to others, thereby capturing all potential answers <cit.>.
Additionally, in the context of Binary Decision Tasks (BDT)[A Binary Decision Task is a problem where the solution space is discrete and contains only two options, such as yes-no questions or answer verification.], <cit.> found that LLMs often exhibit a negative bias when handling such tasks. This could be because the model has learned a significant amount of negative expressions related to similar tasks from prior knowledge during training. Consequently, when the model identifies a given text as a binary task, a Negative Head may “preemptively” choose the negative answer due to this prior bias.
§.§ In-Context Identification (ICI)
Understanding the in-context nature of a problem is one of the most critical processes in effectively addressing it. Just as humans read a problem statement and quickly pick up on various key pieces of information, some attention heads in LLMs also focus on these elements. Specifically, attention heads that operate during the ICI stage use their QK matrices to focus on and identify overall structural, syntactic, and semantic information within the in-context. This information is then written into the residual stream via OV matrices.
§.§.§ Overall Structural Information Identification
Identifying the overall structural information within a context mainly involves LLMs attending to content in special positions or with unique occurrences in the text.
Previous Head <cit.> (also referred to as Positional Head in <cit.>) attend to the positional relationships within the token sequence. They capture the embedding information of the current token and the previous token.
Rare Words Head <cit.> focus on tokens that appear with the lowest frequency, emphasizing rare or unique tokens.
Duplicate Token Head excel at capturing repeated content within the context, giving more attention to tokens that appear multiple times <cit.>.
As LLMs increasingly handle long texts, identifying structural information is also related to the “Needle-in-a-Haystack” capability of attention heads. (Global) Retrieval Heads can accurately locate specific tokens in long texts <cit.>. These heads enable LLMs to achieve excellent reading and in-context retrieval capabilities.
§.§.§ Syntactic Information Identification
For syntactic information identification, sentences primarily consist of subjects, predicates, objects, and clauses. Syntactic Head can distinctly identify and label nominal subjects, direct objects, adjectival modifiers, and adverbial modifiers.
Some words in the original sentence may get split into multiple subwords because of the tokenizer (e.g., “happiness” might be split into “happi” and “ness”). The Subword Merge Head focus on these subwords and merge them into one complete word <cit.>.
Additionally, <cit.> proposed the Mover Head cluster, which can be considered as “argument parsers”. These heads often copy or transfer a sentence's important information (such as the subject's position) to the [END] position[The [END] position refers to the last token's position in the sentence being decoded by the LLM. Many studies indicate that summarizing contextual information at this position facilitates subsequent reasoning and next-token prediction.].
Name Mover Head and Backup Name Mover Head can move the names in the text to the [END] position.
Letter Mover Head can extract the first letters of certain words in the context and aggregate them at the [END] position <cit.>.
Conversely, Negative Name Mover Head prevent name information from being transferred to the [END] position <cit.>.
§.§.§ Semantic Information Identification
As for semantic information identification,
Context Head <cit.> extract information from the context that is related to the current task.
Further, Content Gatherer Head <cit.> “move” tokens related to the correct answer to the [END] position, preparing to convert them into the corresponding option letter for output.
The Sentiment Summarizer proposed by <cit.> can summarize adjectives and verbs that express sentiment in the context near the [SUM] position[[SUM] position is next to [END] position.], making it easier for subsequent heads to read and reason.
Capturing the message about relationship is also important for future reasoing. Semantic Induction Head <cit.> capture semantic relationships within sentences, such as part-whole, usage, and category-instance relationships.
Subject Head and Relation Head <cit.> focus on subject attributes and relation attributes, respectively, and then inject these attributes into the residual stream.
§.§ Latent Reasoning (LR)
The KR and ICI stages focus on gathering information, while Latent Reasoning (LR) is where all the collected information is synthesized and logical reasoning occurs. Whether in humans or LLMs, the LR stage is the core of problem-solving. Specifically, the QK matrix of a head performs implicit reasoning based on information read from the residual stream, and the reasoning results or signals are then written back into the residual stream through the OV matrix.
§.§.§ In-context Learning
In-context Learning is one of the most widely discussed areas. It primarily includes two types: Task Recognition (TR) and Task Learning (TL) <cit.>. Both involve learning from context to infer the problem's solution, but they differ in their reliance on semantic information. TR has labels with clear semantics, such as “positive” and “negative” in sentiment classification. In contrast, TL depends on learning the specific mapping function between example-label pairs, where the example and label do not have a semantic connection.
For Task Recognition: Summary Reader <cit.> can read the information summarized at the [SUM] position during the ICI stage and use this information to infer the corresponding sentiment label.
<cit.> proposed that the output of certain mid-layer attention heads can combine into a Function Vector. These heads abstract the core features and logical relationships of a task, based on the semantic information identified during ICI, and thereby trigger task execution.
For Task Learning, the essence of solving these problems is enabling LLMs to inductively find patterns.
Induction Heads are among the most widely studied attention heads <cit.>. They capture patterns such as “… [A][B] … [A]”, where token [B] follows token [A], and predict that the next token should be [B]. Specifically, an Induction Head obtains information about all tokens in the context, together with previous-token information from a Previous Head. It then matches this with the information at the [END] position to perform further reasoning.
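A common way to test for this behaviour, sketched below, scores how much attention flows from a repeated token back to the position right after its first occurrence (a simplified diagnostic of our own; it assumes the head's full attention pattern on the sequence is available):

import numpy as np

def induction_match_score(attn, tokens):
    # attn: (N, N) attention pattern of one head; tokens: length-N token-id sequence
    first_seen, scores = {}, []
    for j, tok in enumerate(tokens):
        if tok in first_seen:
            target = first_seen[tok] + 1           # the token that followed the first occurrence
            if target < j:
                scores.append(float(attn[j, target]))
        else:
            first_seen[tok] = j
    return float(np.mean(scores)) if scores else 0.0   # high scores suggest induction behaviour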
Induction Head tends to strictly follow a pattern once identified and complete fill-in-the-blank reasoning. However, in most cases, the real problem will not be identical to the examples—just as a student's exam paper will not be exactly the same as their homework. To address this, <cit.> introduced the In-context Head, whose QK matrix calculates the similarity between information at the [END] position and each label. The OV matrix then extracts label features and weights them according to the similarity scores to determine the final answer (take all labels into consideration rather than only one label).
§.§.§ Effective Reasoning
Some studies have identified heads related to reasoning effectiveness. Truthfulness Head <cit.> and Accuracy Head <cit.> are heads highly correlated with the truthfulness and accuracy of answers. They help the model infer truthful and correct results in QA tasks, and modifying the model along their activation directions can enhance LLMs' reasoning abilities.
However, not all heads positively impact reasoning. For example, Vulnerable Head <cit.> are overly sensitive to certain specific input forms, making them susceptible to irrelevant information and leading to incorrect results. During reasoning, it is advisable to minimize the influence of such heads.
§.§.§ Task Specific Reasoning
Finally, some heads are specialized for specific tasks.
In MCQA tasks, Correct Letter Head <cit.> can complete the matching between the answer text and option letters by comparing the index of each option, determining the final answer choice.
When dealing with tasks related to sequential data, Iteration Head <cit.> can iteratively infer the next intermediate state based on the current state and input.
For arithmetic problems, Successor Head <cit.> can perform increment operations on ordinal numbers.
These examples illustrate how various attention heads specialize in different aspects of reasoning, contributing to the overall problem-solving capabilities of LLMs.
§.§ Expression Preparation (EP)
During the Expression Preparation (EP) stage, LLMs need to align their reasoning results with the content that needs to be expressed verbally. Specifically, EP heads may first aggregate information from various stages.
<cit.> proposed the Mixed Head, which can linearly combine and aggregate information written into the residual stream by heads from the ICI and LR stages (such as Subject Heads , Relation Heads, Induction Heads, etc.). The aggregated results are then mapped onto the vocabulary logits value via the OV matrix.
Some EP heads have a signal amplification function. Specifically, they read information about the context or reasoning results from the residual stream, then enhance the information that needs to be expressed as output, and write it back into the stream.
Amplification Head <cit.> and Correct Head <cit.> amplify the signal of the correct choice letter in MCQA problems near the [END] position. This amplification ensures that after passing through the Unembedding layer and softmax calculation, the correct choice letter has the highest probability.
In addition to information aggregation and signal amplification, some EP heads are used to align the model's reasoning results with user's instructions.
In multilingual tasks, the model may sometimes fail to respond in the target language that user wanted. Coherence Head <cit.> ensure linguistic consistency in the generated content. They help LLMs maintain consistency between the output language and the language of user's query when dealing with multilingual inputs.
Faithfulness Head <cit.> are strongly associated with the faithfulness of CoT[CoT stands for Chain-of-Thought <cit.>. Faithfulness refers to whether the model's generated response accurately reflects its internal reasoning process and behavior, i.e., the consistency between output and internal reasoning.]. Enhancing the activation of these heads allows LLMs to better align their internal reasoning with the output, making the CoT results more robust and consistent.
However, for some simple tasks, LLMs might not require special EP heads to refine language expression. At this situation, the information written back into the residual stream during the ICI and LR stages may be directly suitable for output, i.e., skip the EP stage and select token with highest probability.
§.§ How Attention Heads Working Together?
If we divide the layers of a LLM (e.g., GPT-2 Small) into three segments based on their order—shallow (e.g., layers 1-4), middle (e.g., layers 5-8), and deep (e.g., layers 9-12)—we can map the relationship between the stages where heads act and the layers they are in, according to Section <ref>-<ref>. The figure is illustrated in Figure <ref>.
Some researchers have explored the potential semantic meanings embedded in the query vector 𝐪_ℓ, j^h=𝐐_ℓ^h[:, j] and key vector 𝐤_ℓ, j^h=𝐊_ℓ^h[:, j] when attention heads collaborate. For example, in MCQA problem, during the ICI stage, a Content Gatherer Head moves the tokens of the correct answer text to the [END] position. Then, in the LR stage, the Correct Letter Head uses the information passed by the Content Gatherer Head to identify the correct option. The query vector in this context effectively asks, “Are you the correct label?” while recalling the gathered correct answer text. The key vector represents, “I'm choice [A/B/C/D], with corresponding text [...]”. After matching the right key vector to the query vector, we can get the correct answer choice.
Consider the Parity Problem[The Parity Problem involves determining the parity (odd or even) of the sum of a given sequence. The sequence only consists of 0/1 values. For instance, the sum of the sequence “001011” is 3, so it is an odd sequence. Let s_i indicates the parity of the sum of first i digits. So the corresponding s_[0:t] to “001011” is eeeooeo, where e represents even and o represents odd. When querying an LLM, the prompt format is “[0/1 seq.] [EOI] [s_[0:t]] [END]”, with [EOI] as the End-Of-Input token. The expected answer is the final state s_t.].
During the ICI stage, a Mover Head transmits the position of the [EOI] token, which separates the input sequence and the intermediate state sequence, to the [END] position.
In the LR stage, an Iteration Head first reads the [EOI]'s position index from [END] and uses its query vector to ask, “Are you position t?” The key vector for each token responds, “I'm position t^'.” This querying process identifies the last digit in the input sequence, which, combined with s_t-1, allows the model to calculate s_t.
Further research has explored integrating multiple special attention heads into a cohesive working mechanism. Take the IOI (Indirect Object Identification) task, which tests the model's ability to deduce the indirect object in a sentence, as an example. Figure <ref> outlines the process.
* In the KR stage, the Subject Head and Relation Head focus on “Mary” and “bought flowers for”, respectively, triggering the model to recall that the answer should be a person's name <cit.>.
* Then in the ICI stage, the Duplicate Head identifies “John”, while the Name Mover Head focuses on both “John” and “Mary”.
* During the iterative stages of ICI and LR, the Previous Head and Induction Head work together to attend to “John”. All this information is written to the residual stream. Then the Inhibition Head detects that “John” appears multiple times and is the subject, thereby suppressing the logits value of “John”.
* Finally in the stage of EP, the Amplification Head boosts the logits value for “Mary”.
§ UNVEILING THE DISCOVERY OF ATTENTION HEADS
How can we uncover the specific functions of the special heads mentioned in Section <ref>? In this section, we unveil these discovery methods. Current research primarily employs experimental methods to validate the working mechanisms of those heads. We categorize the mainstream experimental approaches into two types based on whether they require the construction of new models: Modeling-Free and Modeling-Required. The classification scheme and method examples are shown in Figure <ref>.
§.§ Modeling-Free
Modeling-Free methods do not require setting up new models, making them widely applicable in interpretability research. These methods typically involve altering a latent state computed during the LLMs' reasoning process and then using Logit Lens to map the intermediate results to token logits or probabilities. By calculating the logit (or probability) difference, researchers can infer the impact of the change. Modeling-Free methods primarily include Activation Patching and Ablation Study. However, due to the frequent interchange of these terms in the literature, a new perspective is required to distinguish them. This paper further divides these methods into Modification-Based and Replacement-Based Methods based on how the latent state representation is altered, as summarized in Table <ref>.
Modification-Based methods involve altering the values of a specific latent state while retaining some of the original information.
Directional Addition retains part of the information in the original state and then directionally adds some additional information. For instance, <cit.> input texts containing positive and negative sentiments into LLMs, obtaining positive and negative representations from the latent state. The difference between these two representations can be seen as a sentiment direction in the latent space. By adding this sentiment direction vector to the activation of the attention head under investigation, the effect on the output can be analyzed to determine whether the head has the ability to summarize sentiment.
Conversely, Directional Subtraction retains part of the original state information while directionally removing some of it <cit.>. This method can be used to investigate whether removing specific information from a latent state affects the model's output in a significant way, thereby revealing whether certain attention heads can backup or fix the deleted information.
In contrast to Modification-Based methods, Replacement-Based methods discard all information in a specific latent state and replace it with other values.
Zero Ablation and Mean Ablation replace the original latent state with zero values or the mean value of latent states across all samples from a dataset, respectively. This can logically “eliminate” the head or cause it to lose its special function, allowing researchers to assess its importance.
Naive Activation Patching is the traditional patching method. It involves using a latent state obtained from a corrupted prompt to replace the original latent state at the corresponding position. For example, consider the original prompt “John and Mary went to the store.” Replacing “Mary” with “Alice” results in a corrupted prompt. By systematically replacing the latent state obtained under the original prompt with the one obtained under the corrupted prompt across each head, researchers can preliminarily determine which head has the ability to focus on names based on the magnitude of the impact <cit.>.
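The sketch below illustrates naive activation patching for a single module using PyTorch forward hooks. It is our own simplified illustration: it assumes a Hugging Face-style causal LM whose forward pass returns .logits, and modules whose outputs are tuples would need slight adaptation.

import torch

def patch_module_output(model, clean_ids, corrupt_ids, module):
    # Cache the module's activation on the corrupted prompt, then re-run the clean
    # prompt with that activation patched in, and measure the logit difference.
    cache = {}

    def save_hook(mod, inputs, output):
        cache["corrupt"] = output.detach()

    def patch_hook(mod, inputs, output):
        return cache["corrupt"]                    # returning a value overrides the module output

    handle = module.register_forward_hook(save_hook)
    with torch.no_grad():
        model(corrupt_ids)
    handle.remove()

    handle = module.register_forward_hook(patch_hook)
    with torch.no_grad():
        patched = model(clean_ids).logits[0, -1]
    handle.remove()

    with torch.no_grad():
        clean = model(clean_ids).logits[0, -1]
    return patched - clean                         # per-token logit change at the last position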
§.§ Modeling-Required
Modeling-Required methods involve explicitly constructing models to delve deeper into the functions of specific heads. Based on whether the newly constructed models require training, we further categorize Modeling-Required methods into Training-Required and Training-Free methods, as summarized in Table <ref>.
Training-Required methods necessitate training the newly established models to explore mechanisms.
Probing is a common training-based method. This approach extracts activation values from different heads as features and categorizes heads into different classes as labels. A classifier is then trained on this data to learn the relationship between the activation patterns and the head's function. Subsequently, the trained classifier can serve as a probe to detect which heads within the LLMs possess which functions <cit.>.
Another approach involves training a simplified transformer model on a clean dataset for a specific task. Researchers investigate whether the heads in this simplified model exhibit certain functionalities, which can then be extrapolated whether similar heads in the original model possess the same capabilities. This method reduces computational costs during training and analysis, while the constructed model remains simple and highly controllable <cit.>.
Training-Free methods primarily involve designing scores that reflect specific phenomena. These scores can be viewed as mathematical models that construct an intrinsic relationship between the attributes of components and certain model characteristics or behaviors.
For instance, when investigating Retrieval Heads, <cit.> defined a Retrieval Score. This score represents the frequency with which a head assigns the highest attention score to the token it aims to retrieve across a sample set, as shown in Equation <ref>. A high Retrieval Score indicates that the head possesses a strong “Needle in a Haystack” ability.
Similarly, when exploring Negative Heads, <cit.> introduced the Negative Attention Score (NAS), as shown in Equation <ref>. Here, i denotes the i-th token in the input prompt, and t_Yes and t_No represent the positions of “Yes” and “No” in the prompt, respectively. A high NAS suggests that the head focuses more on negative tokens during decision-making, making it prone to generating negative signals.
RetrievalScore_ℓ^h = |𝒟_right∩𝒟_all|/|𝒟_all|
NAS_ℓ^h = ∑_i(Attn_ℓ^h[i, t_Yes] + Attn_ℓ^h[i, t_No]) ·log(Attn_ℓ^h[i, t_No]/Attn_ℓ^h[i, t_Yes])
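As a sketch, the Retrieval Score above can be computed from cached attention patterns roughly as follows (variable names and bookkeeping are ours; it simply counts how often the head's strongest attention from the current position lands on the needle token):

def retrieval_score(attn_patterns, needle_positions):
    # attn_patterns: per-sample (N, N) attention maps of one head
    # needle_positions: per-sample index of the token the head should retrieve
    hits = 0
    for attn, needle in zip(attn_patterns, needle_positions):
        if attn[-1].argmax() == needle:            # does the current position attend most to the needle?
            hits += 1
    return hits / len(attn_patterns)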
In addition to scoring, researchers have proposed other novel training-free modeling methods.
<cit.> introduced the concept of an Information Flow Graph, where nodes represent tokens and edges represent information transfer between tokens via attention heads or FFNs. By calculating and filtering the importance of each edge to the node it points to, key edges can be selected to form a subgraph. This subgraph can then be viewed as the primary internal mechanism through which LLMs perform reasoning.
§ EVALUATION
This section summarizes the benchmarks and datasets used in the interpretability research of attention heads. Based on the different evaluation goals during the mechanism exploration process, we categorize them into two types: Mechanism Exploration Evaluation and Common Evaluation. The former is designed to evaluate the working mechanisms of specific attention heads, while the latter assesses whether enhancing or suppressing the functions of certain special heads can improve the overall performance of LLMs.
§.§ Mechanism Exploration Evaluation
To delve deeper into the internal reasoning paths of LLMs, many researchers have synthesized new datasets based on existing benchmarks. The primary feature of these datasets is the simplification of problem difficulty, with elements unrelated to interpretability, such as problem length and query format, being standardized. As shown in Table <ref>, these datasets essentially evaluate the model's knowledge reasoning and knowledge recalling capabilities, but they simplify the answers from a paragraph-level to a token-level.
Take exploring sentiment-related heads as an example, <cit.> created the ToyMovieReview and ToyMoodStory datasets, with specific prompt templates illustrated in Figure <ref>. Using these datasets, researchers employed sampling methods to calculate the activation differences of each head for positive and negative sentiments. This allowed them to identify heads with significant differences as potential candidates for the role of Sentiment Summarizers.
§.§ Common Evaluation
The exploration of attention head mechanisms is ultimately aimed at improving the comprehensive capabilities of LLMs. Many researchers, upon identifying a head with a specific function, have attempted to modify that type of head—such as by enhancing or diminishing its activation—to observe whether the LLMs' responses become more accurate and useful. We classify these Common Evaluation Benchmarks based on their evaluation focus, as shown in Table <ref>. The special attention heads discussed in this paper are closely related to improving LLMs' abilities in four key areas: knowledge reasoning, sentiment analysis, long context retrieval, and text comprehension.
§ ADDITIONAL TOPICS
In this section, we summarize various works related to the LLMs interpretability. Although these works may not introduce new special heads as discussed in Section <ref>, they delve into the underlying mechanisms of LLMs from other perspectives. We will elaborate on these studies under two categories: FFN Interpretability and Machine Psychology.
§.§ FFN Interpretability
As discussed in Section <ref>, apart from attention heads, FFNs also play a significant role in the LLM reasoning process. This section primarily summarizes research focused on the mechanisms of FFNs and the collaborative interactions between attention heads and FFNs.
One of the primary functions of FFNs is to store knowledge acquired during the pre-training phase.
<cit.> proposed that factual knowledge stored within the model is often concentrated in a few neurons of the MLP.
<cit.> observed that the neurons in the FFN of GPT models can be likened to key-value pairs, where specific keys can retrieve corresponding values, i.e., knowledge.
<cit.> discovered a hierarchical storage of knowledge within the model's FFN, with lower layers storing syntactic and semantic information, and higher layers storing more concrete factual content.
FFNs effectively complement the capabilities of attention heads across the four stages described in Section <ref>. The collaboration between FFNs and attention heads enhances the overall capabilities of LLMs.
<cit.> proposed that attention heads and FFNs can work together to enrich the representation of a subject and then extract its related attributes, thus facilitating factual information retrieval during the Knowledge Recall (KR) stage.
<cit.> found that, unlike attention heads, which focus on global information and perform aggregation, FFNs focus only on a single representation and perform local updates. This complementary functionality allows them to explore textual information both in breadth (attention heads) and depth (FFNs).
In summary, each component of LLMs plays a crucial role in the reasoning process. The individual contributions of these components, combined with their interactions, accomplish the entire process from Knowledge Recalling to Expression.
§.§ Machine Psychology
Current research on the LLMs interpretability often draws parallels between the reasoning processes of these models and human thinking. This suggests the need for a more unified framework that connects LLMs with human cognition. The concept of Machine Psychology has emerged to fill this gap <cit.>, exploring the cognitive activities of AI through psychological paradigms.
Recently, <cit.> and <cit.> have proposed different approaches to studying machine psychology.
Hagendorff's work focuses on using psychological methods to identify new abilities in LLMs, such as heuristics and biases, social interactions, language understanding, and learning. His research suggests that LLMs display human-like cognitive patterns, which can be analyzed to improve AI interpretability and performance.
Johansson's framework integrates principles of operant conditioning <cit.> with AI systems, emphasizing adaptability and learning from environmental interactions. This approach aims to bridge gaps in AGI research by combining insights from psychology, cognitive science, and neuroscience.
Overall, Machine Psychology provides a new perspective for analyzing LLMs. Psychological experiments and behavioral analyses may lead to new discoveries about these models. As LLMs are increasingly applied across various domains of society, understanding their behavior through a psychological lens becomes increasingly important, which offers valuable insights for developing more intelligent AI systems.
§ CONCLUSION
§.§ Limitations in Current Research
Firstly, we observe that the application scenarios explored in current research are relatively simple and limited to specific types of tasks, lacking generalizability. For instance, studies like <cit.> and <cit.> have discovered reasoning circuits in LLMs through tasks such as the IOI task and the Color Object Task. However, these circuits have not been validated across other tasks, making it difficult to prove whether these mechanisms are universally applicable.
Secondly, most research focuses on the mechanisms of individual heads, with only a few researchers delving into the collaborative relationships among multiple heads. As a result, the existing work lacks a comprehensive framework for understanding the coordinated functioning of all the attention heads in LLMs.
Finally, the conclusions of most studies lack mathematical proofs. Many studies start by proposing a hypothesis about a circuit or mechanism based on an observed phenomenon, followed by experiments designed to validate the hypothesis. The downside of this research approach is that experiments cannot establish the theoretical soundness of the mechanism, nor can they determine whether the mechanism is merely coincidental.
§.§ Future Directions and Challenges
Building on the limitations discussed above and the content presented earlier, this paper outlines several potential research directions for the future:
* Exploring mechanisms in more complex tasks. Investigate whether certain attention heads possess special functions in more complex tasks, such as open-ended question answering and math problems.
* Mechanism's robustness against prompts. Research has shown that current LLMs are highly sensitive to prompts, with slight changes potentially leading to opposite responses <cit.>. Future work could analyze this phenomenon through the lens of attention head mechanisms and propose solutions to mitigate this issue.
* Developing new experimental methods. Explore new experimental approaches, such as designing experiments to verify whether a particular mechanism is indivisible or whether it has universal applicability.
* Building a Comprehensive Interpretability Framework. This framework should encompass both the independent and collaborative functioning mechanisms of most attention heads and other components.
* Integrating Machine Psychology. Incorporate insights from Machine Psychology to construct an internal mechanism framework for LLMs from an anthropomorphic perspective, understanding the gaps between current LLMs and human cognition and guiding targeted improvements.
§ LIMITATION
Current research on the interpretability of LLMs’ attention heads is relatively scattered, primarily focusing on the functions of individual heads, with a lack of overarching frameworks. As a result, the categorization of attention head functions from the perspective of human cognitive behavior in this paper may not be perfectly orthogonal, potentially leading to some overlap between different stages.
|
http://arxiv.org/abs/2409.03346v1 | 20240905084544 | Sketch: A Toolkit for Streamlining LLM Operations | [
"Xin Jiang",
"Xiang Li",
"Wenjia Ma",
"Xuezhi Fang",
"Yiqun Yao",
"Naitong Yu",
"Xuying Meng",
"Peng Han",
"Jing Li",
"Aixin Sun",
"Yequan Wang"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
§ ABSTRACT
Large language models (LLMs) represented by the GPT family have achieved remarkable success. The characteristics of LLMs lie in their ability to accommodate a wide range of tasks through a generative approach. However, the flexibility of their output format poses challenges in controlling and harnessing the model's outputs, thereby constraining the application of LLMs in various domains. In this work, we present Sketch, an innovative toolkit designed to streamline LLM operations across diverse fields. Sketch comprises the following components: (1) a suite of task description schemas and prompt templates encompassing various NLP tasks; (2) a user-friendly, interactive process for building structured output LLM services tailored to various NLP tasks; (3) an open-source dataset for output format control, along with tools for dataset construction; and (4) an open-source model based on LLaMA3-8B-Instruct that adeptly comprehends and adheres to output formatting instructions. We anticipate this initiative to bring considerable convenience to LLM users, achieving the goal of “plug-and-play” for various applications. The components of Sketch will be progressively open-sourced at <https://github.com/cofe-ai/Sketch>.
§ INTRODUCTION
Generative pre-trained large language models (LLMs) have achieved remarkable success, with notable examples including GPT <cit.>, LLaMA <cit.>, and FLM <cit.> series. One of the key advantages of these models lies in their powerful generalization capabilities: a single model is capable of handling a diverse range of tasks.
However, accurately generating formatted outputs, such as JSON, remains challenging for LLMs because they do not always strictly follow instructions.
On the demand side, AI-driven applications urgently require the integration of structured outputs (JSON) from LLMs into their data streams. This has heightened the urgency for LLMs to produce controlled and structured outputs as demanded.
The requirement for structured outputs from LLMs can be resolved through a multitude of approaches.
In-context learning is a typical approach. It not only enhances model performance but also offers a certain degree of format control without incurring additional computational costs for model fine-tuning.
However, this approach faces challenges, such as difficulty in determining when to end the generation.
It also demands long-context capability for complex questions, as it relies on extensive in-context examples to ensure accurate decision-making.
Moreover, tasks that require complex constraints on format and content, such as relation extraction and event extraction, pose significant difficulties for in-context learning.
Supervised fine-tuning (SFT) refers to the process of training a pre-trained model on a labelled dataset specifically tailored for a particular task.
Although SFT can enhance performance on specific tasks and has generalization capabilities, its ability to control the format of the output remains unsatisfactory.
After all, the integration of LLM outputs into applications typically demands the output format that is entirely compliant with specified requirements, a feat that LLMs, proficient in “next token prediction”, are unable to ensure.
Another issue is that, to the best of our knowledge, there is a lack of open-source models and datasets specifically addressing the problem of formatted output control.
This somewhat limits the application of LLMs across various fields.
To ensure that the outputs of LLMs conform to formatting requirements, numerous decoding control tools (guidance[<https://github.com/guidance-ai/guidance>], outlines<cit.>, llama.cpp[<https://github.com/ggerganov/llama.cpp>], lm-format-enforcer[<https://github.com/noamgat/lm-format-enforcer>] ) based on regular expressions or context-free grammars (CFGs) have been developed.
These tools first convert the user's requirements for output format into formal languages. Under the constraints of these formal languages, these models could decode responses that meet the formatting requirements. More importantly, as these tools are involved in the decoding process of the model, they could potentially impair the model's performance<cit.>, especially if the model itself is not adept at generating structured outputs.
To address those issues, an open-source model that excels in generating structured responses according to requirements, along with a framework for streamlining various LLM-based operations, holds significant value.
In this work, we introduce Sketch, a toolkit designed to assist users in effectively operating LLMs and generating results in their expected format.
The core idea of Sketch is as follows: targeting various NLP tasks, we establish a collection of task description schemas, within which users can delineate their own tasks, including task objectives, labelling systems, and, most critically, the specifications for the output format. An LLM can then be deployed out of the box to handle these unfamiliar tasks, ensuring the correctness and conformity of the output format. This approach not only streamlines the process for users but also enhances the reliability and precision of the model's outputs, making it a versatile and robust solution for a wide array of NLP applications.
The main contributions are as follows:
* We propose Sketch, an innovative operating framework simplifying the process for LLM users, enabling “plug-and-play” functionality for task-specific applications with predefined schemas. The proposed framework makes it easier to instantiate and manage NLP tasks.
* To optimize performance within the Sketch framework, we build a dataset and conduct model fine-tuning based on LLaMA3-8B-Instruct, ensuring superior task handling and output consistency.
Both the dataset and fine-tuned model will soon be made available to the public.
* By integrating constrained decoding frameworks, Sketch ensures precise control over the model's output format, enhancing the reliability and precision of outcomes and facilitating the direct application of large models in industry settings.
§ ARCHITECTURE
Sketch is designed to enable controlled formatting and easy interaction with LLMs. In this section, we detail the architecture of Sketch and how to use it. Figure <ref> illustrates the concepts and internal workflow of Sketch. The workflow consists of four steps: schema selection, task instantiation, prompt packaging, and generation. In practical applications, the complex aspects of this process are transparent to the user.
First, users are guided to choose the appropriate schema from a predefined set that aligns with the specific NLP task requirements. A schema, in essence, is a class (or a JSON Schema [<https://json-schema.org>] in practice) that standardizes the user's description of tasks.
Second, in the task instantiation phase, users populate the chosen schema with task-specific details such as description, label set, choice type, and output format, resulting in a task instance(in JSON format) that adheres to the corresponding schema.
Third, based on the task instance, the prompt packaging step involves automatically converting the task input into a structured prompt tailored for LLM interaction.
Last, during generation, Sketch not only runs model inference to obtain the anticipated response but can also optionally integrate lm-format-enforcer, a control architecture that constrains LLM outputs to comply with the specified output format.
§.§ Schema Selection
Schema is the bridge between tasks and LLMs.
It outlines a descriptive framework for each kind of task based on the task-specific characteristics.
A schema can be represented by either a Pydantic model or a JSON Schema.
When customizing a specific task, users are advised to select the most appropriate schema and instantiate the task within its constraints.
This process can be achieved through a Python API, and we also provide a more intuitive interactive method in the form of filling out a form generated by Sketch based on the schema.
To date, as the initial phase of Sketch's development, we have experimentally built a set of schemas for tasks, including over ten subcategories under the three main categories of text classification, text generation, and information extraction, as shown in Table <ref>.
A selection of the schemas we have crafted is showcased in Appendix <ref>.
For an extensive view of the task schemas available, please visit our project repository at <https://github.com/cofe-ai/Sketch>
§.§ Task Instantiation
We define Task Instance as a standardized description of a particular task within the constraints of the schema it belongs to, and the process of creating it by the user is referred to as Task Instantiation. A task instance typically includes the following basic fields:
Task specification fields delineate the task, which may encompass the “taskDesc” field detailing the task's purpose, along with the “labelSet” and “choiceType” fields that respectively define the classification schema and the number of options. We establish different required fields for various tasks.
Output format field specifies and constrains the format that users expect the model to output. We choose JSON schema as the descriptive language. This field serves a dual function: it is integrated into the prompt to direct the model's output, and it is also converted into a decoding control mechanism in the form of regular expressions, typically enforced through finite state machines (FSMs). This strategy intervenes in the model's decoding process to guarantee that the output is 100% compliant in form with the users' expectations.
A comprehensive illustration of task instances, replete with intricate details and concrete examples, can be found in Appendix <ref>.
§.§ Prompt Packaging
The process of packaging an instantiated task and input is crucial for ensuring LLMs understand the task requirements and process the input correctly. This step involves combining the structured task description with the user's input data into a format that is optimized for interaction with the LLM.
Input Integration. The user's input, whether it be a common text snippet or any other form of information relevant to the task, will be integrated into the prompt most intuitively. This integration is guided by a prompt template associated with the schema, ensuring that the input is presented in a manner that is coherent and comprehensible to LLMs.
As shown in Figure <ref>, for a NER task, the packaged prompt might include [Task Description], [Label Architecture], [Output Format (Json Schema)], and [Input Data] to be processed. This ensures that the LLM understands the task criteria and outputs the result in the desired format.
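As an illustrative sketch (the exact wording and field layout of Sketch's real templates may differ), packaging a NER task instance with an input could look like this:

import json

def package_prompt(task, text):
    # task: an instantiated task as a dict; text: the raw input to be processed
    return (
        f"[Task Description]\n{task['taskDesc']}\n\n"
        f"[Label Architecture]\n{json.dumps(task['entityTypes'])}\n\n"
        f"[Output Format (JSON Schema)]\n{json.dumps(task['outputFormat'])}\n\n"
        f"[Input Data]\n{text}\n"
    )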
§.§ Generation
The final step in the workflow of Sketch involves the interaction with LLMs to generate the desired output.
Sketch is able to generate the expected response directly with good performance, and more precise control methods are also available. Throughout this process, we ensure that the model's output conforms to the required format from two perspectives.
Constrained Generation. Considering that even with meticulous fine-tuning, LLMs cannot guarantee 100% accuracy in output format, we integrate a mature decoding control framework, lm-format-enforcer. It employs CFG to ensure that the model’s responses align perfectly with the predefined output format.
At the same time, recognizing that constraints on the decoding process may impact the model's performance, this strict output format control is made optional in Sketch.
Output Validation. Given that not all JSON Schema properties are supported by decoding control frameworks, the output produced by the LLM cannot be assured to adhere to the constraints of the specified output format. To ensure compliance with the expected format, we employ the jsonschema tool[<https://github.com/python-jsonschema/jsonschema>] for validation. For outputs that do not meet the expected format, we take measures such as resampling or directly throwing exceptions.
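A sketch of this validate-or-resample loop is shown below (our simplified illustration; the hypothetical sample() callable stands in for one round of model inference):

import json
import jsonschema

def validated_generate(sample, output_schema, max_retries=3):
    # sample: callable that runs the LLM once and returns the raw response string
    for _ in range(max_retries):
        raw = sample()
        try:
            obj = json.loads(raw)
            jsonschema.validate(obj, output_schema)    # raises if the schema is violated
            return obj
        except (json.JSONDecodeError, jsonschema.ValidationError):
            continue                                   # resample on malformed output
    raise ValueError("no schema-compliant output within the retry budget")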
By following these detailed steps, Sketch ensures that users can effectively utilize LLMs for a variety of NLP tasks, with the assurance that the outputs will be both accurate and in the correct format. This streamlined process makes it easier for users to interact with complex models and harness their power for practical applications.
§.§ Code Example
Listing <ref> demonstrates the basic usage of Sketch through a simple named entity recognition (NER) task. Sketch is still under development prior to its release, and the APIs may change at any time.
Listing: Example of Sketch's Usage

import llm_sketch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CofeAI/Sketch-8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("CofeAI/Sketch-8B")

my_ner_task = llm_sketch.schemas.NER(
    taskDesc="Extract the named entities from the given text.",
    entityTypes=[
        {"name": "person"},
        {"name": "organization"},
        {"name": "location"},
    ],
    outputFormat={
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "the entity name"},
                "entity_type": {
                    "type": "string",
                    "description": "entity type",
                    "enum": ["person", "organization", "location"],
                },
            },
            "required": ["name", "entity_type"],
        },
    },
)

inputs = [
    "Kamala Harris pledges 'new way forward' in historic convention speech"
]
for inpt in inputs:
    ner_res = llm_sketch.generate(model, tokenizer, inpt, my_ner_task, strict=True)
    print(ner_res)
    # [{'name': 'Kamala Harris', 'entity_type': 'person'}]
§ SKETCH-8B FINE-TUNING APPROACH
We fine-tune LLaMA3-8B-Instruct to enhance the model's capability to generate structured data that adheres to JSON schema constraints across a variety of tasks. Our training process emphasizes two key aspects: ensuring strict adherence to the specified JSON schema constraints in the model's outputs and fostering robust generalization across various tasks. To achieve these goals, we carefully design a specialized fine-tuning dataset.
§.§ Data Preparation
The capability of the model to adhere to formats and its ability to understand and tackle tasks are distinct attributes. To enhance these aspects, we have constructed two targeted datasets: NLP task data and schema following data. The primary objective of NLP task data is to enable models to learn how to tackle NLP tasks. However, considering the limitations in output format diversity of manually curated fine-tuning data for NLP tasks, we propose the automated construction of schema following data to enhance the model's adherence to the output format schema.
NLP Task Data.
We assemble a comprehensive collection of over 20 datasets, encompassing more than ten subcategories within three primary domains: text classification, text generation, and information extraction. Through the meticulous design of output formats for each dataset, we construct a task instance set of size 53. Among them, 37 task instances are dedicated to training, while the remainder are reserved for evaluation.
Schema Following Data.
To ensure the diversity of JSON schemas, we generated 10,000 JSON schemas with widths and depths of at most 5 using a random schema generation method. Then, we utilized LLaMA3-8B-Instruct, under the constraint of a decoding control tool, to generate JSON instances that conform to the schemas.
Following the patterns of NLP task data, we designed a task that involves selecting values from a randomly generated list of given values to construct JSON objects that match specific schemas.
Finally, we constructed 20,000 pieces of fine-tuning data for this task by modifying the values in the JSON instances generated by LLaMA3-8B-Instruct.
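A sketch of a bounded-depth random schema generator consistent with this description is given below (our own illustrative implementation; the authors' actual generator is not specified beyond the depth and width bounds):

import random
import string

def random_schema(depth=1, max_depth=5, max_width=5):
    # Emit a leaf type at the depth limit (or with some probability), else nest further.
    if depth >= max_depth or random.random() < 0.3:
        return {"type": random.choice(["string", "integer", "number", "boolean"])}
    if random.random() < 0.5:
        props = {
            "".join(random.choices(string.ascii_lowercase, k=5)):
                random_schema(depth + 1, max_depth, max_width)
            for _ in range(random.randint(1, max_width))
        }
        return {"type": "object", "properties": props, "required": list(props)}
    return {"type": "array", "items": random_schema(depth + 1, max_depth, max_width)}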
§.§ Fine-tuning Method
Reinforcement learning is one of the popular ways to tune LLMs. LeCun holds the opinion, “I do favor MPC over RL”[<https://x.com/ylecun/status/1827787323108393027>]. We share a similar view, so we adopt simple data-driven supervised fine-tuning. This does not mean reinforcement learning is useless; it could be used in subsequent steps, such as re-ranking candidate outputs. Generating valid outputs that conform to the JSON Schema is not simply a matter of mimicking formats; it necessitates a thorough comprehension of the schema's descriptions. Consequently, data adhering to the schema is essential for enhancing the model's ability.
The training objective of Sketch-8B considers two aspects: enhancing the model's adherence to format and improving its NLP task performance. To this end, we use the proposed mixed dataset comprising NLP task data and schema following data for fine-tuning.
The inclusion of NLP task data markedly boosts the model’s capabilities in handling NLP tasks while the schema following data is crucial for enhancing the model’s adherence to various output format requirements.
We use fine-tuning method to optimize the proposed model, the objective ℒ(θ) could be formatted as:
ℒ(θ) = -∑^m_t=1 logP_θ(ŷ_t=y_t|y_1:t-1,X)
where X = {x_1, x_2, …, x_n} represents an input sequence of length n, which is the constructed prompt. Y = {y_1, y_2, …, y_m} is the label of the generated sequence of length m, and Ŷ = {ŷ_1, ŷ_2, …, ŷ_m} is the actual output of the model. Note that both Y and Ŷ exclude the prompt and consist only of the response. θ denotes the model parameters, and P_θ represents the conditional probability under the parameters θ.
Each sample consisting of X and Y is sampled from a carefully constructed mixed dataset. The optimal fine-tuning effect is achieved by appropriately balancing the ratio of NLP task data to schema-following data in the mixed dataset.
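This objective is the standard next-token cross-entropy restricted to response tokens; a PyTorch sketch is shown below (assuming, as in common SFT pipelines, that prompt positions in the labels are masked with -100):

import torch.nn.functional as F

def sft_loss(logits, labels):
    # logits: (B, T, vocab); labels: (B, T) with prompt tokens set to -100
    # Shift so that position t predicts token t+1, then average over response tokens only.
    logits = logits[:, :-1].contiguous()
    labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
    )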
§ EXPERIMENTS
§.§ Experiment Settings
In this section, we validate the model's generalization capabilities through experiments and discuss the effectiveness and optimal configuration of our fine-tuning data.
Experiment Data Settings.
We use publicly available NLP task datasets (See Appendix <ref> for details) for experiments. For each dataset, we carefully construct different task instances, expanding a single dataset into multiple experimental datasets with varying outputFormat and other task-related parameter configurations. To validate different hypotheses, we selectively exclude some data from the training set to create test datasets. These test datasets include three types: (1) the output format not seen in the training set while other output formats from the same dataset are included, (2) the entire dataset is not present in the training set, and (3) the entire tasks are not included in the training set.
Fine-tuning Settings.
We experiment on LLaMA3-8B-Instruct since it has strong foundational capabilities. We fine-tune the model for 8 epochs with a global batch size of 128, setting the learning rate to 1e-6 and weight decay to 0.1. The learning rate is decayed to 0 using a linear schedule. We select the best checkpoint from the model at the end of every epoch.
Evaluation Methods.
To comprehensively evaluate the model's schema adherence and NLP task performance, we assess from two perspectives:
* We define a metric to assess the model's ability to produce outputs that conform to the outputFormat: Legal Output Ratio. First, we determine whether the model's output can be converted into a JSON object; if not, the output is considered invalid. Next, we check if the JSON object meets the outputFormat requirements; otherwise, it is considered invalid. The legal output ratio is calculated by dividing the number of valid outputs by the total number of test samples.
* To evaluate NLP task performance, we employ traditional metrics like F1-score or accuracy, tailored to the specific requirements of each task.
§.§ Comparison with Baselines
To evaluate generalization, we fine-tune -8B-w.o.-ner with a partially removed dataset and benchmark it against mainstream models, including GPT-4o, DeepSeek, and ChatGLM. Using identical prompts across models, we gather results via API and assess performance. We also compare -8B-w.o.-ner with the original LLaMA3-8B-Instruct (local inference). Additionally, we evaluate DeepSeek's one-shot results and GPT-4o's constrained decoding. -8B-w.o.-ner and LLaMA3-8B-Instruct use FSM and CFG constraints for decoding. The comparison covers three dataset types: (1) unknown format, with output formats absent in training data, (2) unknown domain, with datasets from untrained domains, and (3) unknown task, focusing on task types not covered during training. NER is the test task for the Unknown Task category.
Schema Adherence Comparison.
Table <ref> illustrates notable differences in schema adherence among baseline models under unconstrained output conditions. For simpler formats like S10T8 and HOTEL, LLaMA3-8B-Instruct achieves nearly 100% on legal output ratio but fails completely on 20NEWS. Across most datasets, its legal output ratio ranges from 50% to 75%, averaging 64.9%. In contrast, -8B-w.o.-ner achieves an average legal output ratio of 96.2% under unconstrained conditions, with its lowest performance on CNL03 still at 83.8%. This demonstrates -8B-w.o.-ner's strong generalization in format adherence.
Performance Comparison.
We compare with LLaMA3-8B-Instruct to assess training effectiveness and with mainstream models to evaluate performance level:
1. vs LLaMA3-8B-Instruct. Table <ref> shows that -8B-w.o.-ner consistently outperforms LLaMA3-8B-Instruct under the same decoding strategy, both on individual subsets and in average scores. Furthermore, the unconstrained -8B-w.o.-ner surpasses LLaMA3-8B-Instruct across all decoding strategies. The results indicate that the fine-tuning method enhances NLP task performance and demonstrates strong generalization to unknown output formats and tasks.
2. vs Mainstream Models. Comparing -8B-w.o.-ner with mainstream models like DeepSeek, ChatGLM, and GPT-4o on unknown format datasets, -8B-w.o.-ner significantly outperforms all, achieving nearly 100% legal output ratio on 20NEWS where others struggle (GPT-4o below 50%). On unknown domain datasets, it performs similarly to DeepSeek and GPT-4o but surpasses ChatGLM. However, its smaller model size leads to some limitations on unknown task datasets compared to larger models.
Constrained Decoding Evaluation.
The analysis also reveals that FSM and CFG constraints do not consistently produce the expected outcomes. FSM constraints result in lower task evaluation scores for both -8B and LLaMA3-8B-Instruct. While CFG constraints improve overall average scores, they fail to enhance task evaluation scores on datasets with hard output formats (20NEWS), despite increasing the legal output ratio. This suggests that current constrained decoding methods are not yet consistently reliable for real-world NLP tasks.
§.§ Generalization Capability Analysis
Output Format Generalization Capability.
We first evaluate -8B's generalization capability across different output formats within the same dataset.
As shown in the “Unknown Format” column of Table <ref>, the output formats of the two datasets (S10T8 and 20NEWS) used for evaluation are not in -8B's training set.
We can observe that in S10T8, both LLaMA3-8B-Instruct and -8B achieve high precision (0.997 and 0.982) in adhering to the required output format, which is likely due to the format simplicity.
For 20NEWS, due to the complex format, LLaMA3-8B-Instruct is completely unable to follow the required output format. Surprisingly, despite not being trained in this specific format, -8B shows an impressive ability to follow output format. This demonstrates -8B's generalization ability on unseen formats within the trained dataset.
Domain Generalization Capability.
Further, we evaluate -8B's cross-domain generalization capability within the same task (NER). This is crucial for models' application in various scenarios from various users. We continue to evaluate -8B on two tasks: aspect-level sentiment analysis, and text topic classification. We construct two datasets that are untrained and completely different from the domains of -8B's training datasets. The results in Table <ref> column “Unknown Domain” show that -8B significantly outperforms LLaMA3-8B-Instruct on these two datasets (domains), both in terms of adherence to formatting requirements and NLP task F1/Accuracy. It is important to note that -8B has never encountered data from these three domains during training.
This illustrates a fact: -8B is capable of enhancing its performance across different domains within a task by training on data associated with specific domains (or taxonomies).
Task Generalization Capability.
Ultimately, we evaluate the ability to generalize across tasks. This ability is known as the most formidable aspect of generalization. While we can endeavour to build an extensive array of NLP task categories, the spectrum of potential tasks is infinite. As such, LLM users across a myriad of sectors undoubtedly desire a model that can extend its reach to cover their unique and unconventional task needs. This is why we present the evaluation results of -8B-w.o.-ner in Table <ref>. We completely exclude the NER datasets from the training set and evaluate how the output format following capabilities of -8B-w.o.-ner improve on NER tasks.
Remarkably, -8B-w.o.-ner demonstrates significant improvement in the two NER datasets, with L.O.R. increasing from 0.520 to 0.939 and from 0.645 to 0.968, respectively.
Consequently, it can be concluded that for an unfamiliar NLP task, -8B is likely a superior choice compared to LLaMA3-8B-Instruct, even though it has not been trained on such tasks.
§.§ Data Configuration Experiment
Fine-tuning data is central to this work. We analyze how data proportion and scale affect model performance. The evaluation focuses on the model's results on a test set with seven tasks: three with unseen output formats, three from unseen domains, and one entirely new task.
Data Proportion.
Just as the sampling proportions of pretraining corpora affect the performance of foundation models, the proportion of schema following data in our mixture influences task performance: too large a proportion leads to a decline.
To assess the effectiveness and configuration of NLP task data and schema following data, we conduct experiments using a fixed 20k dataset with various proportions, including a setup without schema following data. Performance is evaluated on the test set, with results shown in Table <ref>.
From the table, we observe that the schema following data proportion is positively correlated with the legal output ratio. Schema following data significantly enhances the model's ability to follow output formats. However, when the schema following data proportion exceeds 25%, performance on the test set declines from 0.688 to 0.655, indicating that excessive schema following data negatively impacts task performance. Therefore, we determine that a 7:1 ratio of Task Data to schema following data is optimal.
Data Volume.
We conduct experiments to analyze the impact of data size on results. Four fine-tuning datasets with 10k, 20k, 30k, and 40k samples (with a 7:1 ratio of Task Data to schema following data) are used to train the model, which are then evaluated on the same test set.
As shown in Table <ref>, the best legal output ratio 0.697 is achieved with the 30k dataset. Increasing the data size to 40k leads to a noticeable decline in both performance and legal output ratio. This suggests that more fine-tuning data does not always yield better results. We ultimately select the 30k dataset for training the -8B model.
§ RELATED WORK
Significant advancements have been made in the realm of format-constrained generation for LLMs. We roughly divide these methods into three categories: pre-generation tuning, in-generation control, and post-generation parsing.
Pre-generation Tuning.
Pre-generation tuning encompasses a suite of techniques designed to fine-tune the behaviour of LLMs before the actual text-generation process begins. This approach involves modifying the model's training data<cit.> or prompts<cit.> to align more closely with the specific format constraints required by the task at hand.
In-generation Control.
There are numerous frameworks dedicated to intervening in the decoding process of LLMs to control the permissible range of the next token, ensuring that the output of the LLM meets the format requirements. The predominant control strategies employed include JSON Schema (Jsonformer[<https://github.com/1rgs/jsonformer>], lm-format-enforcer and outlines), regular expression (guidance, lm-format-enforcer
and outlines) and context-free grammar
(llama.cpp). Although these methods typically ensure high accuracy in response format, they often lead to a decrease in the usefulness of the responses<cit.>, which is one of the starting points for the work presented in this paper.
Post-generation Parsing.
This category involves techniques that parse the output of LLMs after generation to ensure it conforms to specific formats. These methods often rely on post-processing algorithms to refine the raw output into a structured format. Guardrails[<https://github.com/guardrails-ai/guardrails>] is a framework of this kind, designed to enforce constraints on the output of LLMs by filtering or modifying the generated text to ensure it adheres to predefined guidelines or specifications.
§ CONCLUSIONS AND FUTURE WORK
In this work, we propose to simplify and optimize the applications of LLMs. Using a schema-based approach, our framework can tackle the challenges in structured output generation and model generalization. Key contributions include the schema architecture for task description, data construction and model fine-tuning for improved performance, and the integration of a constrained decoding framework for precise output management.
Experimental results not only demonstrate the enhanced capability of the fine-tuned -8B in adhering to output formats but also validate the effectiveness of the fine-tuning data we build, particularly the schema following data.
Future work involves expanding task categories, optimizing model performance, lowering entry barriers, and exploring new applications in diverse domains. The framework's innovative approach and ongoing development promise to drive advancements in LLM applications and unlock new possibilities for harnessing the power of LLMs.
§ ACKNOWLEDGMENTS
This work is supported by the National Science and Technology Major Project (No. 2022ZD0116300) and the National Science Foundation of China (No. 62106249).
gpt3
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Language models are few-shot learners.
Advances in neural information processing systems, 33:1877–1901, 2020.
DBLP:conf/nips/BrownMRSKDNSSAA20
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
Language models are few-shot learners.
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
DBLP:journals/corr/abs-2003-04807
Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic.
Efficient intent detection with dual sentence encoders.
CoRR, abs/2003.04807, 2020.
rte1
Ido Dagan, Oren Glickman, and Bernardo Magnini.
The PASCAL recognising textual entailment challenge.
In Joaquin Quiñonero Candela, Ido Dagan, Bernardo Magnini, and Florence d'Alché-Buc, editors, Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, volume 3944 of Lecture Notes in Computer Science, pages 177–190. Springer, 2005.
Llama3
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Grégoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo
Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al.
The llama 3 herd of models.
CoRR, abs/2407.21783, 2024.
DBLP:conf/acl/FitzGeraldHPMRS23
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gökhan Tür, and Prem Natarajan.
MASSIVE: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages.
In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 4277–4302. Association for Computational Linguistics, 2023.
DBLP:conf/sigsoft/GranoSMVCP17
Giovanni Grano, Andrea Di Sorbo, Francesco Mercaldo, Corrado Aaron Visaggio, Gerardo Canfora, and Sebastiano Panichella.
Android apps and user feedback: a dataset for software evolution and quality improvement.
In Federica Sarro, Emad Shihab, Meiyappan Nagappan, Marie Christin Platenius, and Daniel Kaimann, editors, Proceedings of the 2nd ACM SIGSOFT International Workshop on App Market Analytics, WAMA@ESEC/SIGSOFT FSE 2017, Paderborn, Germany, September 5, 2017, pages 8–11. ACM, 2017.
DBLP:conf/icml/GreeneC06
Derek Greene and Padraig Cunningham.
Practical solutions to the problem of diagonal dominance in kernel document clustering.
In William W. Cohen and Andrew W. Moore, editors, Machine Learning, Proceedings of the Twenty-Third International Conference (ICML 2006), Pittsburgh, Pennsylvania, USA, June 25-29, 2006, volume 148 of ACM International Conference Proceeding Series, pages 377–384. ACM, 2006.
DBLP:journals/corr/abs-1911-10422
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz.
Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals.
CoRR, abs/1911.10422, 2019.
LANG1995331
Ken Lang.
Newsweeder: Learning to filter netnews.
In Armand Prieditis and Stuart Russell, editors, Machine Learning Proceedings 1995, pages 331–339. Morgan Kaufmann, San Francisco (CA), 1995.
freelm
Xiang Li, Xin Jiang, Xuying Meng, Aixin Sun, and Yequan Wang.
Freelm: Fine-tuning-free language model.
CoRR, abs/2305.01616, 2023.
flm101b
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, et al.
Flm-101b: An open llm and how to train it with $100 k budget.
arXiv preprint arXiv:2309.03852, 2023.
teleflm
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, and Tiejun Huang.
Tele-flm technical report.
CoRR, abs/2404.16645, 2024.
DBLP:conf/nlpcc/LiLPCPWLZ20
Xinyu Li, Fayuan Li, Lu Pan, Yuguang Chen, Weihua Peng, Quan Wang, Yajuan Lyu, and Yong Zhu.
Duee: A large-scale dataset for chinese event extraction in real-world scenarios.
In Xiaodan Zhu, Min Zhang, Yu Hong, and Ruifang He, editors, Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part II, volume 12431 of Lecture Notes in Computer Science, pages 534–545. Springer, 2020.
maas-EtAl:2011:ACL-HLT2011
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
DBLP:conf/emnlp/NarayanCL18
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization.
In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics, 2018.
GPT-4
OpenAI.
GPT-4 technical report.
CoRR, abs/2303.08774, 2023.
corona
Ashish Patil.
Medical ner.
Kaggle, 2020.
DBLP:conf/semeval/PontikiGPMA15
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos.
Semeval-2015 task 12: Aspect based sentiment analysis.
In Daniel M. Cer, David Jurgens, Preslav Nakov, and Torsten Zesch, editors, Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2015, Denver, Colorado, USA, June 4-5, 2015, pages 486–495. The Association for Computer Linguistics, 2015.
DBLP:conf/semeval/PontikiGPPAM14
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar.
Semeval-2014 task 4: Aspect based sentiment analysis.
In Preslav Nakov and Torsten Zesch, editors, Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 27–35. The Association for Computer Linguistics, 2014.
DBLP:conf/conll/SangM03
Erik F. Tjong Kim Sang and Fien De Meulder.
Introduction to the conll-2003 shared task: Language-independent named entity recognition.
In Walter Daelemans and Miles Osborne, editors, Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. ACL, 2003.
tam2024let
Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, and Yun-Nung Chen.
Let me speak freely? a study on the impact of format restrictions on performance of large language models.
arXiv preprint arXiv:2408.02442, 2024.
DBLP:journals/eswa/TanZ08
Songbo Tan and Jin Zhang.
An empirical study of sentiment analysis for chinese documents.
Expert Syst. Appl., 34(4):2622–2629, 2008.
llama1
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.
Llama: Open and efficient foundation language models.
CoRR, abs/2302.13971, 2023.
llama-2
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov,
and Thomas Scialom.
Llama 2: Open foundation and fine-tuned chat models.
CoRR, abs/2307.09288, 2023.
DBLP:conf/iclr/WangSMHLB19
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman.
GLUE: A multi-task benchmark and analysis platform for natural language understanding.
In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
DBLP:conf/nips/Wei0SBIXCLZ22
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou.
Chain-of-thought prompting elicits reasoning in large language models.
In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
willard2023efficient
Brandon T Willard and Rémi Louf.
Efficient guided generation for llms.
arXiv preprint arXiv:2307.09702, 2023.
yao2024opendomainimplicitformatcontrol
Yiqun Yao, Wenjia Ma, Xuezhi Fang, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Jing Li, Aixin Sun, and Yequan Wang.
Open-domain implicit format control for large language model generation, 2024.
amazon_polarity
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun.
Character-level convolutional networks for text classification.
In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657, 2015.
DBLP:conf/emnlp/ZhangZCAM17
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning.
Position-aware attention and supervised data improve slot filling.
In Martha Palmer, Rebecca Hwa, and Sebastian Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 35–45. Association for Computational Linguistics, 2017.
zhou2023controlledtextgenerationnatural
Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, and Mrinmaya Sachan.
Controlled text generation with natural language instructions.
In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 42602–42613. PMLR, 2023.
DBLP:conf/iclr/Zhou00CP24
Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, and Hoifung Poon.
Universalner: Targeted distillation from large language models for open named entity recognition.
In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
§ SCHEMA EXAMPLES
§ TASK INSTANCE EXAMPLES
§ NLP TASK DATASETS
In this appendix, we provide a detailed description of the datasets utilized in this paper. These datasets are categorized into one of the following task categories:
* Information extraction Information extraction (IE) encompasses the task of discerning and extracting structured information from unstructured and/or semi-structured machine-readable documents. This category includes various sub-tasks such as relation extraction, named entity recognition, event extraction, and aspect-level sentiment analysis. The following datasets are utilized to facilitate these tasks:
* Relation extraction: SemEval-2010 Task 8<cit.>, TACRED<cit.>;
* Named entity recognition: CoNLL-2003<cit.>, UniversalNER<cit.>, Medical NER<cit.>;
* Aspect-level sentiment analysis: SemEval-2014 Task 4<cit.>; SemEval-2015 Task 12<cit.>;
* Event extraction: DuEE<cit.>;
* Text classification Text classification is the task of assigning predefined categories to text documents. It encompasses a wide range of tasks, such as sentiment analysis, topic classification, intent recognition, and sentence similarity. We use the following datasets:
* Sentiment analysis: APP_REVIEWS<cit.>, ChnSentiCorp<cit.>, IMDB<cit.>;
* Topic classification: 20 Newsgroups<cit.>, AG News<cit.>, BBC News<cit.>, DBPedia<cit.>;
* Intent recognition: MASSIVE<cit.>, BANKING77<cit.>;
* Sentence similarity(also known as paraphrase detection): QQP<cit.>;
* Natural language inference: RTE<cit.>;
* Text generation Text generation involves creating text from scratch or completing partial texts based on given prompts. This task is essential for applications such as chatbots, translation, and summarization. We use the following datasets:
* Summarization: xsum<cit.>;
* Translation: Replete-AI/Multi-lingual_Translation_Instruct[<https://huggingface.co./datasets/Replete-AI/Multi-lingual_Translation_Instruct>];
* Dialog: shibing624/sharegpt_gpt4[<https://huggingface.co./datasets/shibing624/sharegpt_gpt4>];
|
http://arxiv.org/abs/2409.03550v1 | 20240905141222 | DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture | [
"Qianlong Xiang",
"Miao Zhang",
"Yuzhang Shang",
"Jianlong Wu",
"Yan Yan",
"Liqiang Nie"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
September 9, 2024
=====================
§ ABSTRACT
Diffusion models (DMs) have demonstrated exceptional generative capabilities across various areas, while they are hindered by slow inference speeds and high computational demands during deployment.
The most common way to accelerate DMs involves reducing the number of denoising steps during generation, achieved through faster sampling solvers or knowledge distillation (KD).
In contrast to prior approaches, we propose a novel method that transfers the capability of large pretrained DMs to faster architectures.
Specifically, we employ KD in a distinct manner to compress DMs by distilling their generative ability into more rapid variants.
Furthermore, considering that the source data for current generative models is often either inaccessible or too large to store, we introduce a new paradigm for their distillation without source data, termed Data-Free Knowledge Distillation for Diffusion Models (DKDM).
Generally, our established DKDM framework comprises two main components: 1) a DKDM objective that uses synthetic denoising data produced by pretrained DMs to optimize faster DMs without source data, and 2) a dynamic iterative distillation method that flexibly organizes the synthesis of denoising data, preventing it from slowing down the optimization process as the generation is slow.
To our knowledge, this is the first attempt at using KD to distill DMs into any architecture in a data-free manner.
Importantly, our DKDM is orthogonal to most existing acceleration methods, such as denoising step reduction, quantization and pruning.
Experiments show that our DKDM is capable of deriving 2× faster DMs with performance remaining on par with the baseline.
Notably, our DKDM enables pretrained DMs to function as “datasets” for training new DMs.
§ INTRODUCTION
The advent of Diffusion Models (DMs) <cit.> heralds a new era in the generative modeling domain, garnering widespread acclaim for their exceptional capability in producing samples of remarkable quality <cit.>.
These models have rapidly ascended to a pivotal role across a spectrum of generative applications, notably in the fields of image, video and audio <cit.>.
However, the generation via those models is significantly slow because the sampling process involves iterative noise estimation over thousands of time steps, which poses a challenge for practical deployment, particularly for consumer devices <cit.>.
To accelerate diffusion models, as illustrated in Figure <ref> (b)&(c), existing methods can be categorized into two pathways: reducing denoising steps and speeding up inference process of denoising networks for each step.
Compared with the standard generation of DMs as shown in Figure <ref> (a), existing methods of the first category <cit.> focus on reducing denoising steps of the lengthy sampling process.
The second category focuses on reducing the inference time of each denoising step, through quantization <cit.>, pruning <cit.>, and so on.
However, these studies often overlook the efficiency of the denoising network architecture, a critical factor in the generation speed of DMs, which can be improved by efficient architecture design, e.g., Neural Architecture Search <cit.>.
Knowledge Distillation (KD) <cit.> is an effective method for transferring the capabilities of large, cumbersome models to smaller ones, either with similar <cit.> or different architectures <cit.>.
The conventional KD methods typically require simultaneous access to the dataset to sample the training data, which is then used to align the student model's behavior with that of the teacher model <cit.>.
However, the vast data requirements of training generative deep neural networks, such as GPT-4 <cit.> and Stable Diffusion,[<https://github.com/runwayml/stable-diffusion>] present significant challenges for traditional KD methods, as they often necessitate direct dataset access, which complicates data storage and accessibility.
Additionally, while weights of such models are frequently released, the corresponding datasets may remain confidential due to privacy concerns.
Recent studies have explored using generative models to eliminate the demand for the source data in KD <cit.>.
Motivated by these studies, in this paper, we explore a novel KD paradigm that distills generative ability of DMs without source data, termed Data-Free Knowledge Distillation for Diffusion Models (DKDM).
The DKDM paradigm hinges on addressing two critical challenges.
The first challenge involves optimizing a student model through the synthetic denoising data, instead of the source data.
The second challenge involves flexibly organizing the synthesis of denoising data, preventing it from becoming the main bottleneck in slowing the optimization process, as generation of DMs is inherently slow.
For the former, the optimization objective used in traditional DMs, as described by <cit.>, is inappropriate due to the absence of the data.
To address this, we have especially designed a DKDM objective that aligns closely with the original DM optimization objective.
For the latter challenge, the most straightforward approach is to utilize the teacher DMs to generate a comprehensive dataset, matching the size of the source dataset employed for training the teacher.
This dataset is then used to train the student model following the standard training algorithm <cit.>.
However, this method becomes impractical for extremely large datasets, like those utilized by models such as Stable Diffusion, due to excessive computational and storage demands.
To overcome this, we introduce a dynamic iterative distillation method that efficiently collects denoising data with varying noise levels, rather than generating real-like samples.
This method significantly reduces computation and storage requirements.
Importantly, our approach is complementary to most previously established methods, as summarized in Figure <ref> (b)&(c).
Our experimental results validate that DKDM is able to derive 2× faster DMs that still generate high-quality samples.
Additionally, our method allows pretrained DMs to act as a “dataset” for training new DMs, thereby reducing the storage demands of further research on DMs.
§ PRELIMINARIES ON DIFFUSION MODELS
In diffusion models <cit.>, a Markov chain is defined to gradually add noise to data, and the model then learns the reverse process to generate data from noise.
Forward Process. Given a sample x^0 ∼ q(x^0) from the data distribution, the forward process iteratively adds Gaussian noise for T diffusion steps[We set T to 1,000 for all our experiments.] with the predefined noise schedule (β_1, …, β_T):
q(x^t | x^t-1) =𝒩(x^t ; √(1-β_t)x^t-1, β_t I),
q(x^1:T | x^0) =∏_t=1^T q(x^t | x^t-1),
until a pure-noise sample x^T ∼𝒩(0, I) is obtained. According to <cit.>,
adding noise t times sequentially to the original sample x^0 to generate a noisy sample x^t can be simplified to a one-step calculation as follows:
q(x^t | x^0)=𝒩(x^t ; √(α̅_t)x^0, (1-α̅_t) I),
x^t=√(α̅_t)x^0+√(1-α̅_t)ϵ,
where α_t:=1-β_t, α̅_t:=∏_s=0^t α_s and ϵ∼𝒩(0, I).
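For illustration, the one-step calculation above can be written as the following sketch (NumPy; the linear β schedule here is only an assumption for the example and is not necessarily the schedule used in our experiments):

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)           # \bar{alpha}_t, the cumulative product of alpha_s

def q_sample(x0, t, eps=None):
    # Draw x^t ~ q(x^t | x^0) in a single step instead of t sequential steps.
    if eps is None:
        eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps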
Reverse Process. The posterior q(x^t-1 | x^t) depends on the data distribution, but it becomes tractable when conditioned on x^0:
q(x^t-1 | x^t, x^0)=𝒩(x^t-1 ; μ̃_t(x^t, x^0), β̃_t I),
where μ̃_t(x^t, x^0) and β̃_t can be calculated by:
β̃_t := (1-α̅_t-1)/(1-α̅_t) β_t,
μ̃_t(x^t, x^0) := (√(α̅_t-1)β_t)/(1-α̅_t) x^0 + (√(α_t)(1-α̅_t-1))/(1-α̅_t) x^t.
Since x^0 is not accessible during generation, a neural network parameterized by θ is used for approximation:
p_θ(x^t-1 | x^t)=𝒩(x^t-1 ; μ_θ(x^t, t), Σ_θ(x^t, t) I).
Optimization. To optimize this network, the variational bound on the negative log-likelihood 𝔼[-log p_θ] is estimated by:
L_vlb=𝔼_x^0,ϵ,t[D_KL(q(x^t-1|x^t,x^0) || p_θ(x^t-1|x^t))].
<cit.> found that predicting ϵ is a more efficient parameterization of μ_θ(x^t,t) in practice, which can be derived from Equation (<ref>) and Equation (<ref>):
μ_θ(x^t, t)=(1/√(α_t))(x^t-(β_t/√(1-α̅_t))ϵ_θ(x^t, t)).
Thus, a reweighted loss function is designed as the objective to optimize L_vlb:
L_simple=𝔼_x^0, ϵ, t[‖ϵ-ϵ_θ(x^t, t)‖^2].
Improvement. In original DDPMs, L_simple offers no signal for learning Σ_θ(x^t,t), and <cit.> fixed it to β_t or β̃_t. <cit.> found this to be sub-optimal and proposed to parameterize Σ_θ(x^t, t) through a network output v that is interpolated as:
Σ_θ(x^t, t)=exp(v logβ_t+(1-v) logβ̃_t).
To optimize Σ_θ(x^t, t), <cit.> use L_vlb, in which a stop-gradient is applied to μ_θ(x^t,t) because it is optimized by L_simple. The final hybrid objective is defined as:
L_hybrid=L_simple + λ L_vlb,
where λ balances the two objectives.[We set λ to 1 for all our experiments.] Guided by (<ref>), the training and sampling procedures are shown in Algorithm <ref> and Algorithm <ref> in Appendix <ref>.
§ DATA-FREE KNOWLEDGE DISTILLATION FOR DIFFUSION MODELS
In this section, we introduce a novel paradigm, termed Data-Free Knowledge Distillation for Diffusion Models (DKDM).
Section <ref> details the DKDM paradigm, focusing on two principal challenges: the formulation of the optimization objective and the acquisition of denoising data for distillation.
Section <ref> describes our proposed optimization objective tailored for DKDM.
Section <ref> details our proposed method for effective collection of denoising data.
§.§ DKDM Paradigm
The DKDM paradigm represents a novel data-free KD approach for DMs.
Unlike conventional KD methods, DKDM aims to leverage KD to transfer the generative capabilities of DMs to models with any architecture, while eliminating the need for access to large or proprietary datasets.
This approach poses two primary challenges: 1) optimizing DMs through synthetic denoising data instead of source data, and 2) devising methods to flexibly collect denoising data for KD as the generation is slow.
[Figure: Illustration of the DKDM paradigm. (a): standard training of DMs. (b): an intuitive baseline. (c): our proposed framework.]
In standard training of DMs, as depicted in Figure <ref> (a),
a training sample x^0 ∼𝒟 is selected along with a timestep t ∼[1, 1000] and random noise ϵ∼𝒩(0, I).
The input x^t is computed using Equation (<ref>), and the denoising network is optimized according to Equation (<ref>) to generate outputs close to ϵ.
However, without dataset access, DKDM cannot obtain training data (x^t, t, ϵ) to employ this standard method.
A straightforward approach for DKDM, termed the intuitive baseline and depicted in Figure <ref> (b), involves using DMs pretrained on 𝒟 to generate a synthetic dataset 𝒟^',[The number of samples in the synthetic dataset 𝒟^' is equal to those in original dataset 𝒟] which is then used to train new DMs with varying architectures.
Despite its simplicity, creating 𝒟^' is time-intensive and impractical for large datasets.
We propose an effective framework for DKDM paradigm, outlined in Figure <ref> (c), which incorporates a DKDM Objective (described in Section <ref>) and a strategy for collecting denoising data ℬ_i during optimization (detailed in Section <ref>).
This framework addresses the challenge of distillation without a source dataset and reduces the costs associated with the intuitive baseline, since the synthetic ℬ_i requires much less computation than 𝒟^'.
§.§ DKDM Objective
Given a dataset 𝒟, the original optimization objective for a diffusion model with parameters θ involves minimizing the KL divergence 𝔼_x^0,ϵ,t[D_KL(q(x^t-1|x^t,x^0) || p_θ(x^t-1|x^t))].
Our proposed DKDM objective encompasses two primary goals: (1) eliminating the diffusion posterior q(x^t-1|x^t,x^0) and (2) removing the diffusion prior x^t ∼ q(x^t | x^0) from the KL divergence, since both depend on samples x^0 from the dataset 𝒟.
Eliminating the diffusion posterior q(^t-1|^t,^0).
In our framework, we introduce a teacher DM with parameters , trained on dataset 𝒟.
This model can generate samples that conform to the learned distribution 𝒟^'.
Optimized with the objective (<ref>), the distribution 𝒟^' within a well-learned teacher model closely matches 𝒟.
Our goal is for a student DM, parameterized by , to replicate 𝒟^' instead of 𝒟^', thereby obviating the need for q during optimization.
Specifically, the pretrained teacher model is optimized via the hybrid objective in Equation (<ref>), which indicates that both the KL divergence D_KL(q(x^t-1|x^t,x^0) || p_(x^t-1|x^t)) and the mean squared error 𝔼_x^t, ϵ, t[‖ϵ-ϵ_(x^t, t)‖^2] are minimized.
Given the similarity in distribution between the teacher model and the dataset, we propose a DKDM objective that optimizes the student model by minimizing D_KL(p_(x^t-1|x^t) || p_(x^t-1|x^t)) and 𝔼_x^t[‖ϵ_(x^t, t)-ϵ_(x^t, t)‖^2].
Indirectly, the DKDM objective facilitates the minimization of D_KL(q(x^t-1|x^t,x^0) || p_(x^t-1|x^t)) and 𝔼_x^0, ϵ, t[‖ϵ-ϵ_(x^t, t)‖^2], despite the inaccessibility of the posterior.
Consequently, we propose the DKDM objective as follows:
L_DKDM=L_simple^'+λ L_vlb^',
where L_simple^' guides the learning of μ_ and L_vlb^' optimizes Σ_, as defined in the following equations:
L_simple^' = 𝔼_x^0,ϵ,t[‖ϵ_(x^t,t)-ϵ_(x^t,t)‖^2],
L_vlb^' = 𝔼_x^0,ϵ,t[D_KL(p_(x^t-1|x^t) || p_(x^t-1|x^t))],
where q(x^t-1|x^t,x^0) is eliminated, whereas the term x^t ∼ q(x^t | x^0) remains to be removed.
Removing the diffusion prior q(x^t | x^0).
Considering the generative ability of the teacher model, we utilize it to generate x^t as a substitute for x^t ∼ q(x^t | x^0).
We define a reverse diffusion step x^t-1 ∼ p_(x^t-1|x^t) through the equation x^t-1=g_(x^t, t).
Next, we represent a sequence of t reverse diffusion steps starting from T as G_(t).
Note that G_(0)=ϵ where ϵ∼𝒩(0, I).
For instance, G_(2) yields x^T-2=g_(g_(ϵ, T), T-1).
Consequently, x^t is obtained by x^t=G_(T-t), and the objectives L_simple^' and L_vlb^' are reformulated as follows:
L_simple^' = 𝔼_x^t, t[‖ϵ_(x^t,t)-ϵ_(x^t,t)‖^2],
L_vlb^' = 𝔼_x^t,t[D_KL(p_(x^t-1|x^t) || p_(x^t-1|x^t))].
By this formulation, the need for x^0 in L_DKDM is removed by naturally leveraging the generative ability of the teacher model.
Optimized by the proposed L_DKDM, the student progressively learns the entire reverse diffusion process from the teacher model without reliance on the source datasets.
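A simplified sketch of how this objective can be computed is given below (PyTorch-style; for brevity only the noise-matching term L_simple^' is written out, and the teacher and student are assumed to return their predicted noise given (x^t, t)):

import torch
import torch.nn.functional as F

def dkdm_simple_loss(student, teacher, x_t, t):
    # L'_simple: match the student's noise prediction to the frozen teacher's.
    # (L'_vlb, not shown, is the KL divergence between the Gaussians
    #  predicted by the teacher and the student for x^{t-1}, analogous to L_vlb.)
    with torch.no_grad():
        eps_teacher = teacher(x_t, t)
    eps_student = student(x_t, t)
    return F.mse_loss(eps_student, eps_teacher)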
However, the removal of the diffusion posterior and prior in the DKDM objective introduces a significant bottleneck, resulting in notably slow learning rates.
As depicted in Figure <ref> (a), standard training of DMs enables straightforward acquisition of noisy samples x_i^t_i at an arbitrary diffusion step t_i ∼ [1,T] using Equation (<ref>).
These samples are compiled into a training data batch ℬ_j={x_i^t_i}, with j representing the training iteration.
Conversely, our DKDM objective requires obtaining a noisy sample x_i^t_i=G_(T-t_i) through T-t_i denoising steps.
Consequently, considering the denoising steps as the primary computational expense, the worst-case time complexity of assembling a denoising data batch _j={x_i^t_i} for distillation is 𝒪(Tb), where b denotes the batch size.
This complexity significantly hinders the optimization process.
To address this issue, we introduce a method called dynamic iterative distillation, detailed in Section <ref>.
§.§ Efficient Collection of Denoising Data
In this section, we present our efficient strategy for gathering denoising data for distillation, illustrated in Figure <ref>.
We begin by introducing a basic iterative distillation method that allows the student model to learn from the teacher model at each denoising step, instead of requiring the teacher to denoise multiple times within every training iteration to create a batch of noisy samples for the student to learn once.
Subsequently, to enhance the diversity of noise levels within the batch samples, we develop an advanced method termed shuffled iterative distillation, which allows the student to learn denoising patterns across varying time steps.
Lastly, we refine our approach to dynamic iterative distillation, significantly augmenting the diversity of data in the denoising batch. This adaptation ensures that the student model acquires knowledge from a broader array of samples over time, avoiding repetitive learning from identical samples.
Iterative Distillation.
We introduce a method called iterative distillation, which closely aligns the optimization process with the generation procedure.
In this approach, the teacher model consistently denoises, while the student model continuously learns from this denoising.
Each output from the teacher's denoising step is incorporated into some batch for optimization, ensuring the student model learns from every output.
Specifically, during each training iteration, the teacher performs g_(x^t,t), which is a single-step denoising, instead of G_(t), which would involve t-step denoising.
Initially, a batch _1={x_i^T} is formed from a set of sampled noises x_i^T∼𝒩(0,I).
After one step of distillation, the batch _2={x_i^T-1} is used for training.
This process is iterated until _T={x_i^1} is reached, indicating that the batch has nearly become real samples with no noise.
The cycle then restarts with the resampling of noise to form a new batch _T+1={x_i^T}.
This method allows the teacher model to provide an endless stream of data for distillation.
To further improve the diversity of the synthetic batch _j={x_i^t_i}, we investigate from the perspectives of the noise level t_i and the sample x_i.
Shuffled Iterative Distillation.
Unlike standard training, the t values in an iterative distillation batch remain the same and do not follow a uniform distribution, resulting in significant instability during distillation.
To mitigate this issue, we integrated a method termed shuffle denoise into our iterative distillation.
Initially, a batch _0^s={x_i^T} is sampled from a Gaussian distribution.
Subsequently, each sample undergoes a random number of denoising steps, resulting in _1^s={x_i^t_i}, with t_i following a uniform distribution.
This batch, _1^s, initiates the iterative distillation process.
By ensuring diversity in the t_i values within the batch, this method balances the impact of different t values during distillation.
Dynamic Iterative Distillation.
There is a notable distinction between standard training and iterative distillation regarding the flexibility in batch composition.
Consider two samples, _1 and _2, within a batch without differentiating their noise level.
During standard training, the pairing of _1 and _2 is entirely random.
Conversely, in iterative distillation, batches containing _1 almost always include _2.
This departure from the principle of independent and identically distributed samples in a batch can potentially diminish the model's generalization ability.
To better align the distribution of the denoising data with that of the standard training batch, we propose a method named dynamic iterative distillation.
As shown in Figure <ref>
, this method employs shuffle denoise to construct an enlarged batch set _1^+={x_i^t_i}, whose size is |_j^+|=ρ T|_j^s|, where ρ is a scaling factor.
During distillation, a subset _j^s is sampled from _j^+ through random selection for optimization.
The one-step denoised samples replace their counterparts in _j+1^+.
This method only has a time complexity of 𝒪(b) and significantly improves distillation performance.
The final DKDM objective is defined as:
L_DKDM^⋆=L_simple^⋆+λ L_vlb^⋆,
L_simple^⋆ = 𝔼_(x^t,t) ∼^+[‖ϵ_(x^t,t)-ϵ_(x^t,t)‖^2],
L_vlb^⋆ = 𝔼_(x^t,t) ∼^+[D_KL(p_(x^t-1|x^t) || p_(x^t-1|x^t))],
where x^t and t are produced by our proposed dynamic iterative distillation. The complete algorithm is detailed in Algorithm <ref>.
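To make the procedure concrete, the following sketch mirrors the flavor of Algorithm <ref>; the helper functions shuffle_denoise and one_step_denoise, the loss dkdm_simple_loss (sketched earlier), and all hyperparameter values are assumptions for illustration rather than our exact implementation:

import torch

def dynamic_iterative_distillation(teacher, student, opt, num_iters,
                                   b=128, rho=0.4, T=1000, shape=(3, 32, 32)):
    # Build the enlarged buffer B^+ of size rho*T*b with shuffle-denoised samples x_i^{t_i}.
    buf_x, buf_t = shuffle_denoise(teacher, int(rho * T * b), T, shape)   # hypothetical helper
    for _ in range(num_iters):
        idx = torch.randperm(buf_x.size(0))[:b]           # randomly drawn sub-batch B^s
        x_t, t = buf_x[idx], buf_t[idx]
        loss = dkdm_simple_loss(student, teacher, x_t, t) # DKDM objective (L^*_vlb omitted)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                             # teacher denoises the sub-batch once
            x_prev = one_step_denoise(teacher, x_t, t)    # hypothetical helper
        done = (t == 1)                                   # fully denoised samples restart from noise
        x_prev[done] = torch.randn_like(x_prev[done])
        buf_x[idx] = x_prev                               # write the denoised samples back into B^+
        buf_t[idx] = torch.where(done, torch.full_like(t, T), t - 1)
    return student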
§ EXPERIMENTS
This section details extensive experiments that demonstrate the effectiveness of our proposed DKDM.
In Section <ref>, we establish appropriate metrics and baselines for evaluation.
Section <ref> compares the performance of baselines and our DKDM with different architectures.
Additionally, we show that our DKDM can be combined with other methods to accelerate DMs.
Finally, Section <ref> describes an ablation study that validates the effectiveness of our proposed dynamic iterative distillation.
§.§ Experiment Setting
Datasets and teacher diffusion models.
Our DKDM paradigm inherently eliminates the necessity for datasets.
However, the pretrained teacher models employed are trained on specific datasets.
We utilize three distinct pretrained DMs as teacher models, following the configurations introduced by <cit.>; these models are all based on a convolutional architecture.
These models have been pretrained on CIFAR10 at a resolution of 32 × 32 <cit.>, CelebA at 64 × 64 <cit.> and ImageNet at 32 × 32 <cit.>.
Metrics.
The distance between the generated samples
and the reference samples can be estimated by the Fréchet Inception Distance (FID) score <cit.>.
In our experiments, we utilize the FID score as the primary metric for evaluation.
Additionally, we report sFID <cit.>, Inception Score (IS) <cit.> as secondary metrics.
Following previous work <cit.>, we generate 50K samples for DMs and we use the full training set in the corresponding dataset to compute the metrics.
Unless otherwise stated, all samples are generated with 50 Improved DDPM sampling steps <cit.>, and speed is measured as the average time taken to generate 256 images on a single NVIDIA A100 GPU.
All of our metrics are calculated by ADM TensorFlow evaluation suite <cit.>.
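For reference, given Inception features of the generated and reference samples, the FID reduces to a closed-form Fréchet distance between two fitted Gaussians; a minimal sketch (assuming the 2048-dimensional Inception-V3 pooling features have already been extracted, and not the exact implementation of the ADM suite) is:

import numpy as np
from scipy import linalg

def fid(feats_gen, feats_ref):
    # Fit Gaussians to the two feature sets and compute the Frechet distance.
    mu1, sigma1 = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
    mu2, sigma2 = feats_ref.mean(axis=0), np.cov(feats_ref, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real                    # discard tiny imaginary parts from numerical error
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))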
Baseline.
As the DKDM is a new paradigm proposed in this paper, previous methods are not suitable to serve as baselines.
Therefore, in the data-free scenario, we take the intuitive baseline depicted in Figure <ref> (a) as the baseline.
Specifically, the teacher model consumes a lot of time to generate a substantial number of high-quality samples, equivalent in quantity to the source dataset, through 1,000 DDPM denoising steps.
These samples then serve as the synthetic dataset (𝒟^') for the training of randomly initialized student models, following the standard training (Algorithm <ref> in Appendix <ref>).
We use the performance obtained from this method as our baseline for comparative analysis.
§.§ Main Results
Effectiveness.
Table <ref> shows the performance comparison of our DKDM and baselines.
Our DKDM consistently outperforms the baselines, demonstrating superior generative quality when maintaining identical architectures for the derived DMs.
This performance validates the efficacy of our proposed DKDM objective and dynamic iterative distillation approach.
The improvement over baselines is attributed to the complexity of the reverse diffusion process, which baselines struggle to learn, whereas the knowledge from pretrained teacher models is easier to learn, highlighting the advantage of our DKDM.
Additionally, DKDM facilitates the distillation of generative capabilities into faster and more compact models, as evidenced by the 2 × faster architectures evaluated.
The parameter count and generative speed are detailed in Table <ref>, with further information on hyperparameters and network architecture available in Appendix <ref>.
Appendix <ref> includes some exemplary generated samples, illustrating that the student DMs, derived through DKDM, are capable of producing high-quality images.
Nevertheless, a limitation noted in Table <ref> is that the performance of these student DMs falls behind their teacher counterparts, which will be discussed further in Section <ref>.
We further evaluated the performance of these student models across a diverse range of architectures.
Specifically, we tested five different model sizes by directly specifying the architecture, bypassing complex methods like neural architecture search.
Both the teacher and student models employ Convolutional Neural Networks (CNNs), and the results are shown in Figure <ref>.
Detailed descriptions of these architectures are available in Appendix <ref>.
Typically, we distilled a 14M model from a 57M teacher model, maintaining competitive performance and doubling the generation speed.
Additionally, the 44M and 33M student models demonstrated similar speeds, suggesting that DKDM could benefit from integration with efficient architectural design techniques to further enhance the speed and quality of DMs.
This aspect, however, is beyond our current scope and is designated for future research.
Cross-Architecture Distillation.
Our DKDM transcends specific model architectures, enabling the distillation of generative capabilities from CNN-based DMs to Vision Transformers (ViT) and vice versa.
We utilized DiT <cit.> for ViT-based DMs to further affirm the superiority of our approach.
Detailed structural descriptions are available in Appendix <ref>.
For experimental purposes, we pretrained a small ViT-based DM to serve as the teacher.
As shown in Table <ref>, DKDM effectively facilitates cross-architecture distillation, yielding superior performance compared to baselines.
Additionally, our results suggest that CNNs are more effective as compressed DMs than ViTs.
Combination with Orthogonal Methods.
The DMs derived by our DKDM are compatible with various orthogonal methods, such as denoising step reduction, quantization, pruning and so on.
Here we conducted experiments to integrate the DDIM method, which reduces denoising steps, as illustrated in Figure <ref>.
This integration demonstrates that DDIM can further accelerate our derived student models, although with some performance trade-offs.
§.§ Ablation Study
To validate our designed DKDM paradigm, we tested the FID scores of our progressively designed methods, including iterative, shuffled iterative, and dynamic iterative distillation, over 200K training iterations.
For the dynamic iterative distillation, the parameter ρ was set to 0.4.
The results, shown in Figure <ref>, demonstrate that our dynamic iterative distillation strategy not only converges more rapidly but also delivers superior performance.
The convergence curve for our method closely matches that of the baseline, which confirms the effectiveness of the DKDM objective in alignment with the standard optimization objective (<ref>).
Further experiments explored the effects of varying ρ on the performance of dynamic iterative distillation.
As illustrated in Figure <ref>, higher ρ values enhance the distillation process up to a point, beyond which performance gains diminish.
This outcome supports our hypothesis that dynamic iterative distillation enhances batch construction flexibility, thereby improving distillation efficiency.
Beyond a certain level of flexibility, increasing ρ does not significantly benefit the distillation process. Further stability analysis of our dynamic iterative distillation is available in Appendix <ref>.
§ DISCUSSION AND FUTURE WORK
The primary concept of our proposed DKDM paradigm is illustrated in Figure <ref>.
In this paradigm, the teacher DM is trained on a real dataset 𝒟 and generates samples that follow the learned distribution 𝒟^'.
There are two critical relationships between 𝒟 and 𝒟^'.
First, the distribution 𝒟^' of a well-trained teacher closely approximates 𝒟.
Second, the FID scores, when computed using 𝒟 as a reference, correlate with those using 𝒟^', as demonstrated by the linear fitting in Figure <ref>.
This correlation underpins the effectiveness of the DKDM paradigm.
By transferring the distribution 𝒟^' from the teacher DM to a lighter student DM, DKDM enables the student to generate data whose distribution closely approximates 𝒟.
However, in practice, there is invariably some discrepancy between 𝒟 and 𝒟^', limiting the performance of the student model.
We report these scores, denoted as FID^' and sFID^', calculated over the distribution 𝒟^' instead of 𝒟 in Table <ref>.
The results indicate that the FID^' and sFID^' scores of the student closely mirror those of the teacher, suggesting effective optimization.
Nevertheless, these scores are inferior to those of the teacher, primarily due to the gap between 𝒟 and 𝒟^'.
A potential solution to enhance DKDM involves improving the generative capabilities of the teacher, which we leave as a direction for future work.
§ CONCLUSIONS
In this paper, we introduce DKDM, a novel paradigm designed to efficiently distill the generative abilities of pretrained diffusion models into more compact and faster models with any architecture.
Our experiments demonstrate the effectiveness of DKDM across three datasets, showcasing its ability to compress models to various sizes and architectures.
A key advantage of our method is its ability to perform efficient distillation without direct data access, significantly facilitating ongoing research and development in the field.
Moreover, our DKDM is compatible with most existing methods for accelerating diffusion models and can be integrated with them.
plainnat
§ TRAINING AND SAMPLING OF DDPMS
In this section, we present the algorithms for training and sampling from standard DDPMs. The specific details have been previously introduced in Section <ref>.
§ HYPERPARAMETERS
§.§ Hyperparameters for DKDM
For experiments of both DKDM and baseline, we use the hyperparameters specified by <cit.>, which are also in line with those adopted by <cit.>, as reported in Table <ref>.
The settings of our training process are basically the same as in <cit.> and <cit.>, including mixed-precision training, EMA, and so on. All models are trained on 8 NVIDIA A100 GPUs (with 40GB memory).
§.§ Hyperparameters for Model Compression
Table <ref> shows the architectures of different students trained in Section <ref> for model compression.
§.§ Hyperparameters for Cross-Architecture Distillation
Similar to the configuration used by <cit.>, the hyperparameters employed for the ViT-based diffusion model in Section <ref> are presented in Table <ref>. This configuration was chosen because ViT-based diffusion models generally require more time for image generation than their CNN-based counterparts. To illustrate this point, when generating 2,500 images on a single A100 40GB GPU with 50 Improved DDPM steps, a 57M CNN diffusion model takes approximately 57 seconds, whereas a 19M ViT diffusion model requires 66 seconds. Our final choice of this configuration was driven by the aim of achieving fairness in our experimentation and analysis.
§ EXEMPLARY GENERATED SAMPLES
In this section, we present exemplary samples generated by our derived 2×-speed and 1/4-size student DMs.
§ ANALYSIS: RANDOM DISCARD
During our exploration, we discovered that the Random Discard technique is a straightforward yet highly effective way to enhance the distillation process. The idea is to randomly eliminate some batches of noisy samples generated by the teacher model during iterative distillation.
For instance, during the initial five training iterations of iterative distillation, batches ℬ_1, ℬ_3, and ℬ_4 may be discarded, while ℬ_2 and ℬ_5 are utilized for the student's learning.
We present an analysis of the impact of random discarding in our devised methodologies. Specifically, we introduce the parameter p to denote the probability of discarding certain noisy samples. Subsequently, we apply varying discard probabilities to the iterative distillation, shuffled iterative distillation, and dynamic iterative distillation, and assess their respective performance alterations over a training duration of 200k iterations.
The outcomes are presented in Figure <ref>. It is noteworthy that both iterative distillation and shuffled iterative distillation face limitations in constructing flexible batches, and random discard emerges as a noteworthy remedy that enhances their efficacy. Conversely, for dynamic iterative distillation, once ρ attains a sufficiently large value, random discard no longer confers additional advantages. This observation underscores the inherent stability of our dynamic iterative distillation method, and we therefore omitted random discard from the final implementation; doing so also avoids its inefficiency of requiring the teacher model to prepare a larger number of noisy samples.
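For clarity, the following minimal Python sketch illustrates the random discard idea under discussion; the teacher, student, and loss objects are placeholders rather than the actual implementation.

    import random

    def distill_with_random_discard(teacher, student, optimizer, num_iters, p):
        """Iterative distillation in which each teacher batch is discarded with probability p."""
        for _ in range(num_iters):
            batch = teacher.generate_noisy_batch()    # placeholder: teacher prepares a batch of noisy samples
            if random.random() < p:
                continue                              # discard this batch entirely
            loss = student.distillation_loss(batch)   # placeholder: student learns to match the teacher's denoising
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()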
§ SOCIAL IMPACT
As an effective method for compressing and accelerating diffusion models, we believe that DKDM has the potential to significantly reduce the deployment costs associated with these models, thereby facilitating more widespread use of diffusion models for generating desired content.
However, it is imperative to acknowledge that, as generative models, diffusion models, while offering creative applications across various scenarios, may also engender consequences such as the production of dangerous or biased content.
Our DKDM is capable of mimicking the generative capabilities of a wide array of existing diffusion models without accessing the source datasets, which leads to derived models inheriting the flaws and limitations of these pre-existing models.
For instance, if the training data of a pre-trained diffusion model contains sensitive or personal information collected without explicit consent, derived models may still risk leaking this data.
Consequently, the potential societal harms of our approach primarily hinge on the negative impacts brought about by the existing diffusion models themselves.
Addressing how to mitigate the adverse effects inherent in diffusion models remains a critical area of research.
|
http://arxiv.org/abs/2409.02386v1 | 20240904022659 | Dissecting Payload-based Transaction Phishing on Ethereum | [
"Zhuo Chen",
"Yufeng Hu",
"Bowen He",
"Dong Luo",
"Lei Wu",
"Yajin Zhou"
] | cs.CR | [
"cs.CR",
"cs.SE"
] |
Dissecting Payload-based Transaction Phishing on Ethereum
Zhuo Chen 1 Corresponding Author.
Zhejiang University
[email protected]
Dong Luo
Zhejiang University
[email protected]
Yufeng Hu
Zhejiang University
[email protected]
Lei Wu 2 2 These authors are also affiliated at Key Laboratory of Blockchain and Cyberspace Governance of Zhejiang Province.
Zhejiang University
[email protected]
Bowen He
Zhejiang University
[email protected]
Yajin Zhou 1 2
Zhejiang University
[email protected]
September 9, 2024
§ ABSTRACT
In recent years, a more advanced form of phishing has arisen on Ethereum, surpassing early-stage, simple transaction phishing. This new form, which we refer to as payload-based transaction phishing (PTXPhish), manipulates smart contract interactions through the execution of malicious payloads to deceive users. PTXPhish has rapidly emerged as a significant threat, leading to incidents that caused losses exceeding $70 million in 2023 reports. Despite its substantial impact, no previous studies have systematically explored PTXPhish.
In this paper, we present the first comprehensive study of PTXPhish on Ethereum.
Firstly, we conduct a long-term data collection and put considerable effort into establishing the first ground-truth dataset, consisting of 5,000 phishing transactions. Based on the dataset, we dissect PTXPhish, categorizing phishing tactics into four primary categories and eleven sub-categories.
Secondly, we propose a rule-based multi-dimensional detection approach to identify PTXPhish, achieving an F1-score of over 99% and processing each block in an average of 390 ms.
Finally, we conducted a large-scale detection spanning 300 days and discovered a total of 130,637 phishing transactions on Ethereum, resulting in losses exceeding $341.9 million. Our in-depth analysis of these phishing transactions yielded valuable and insightful findings.
Scammers consume approximately 13.4 ETH daily, which accounts for 12.5% of the total Ethereum gas, to propagate address poisoning scams. Additionally, our analysis reveals patterns in the cash-out process employed by phishing scammers, and we find that the top five phishing organizations are responsible for 40.7% of all losses.
Furthermore, our work has made significant contributions to mitigating real-world threats.
We have reported 1,726 phishing addresses to the community, accounting for 42.7% of total community contributions during the same period. Additionally, we have sent 2,539 on-chain alert messages, assisting 1,980 victims.
This research serves as a valuable reference in combating the emerging PTXPhish threat and safeguarding users' assets.
§ INTRODUCTION
The rapid growth of decentralized finance (DeFi) on Ethereum has led to a significant rise in phishing scams. As users actively participate in the DeFi ecosystem, engaging in activities such as purchasing tokens like NFTs and conducting transactions on Ethereum, phishing attempts have adapted to specifically target users' crypto assets.
Unlike traditional phishing scams that target users' privacy or financial information <cit.>, Ethereum phishing scams are inherently tied to transactions.
Therefore, we refer to this type of phishing as transaction phishing in this paper.
In the early stages, transaction phishing attempts are relatively straightforward, relying on traditional tactics to deceive users. Ethereum transactions are used as a new means of carrying out these scams, rather than being the primary lure for victims. Scammers may initiate transfer transactions through websites to steal victims' crypto assets <cit.>, or entice victims to purchase fake assets <cit.> via websites or crypto wallets. Various mitigation proposals have been suggested to address such threats, such as the detection of phishing websites to limit their spread <cit.>, and the prediction of address risk scores based on fund flow relationships <cit.>.
However, with the continuous evolution of phishing tactics, more sophisticated scams are emerging that exploit complex on-chain semantics.
These sophisticated scams involve scammers crafting transactions or messages [The signed messages are initially dispatched to the scammer, who subsequently broadcasts them to the blockchain.] that manipulate smart contract interactions through the execution of malicious payloads to deceive users.
These payloads can either be embedded within the malicious smart contracts deployed by the scammers or executed by benign smart contracts used as the executor.
In this paper, we refer to these scams as Payload-based Transaction Phishing (PTXPhish).
Table <ref> summarizes the differences between the aforementioned simple transaction phishing and PTXPhish, with a further categorization of PTXPhish discussed in Section <ref>.
Figure <ref> provides an example of a malicious payload executed by a benign smart contract.
The scammer manipulates the semantics of the Blur [<Blur.io> is one of the top NFT marketplaces on Ethereum.] order transactions to deceive the victim. The intricate transaction semantics make it difficult for users to understand the role of each parameter in the calldata. Consequently, victims, especially those lacking domain knowledge, may perceive that they are engaging in transactions with a reputable NFT market, while remaining unaware of the concealed malicious behavior within the transaction's parameters (, fees in Figure <ref>). This lack of awareness leads victims to place blind trust in the scammer, ultimately allowing the scammer to successfully appropriate the victim's NFT without making any payment. Additionally, the propagation process and tricks are detailed in Section <ref>.
PTXPhish has become increasingly prevalent in the past two years. For example, a significant number of incidents were reported from November 2022 to July 2023 <cit.>, resulting in cumulative financial losses exceeding $70 million. One particular incident stands out, causing a loss of $24 million and ranking among the top ten blockchain attack incidents of 2023 <cit.>. Unfortunately, existing countermeasures have not effectively addressed PTXPhish, as it exploits transaction semantics in carrying out its scams. Consequently, there is an urgent need for an effective detection method to combat PTXPhish.
Unfortunately, despite the significant threat posed by this emerging type of phishing, the understanding of PTXPhish is limited. Only a few studies, such as a recent work <cit.>, have measured a fraction of PTXPhish.
The focus of that particular study <cit.> is primarily on visual scams that exploit wallet mistakes, without considering comprehensive contract code. However, in-depth contract code analysis is crucial for detecting transactions that employ sophisticated fraudulent techniques. To the best of our knowledge, no systematic study of PTXPhish has been conducted to date.
This work. In this paper, we present the first comprehensive study that dissects PTXPhish on Ethereum. We first characterize PTXPhish and then propose an effective detection approach to combat these scams. Furthermore, we conduct a large-scale and long-term detection and measurement, providing valuable insights into this emerging form of phishing. Our research aims to contribute to the community's understanding and mitigation of such threats.
Specifically, we first conduct extensive data collection and build up the first ground-truth dataset. Based on this dataset, we propose an in-depth analysis of the processes and tactics used in phishing scams (see Section <ref>). This involves classifying the current tactics into four main categories and eleven sub-categories.
Drawing from the insights gained through this analysis, we then identify key features of PTXPhish and propose a rule-based multi-dimensional detection approach accordingly. The effectiveness of this approach in identifying potential PTXPhish transactions is demonstrated through a thorough evaluation, achieving over 99% F1-score and processing each block in 390 ms on average (see Section <ref>).
Lastly, we conduct a large-scale detection and perform an extensive analysis of PTXPhish from three perspectives (see Section <ref>):
∘
The PTXPhish transactions: We delve into phishing transactions, examining the extent of funds lost and providing detailed insights into the characteristics of each category.
∘
The scammers: We categorize scammer addresses into three types based on their behaviors: cashiers, fund aggregators, and depositors. Additionally, we propose an algorithm based on cash-out patterns, which utilizes the relationship between funds and address types to identify and track scammer organizations.
∘
The victims: We scrutinize the profiles of victims, including their address behavior features and the remedial actions they took after falling victim to phishing scams.
Our findings. In this study, we provide valuable insights into the characteristics of PTXPhish. Our analysis of PTXPhish transactions reveals the increasing prevalence of this type of phishing. From December 31, 2022, to October 27, 2023, the frequency of PTXPhish escalated, resulting in significant economic damage exceeding $341.9 million across 130,637 transactions. Notably, approximately 4.97% of approve transactions and 46.22% of permit transactions are identified as phishing transactions.
Our investigations suggest that the NFT markets prove ineffective in preventing the sale of stolen NFTs, with the majority of valuable NFTs being cashed out through platforms such as Blur (61.78%) and OpenSea (21.97%). Remarkably, scammers spent over 13.4 ETH per day in gas fees to send address poison transactions, accounting for 12.5% of the total Ethereum gas usage.
Additionally, our observations indicate a high level of organization among scammers in their cash-out process. By leveraging our cash-out pattern-based algorithm, we successfully identify the current phishing organizations. Interestingly, the top five phishing organizations are responsible for 40.7% of the total losses. Regarding the victims, our findings reveal that nearly half of them (40.38%) do not take remedial measures after incurring losses.
Contributions.Our study makes the following contributions:
* Anatomy of PTXPhish. Through extensive data collection and long-term on-chain monitoring, we systematically analyze the PTXPhish process and categorize its tactics (Section <ref>).
* First open-source dataset.
We build the first ground-truth dataset, which encompasses a comprehensive collection of 5,000 phishing transactions alongside 13,557 legitimate transactions. We will release it to the community [<https://github.com/HypoopyH/PTXPhish>].
* PTXPhish transaction detection approach.
We propose a rule-based multi-dimensional detection approach that effectively and efficiently identifies phishing transactions, achieving an F1-score of over 99% on both the ground-truth dataset and real-world Ethereum transactions from May 1, 2023, to Jun 1, 2023. The average processing time per block is only 390 ms.
* In-depth analysis of PTXPhish. We conduct a large-scale detection and perform an in-depth analysis of PTXPhish to provide insightful findings from three perspectives: transactions (Section <ref>), scammers (Section <ref>), and victims (Section <ref>).
* Contribution to mitigating real-world threats.
We help mitigate this emerging threat. Specifically, we have reported 1,726 phishing addresses to the community, accounting for 42.7% of the total community contributions in the same period. Moreover, we have sent 2,539 on-chain alert messages, assisting 1,980 victims. The community has acknowledged and recognized our efforts to combat phishing attempts and protect individuals from these threats.
§ BACKGROUND
§.§ Ethereum Blockchain
Ethereum is a public blockchain-based distributed computing platform and operating system featuring scripting functionality. The Ethereum blockchain <cit.> is the most prominent framework for smart contracts <cit.>.
Address.
In Ethereum, the account can be divided into two types: externally owned account (EOA) and contract account (CA). The EOA is created by using the public-private keys and is controlled by the entity in possession of the private key.
On the other hand, the CA is created by the EOA through contract creation transactions.
The functionality of CA is controlled by its deployed code instead of an entity. What's more, the CA relies on EOA to execute its functions.
Transaction. During the operation of Ethereum, users can interact with other users and contracts through sending transactions.
A transaction is a signed message to be sent from an EOA to another account, which carries the following information: to (receiver), from (message sender), value (the amount of native token, i.e., ETH in Ethereum), data (the input for a contract call), etc.
In particular, when a transaction sets its to field to be empty, Ethereum regards it as a transaction that creates a contract with its data field being the bytecode of the contract.
In the end, transactions will be verified by all chain clients and be written onto the blockchain.
§.§ Decentralized Finance (DeFi)
Decentralized Finance (DeFi) is an emerging model for organizing and enabling cryptocurrency-based transactions <cit.>. In Ethereum, DeFi is built on top of multiple smart contracts, giving rise to projects such as lending, trading, and marketing <cit.>.
Token.
In Ethereum chains, tokens are digital assets. Unlike the native cryptocurrency (i.e., ETH in Ethereum), tokens are implemented using specialized smart contracts. There are two main types of tokens: fungible and non-fungible.
Fungible tokens, which are homogeneous and interchangeable, mostly conform to the same interface standard. These tokens serve as a complement to the native currency, playing the role of a more flexible secondary currency within the DeFi ecosystem.
In contrast, non-fungible tokens (NFTs) conform to a different type of interface, such as ERC-721/1155 on Ethereum. These tokens are identified by a unique _tokenId, representing a digital asset such as an ENS domain or a picture.
ERC-20/721 are currently the most widely used standards for token implementation on Ethereum.
The standard interface defines a set of API methods that a token contract needs to implement.
Some important API methods relevant to our study are listed in Figure <ref> (in the appendix).
The approve method authorizes _spender to operate the caller's (msg.sender's) tokens, up to _value (ERC-20) or for a specific _tokenId (ERC-721).
In ERC-721, the setApprovalForAll method can either add the address _operator to, or remove it from, the set of operators authorized by msg.sender.
The spender can call the transferFrom method to transfer the token (within the approved _value in ERC-20, or the specified _tokenId in ERC-721) from the current owner's _from address to the _to address.
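As a simplified illustration of these semantics, and of why approving an arbitrary spender is dangerous, the following Python sketch models the ERC-20 allowance bookkeeping as a toy in-memory class; it is not actual contract code.

    class ToyERC20:
        """Toy model of ERC-20 balances and allowances (illustrative only)."""
        def __init__(self):
            self.balance = {}      # owner -> token balance
            self.allowance = {}    # (owner, spender) -> approved amount

        def approve(self, owner, spender, value):
            # The spender can be any EOA or contract; the standard imposes no restriction.
            self.allowance[(owner, spender)] = value

        def transfer_from(self, spender, frm, to, value):
            # The spender may move up to the approved amount to ANY destination address.
            assert self.allowance.get((frm, spender), 0) >= value, "insufficient allowance"
            assert self.balance.get(frm, 0) >= value, "insufficient balance"
            self.allowance[(frm, spender)] -= value
            self.balance[frm] -= value
            self.balance[to] = self.balance.get(to, 0) + value

Once an owner signs an approve granting a large _value to a scammer-controlled EOA, the scammer can later call transferFrom to drain the owner's balance without any further action from the owner, which is the behavior exploited by the ice phishing scams discussed later.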
NFT Marketplaces.
NFT marketplaces are decentralized application (dApp) platforms where NFTs are traded. Typically, there are two main components of an NFT marketplace: a user-facing web interface and a collection of smart contracts that interact with the blockchain. Users interact with the web app, which in turn sends transactions to the smart contracts. To facilitate these transactions, these marketplaces have implemented many methods to help users place orders, make purchases, and transfer NFTs in batches.
§ ANATOMY OF PTXPHISH
In this section, we first describe our data collection process for the PTXPhish dataset.
We then analyze PTXPhish tactics and categorize current phishing scams. Finally, we evaluate the coverage and effectiveness of our anatomy.
§.§ Data Collection of PTXPhish
Currently, there is no centralized source of information dedicated to PTXPhish, and public information sources are diverse.
To address this gap, we have created the first ground-truth PTXPhish transaction dataset.
Specifically, our dataset was established through the following steps:
* Collecting public reports.
We gathered public reports from two sources, i.e., phishing complaints made by victims on social media and phishing blogs reported by the security community <cit.>.
By querying keywords related to phishing, scam, and drainer, we identified relevant websites. Our information collection lasted for three months and resulted in 101 public phishing complaints and reports.
* Reviewing public reports.
Due to the diverse sources of phishing reports, these phishing reports are in different formats and lack authoritative verification.
To ensure the accuracy of the dataset, a manual review was conducted by two security experts. They analyzed the transaction data, logs, tokens transferred, and transaction call traces. A consensus was reached by the two experts to label a transaction as phishing.
During the review process, we recorded scammers' and victims' addresses, transaction parameters, and transaction hashes to standardize the data format.
* Expanding from the historical phishing data.
To increase the number of phishing transactions, we reviewed the transaction history of the scammers' addresses collected from the public reports. Random historical transactions were selected from each scammer's address, with an additional 50 transactions chosen for each phishing address [Our investigation suggests that a threshold of 50 is typically enough to cover the majority of phishing techniques, see Appendix <ref> for details.]. The extended transactions underwent manual review as in the previous step.
Notably, through our data extension and manual reviews, we have found some hidden phishing scams and provided several first-of-their-kind reports of new scams (e.g., the Blur free buy order and dust value poisoning).
By doing so, we have established the first ground-truth PTXPhish dataset, which consists of 5,000 phishing transactions.
The detailed information can be found in Table <ref> in the appendix due to the page limit.
Furthermore, we built a benign dataset for comparison by collecting transactions from two distinct sources:
* Top 50 Debank [A well-known website for tracking Web3 portfolio <cit.>.] Key Opinion Leaders (KOL).
These influential users significantly impact the investment community and have a large following, which bolsters the credibility of their transactions.
* Top 10 DeFi Protocol Developers [Based on DefiLlama <cit.>, a top site for DeFi project rankings.].
These high-level developers are prominent in the DeFi space, and their widely used contracts underscore the legitimacy of their transactions.
To ensure comprehensive representation and maintain a balanced sample size, we randomly selected 200 transactions for each user [Addresses with fewer than 200 transactions were included in full.].
In total, we gathered 13,557 benign transactions.
§.§ Categorization of PTXPhish
Based on the ground-truth dataset, our analysis reveals that scammers employ two distinct strategies: (i) Abusing legitimate contracts; and (ii) Exploiting phishing contracts, as depicted in Figure <ref>.
In the following, we delve into the details of these strategies, including their progress and specific tactics.
§.§.§ Abusing legitimate contract
As depicted in Figure <ref>, abusing legitimate smart contracts involves three steps.
We provide a thorough description of them in the following:
∘ Step I: Scammer abuses legitimate contracts to construct phishing transactions. At first, the scammer analyzes well-known DeFi projects' contracts (e.g., ERC-20 token contracts, NFT market contracts, and Uniswap contracts) and their functions.
Subsequently, based on some functions of these contracts, the scammer constructs a set of transactions with malicious semantics. Although the interaction targets of these transactions are legitimate contracts, their actual behavior will cause phishing scams.
∘ Step II: Scammer spreads phishing transactions through websites. Generally, the scammer conceals phishing transactions within fraudulent websites and promotes them on social media platforms such as Twitter, Telegram, Instagram, or Discord.
When visiting fake websites, victims would connect their wallet and be asked to sign a transaction [Since the user's address is different, the phishing website will adjust the phishing transaction request according to the user's address.].
Unfortunately, victims only understand they are interacting with authorized contracts but are unaware of the real output of the phishing transactions, leading them to place blind trust in the phishing transactions.
∘ Step III: Victims sign phishing transactions and lose assets.
Once the victim signs and submits the transaction to the Ethereum client, the legitimate contract will execute it, transferring/authorizing the victim's assets to the scammer.
The essence of abusing existing legitimate contracts involves deceiving victims by making them believe the transactions conducted with authoritative contracts are legitimate.
Therefore, based on the types and methods of the exploited legitimate contracts, we can divide them into two categories: ice phishing and market order scams.
Scam Category I: Ice phishing scam.
The ice phishing scam exploits the approve function in token contract (see Section <ref>).
Token owners can call the approve function to give an address the right to control a certain amount of their tokens.
However, the interface does not impose any limitations on the spender.
Specifically, (i) the spender can be any address, no matter whether it is a Contract Account (CA) or an Externally Owned Account (EOA); (ii) the spender has the ability to transfer the approved amount of tokens to any other address.
In other words, if the spender is an EOA address, it can arbitrarily transfer the owner's assets to any address without the owner's consent.
There are three specific sub-categories:
♢ I-A: Approve.
Targeting victims' ERC-20 tokens, the scammer constructs phishing transactions with approve (ERC-20 standard interface) and increaseAllowance (optional ERC-20 interface) to lure victims to sign.
♢ I-B: Permit.
The permit function performs the same role as the approve function but allows for off-chain signing.
Exploiting this feature, the scammer creates off-chain ERC20 permit messages and lures victims into signing them.
The scammer then submits the permit transaction to Ethereum.
♢ I-C: SetApproveForAll.
Turning to NFTs, the scammer exploits the setApproveForAll function of NFT collections, which can approve an entire NFT collection to an address within a single transaction.
Scam Category II: NFT order scam.
NFT order scams specifically target popular NFTs owned by victims. Since the majority of users manage and trade their NFTs through dedicated NFT markets such as OpenSea <cit.> and Blur <cit.>, scammers abuse the existing NFT market contracts to construct deceptive transactions.
Due to the lack of unified interfaces, NFT markets have implemented their own market order contracts. These contracts are highly complex, making it challenging for users to comprehend the corresponding transactions. Even wallets are only able to display raw data without providing clear explanations. Consequently, we have observed three commonly employed tactics in these scams.
♢ II-A: Bulk transfer.
Aiming to simplify the process of transferring multiple NFTs to a designated recipient address, OpenSea introduced a convenient function called the bulkTransfer.
Regrettably, scammers exploit this function by surreptitiously replacing the intended recipient address with their own, thereby diverting the NFTs to their control.
♢ II-B: Proxy upgrade.
In the early stage, OpenSea implemented a proxy contract to streamline the trading process for its users. By default, this proxy contract initially grants operator rights over the user's NFTs.
Exploiting this feature, scammers deceive users into signing a proxy upgrade transaction, which replaces the proxy contract's implementation with a scammer-controlled contract. As a result, the scammers gain ownership of the proxy contract <cit.>, thereby enabling them to steal the user's NFTs through the manipulated proxy contract.
♢ II-C: Free buy order.
In contrast to traditional centralized markets, NFT markets utilize a combination of front-end web pages and smart contracts <cit.>. Specifically, users use the front-end interface to sign an off-chain message that describes their order details, including the floor price and the trade time window. Upon matching the order, the market automatically completes the remaining details, such as the recipient and the final price.
Exploiting this design, scammers construct transactions with malicious parameters that ultimately cause losses for the NFT owner. As illustrated in Figure <ref>, the free buy order is caused by a malicious 100% fees parameter intentionally set by the scammer.
§.§.§ Exploiting phishing contracts
As depicted in Figure <ref>, the process of exploiting malicious contracts deployed by scammers involves three steps.
We provide a detailed description of each step below:
∘ Step I: Scammer deploys phishing contracts. In this kind of scam, scammers begin by deploying one or a group of contracts with different functionalities. Common malicious contracts include fake token, broadcast, and trap contracts.
∘ Step II: Phishing contracts spread fake information to victims through transactions.
In contrast to spreading scams through websites, scammers employ broadcast contracts to spread fake information to users through on-chain transactions. These transactions are specially designed to contain false information that can contaminate users' wallets. For example, they can poison users' transaction records or airdrop tokens with fake information.
∘ Step III: Victims believe the fake information, initiate transactions and lose assets. The victims believe the information appearing in their wallets and initiate transactions to phishing addresses. Unfortunately, these user-initiated transactions lead to the loss of their assets.
According to the different contracts the scammers deployed, we can divide them into two categories: address poisoning,
and payable function scam.
Scam Category III: Address poisoning scam.
The address poisoning scam is a distinct type of scam within the blockchain ecosystem. Its primary objective is to actively create fake transactions between fake addresses and users' addresses. By doing so, the scammer effectively contaminates the user's transaction records with these fake addresses [The full length of an address is 20 bytes (40 hexadecimal characters), so wallet GUIs commonly omit part of the address, causing similar addresses to be displayed identically.].
For ease of understanding, we describe a well-known address poisoning scam example <cit.>. Initially, Binance sent 10 million USDT to a legitimate deposit address (0xa7B4BAC8f0f9692e56750aEFB5f6cB5516E90570).
After monitoring this transfer, the scammer creates a counterfeit address (0xa7Bf48749D2E4aA29e3209879956b9bAa9E90570) that has the same GUI (0xa7B4...0570) in various wallets.
Then, the scammer calls transferFrom to move 10 million fake USDT from Binance to the fake address. This action leaves a record of the fake address in Binance's transfer history.
In a crucial misstep, Binance trusted the fake address shown in its transfer history and transferred another 20 million USDT to it in a subsequent transaction [Transaction hash: 0x08255ca0e42a872559437141fa46980e66d907f7668922467d67515b1ebb4b7f]. This mistake resulted in the loss of the funds.
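To make the deception concrete, the following Python sketch mimics how a wallet GUI truncates addresses, using the two addresses from the example above; the number of displayed characters is an assumption for illustration and varies across wallets.

    def gui_short(addr, head=5, tail=4):
        """Mimic a wallet GUI that displays only the first `head` and last `tail` characters."""
        return addr[:head] + "..." + addr[-tail:]

    genuine = "0xa7B4BAC8f0f9692e56750aEFB5f6cB5516E90570"
    fake    = "0xa7Bf48749D2E4aA29e3209879956b9bAa9E90570"
    print(gui_short(genuine))  # 0xa7B...0570
    print(gui_short(fake))     # 0xa7B...0570 -- indistinguishable in the truncated history view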
The scam exploits victims who mistakenly believe that all records in their transaction history were initiated by themselves. Specifically, there are three sub-categories, as follows:
♢ III-A: Zero value transfer. The interface of ERC-20 tokens specifies that the spender can only transfer tokens within the approved amount. However, by default, the approved amount is set to zero. Exploiting this default behavior, scammers can invoke the transferFrom function to transfer zero tokens from the victim's address to a fraudulent address, as depicted in Figure 4. Even though no tokens are transferred, this action leaves a transfer record in the victim's transaction history, potentially misleading the victim.
♢ III-B: Fake token transfer. The scammers deploy fake tokens with the same name/symbol as authoritative tokens.
Moreover, the scammers remove the allowance check so that they can call transferFrom to transfer an arbitrary amount of fake tokens from the victim's address to a fake address. By doing so, scammers leave fake addresses in the victim's transfer history.
♢ III-C: Dust value transfer. The scammer sends a small number of valuable tokens from a fake address to the victim's address. This leaves the fake address in the victim's transfer history. Since they are tiny amounts, they are called dust value transfer.
Scam Category IV: Payable function scam.
Due to the absence of an auditing mechanism on the blockchain, the functionality of smart contract functions may not comply with interface protocols but may instead be determined by the smart contract developer.
For better understanding, we show a concrete example in Figure <ref>.
The scammer poses as a legitimate project and deceives victims into believing this is a standard interface.
However, the malicious SecurityUpdate function accepts the victims' native tokens (via the payable modifier), while the withdraw function permits only the “owner” of the contract (typically the scammer) to withdraw the tokens. The victim will incur losses upon calling this function with native tokens.
Specifically, there are two major sub-categories, as follows:
♢ IV-A: Airdrop function.
Airdrops are common in DeFi <cit.>. Scammers exploit users' greed and pretend to be an airdrop project. They first airdrop fake tokens to victims and lure them into calling standard airdrop interfaces, such as Claim, ClaimReward, and ClaimRewards. Once victims call these functions, the scammers steal the victims' native tokens.
♢ IV-B: Wallet function.
Most users use wallets to manage their addresses.
The scammer pretends to be the user's wallet and sends a message asking the user to call functions that mimic wallet functionality, thereby stealing their native tokens. For example, the SecurityUpdate function pretends to be a wallet update, and the ConnectWallet function pretends to be a wallet connection.
From the analysis of PTXPhish described earlier, it is evident that malicious payloads are employed as fraudulent tactics, leading to notable distinctions between the content of on-chain phishing transactions and benign transactions. Additionally, the diverse nature of scam techniques allows each category to be differentiated based on the transaction content associated with specific techniques.
Extracting key features from these distinctions will form the foundation for the detection approach outlined in Section <ref>.
Finding #1: PTXPhish employs malicious payloads as fraudulent tactics, leading to notable distinctions from benign transactions. Moreover, these transactions can be accurately classified into sub-categories based on the various techniques utilized.
§.§ Evaluation of Anatomy
To ensure the coverage and effectiveness of our anatomy, we evaluate our classification by comparing it to well-known phishing labels. Specifically, we utilize the Etherscan [The entity information has been verified by Etherscan.] Fake_Phishing nametags, which are the largest publicly available source of phishing nametags. However, during our investigation, we encountered certain issues with the data fetched from Etherscan.
For instance, we found that addresses belonging to the Hacker subcategory were separate and distinct from phishing and should not be included under Fake_Phishing. Additionally, the Fake token subcategory represented simple transaction phishing, which fell outside the scope of our study. Consequently, we excluded these addresses from our analysis.
As a result, there are two effective subcategories provided by Etherscan for PTXPhish:
* Address poisoning scam. The description states that the address is related to address poisoning scams, e.g., "This address may be attempting to impersonate a similar-looking address" and "Zero Value Token Transfer Phishing".
* Unknown. The description lacks a specific reason, e.g., "involved with a phishing campaign" and "involved in suspicious activities".
Accordingly, we collected a total of 5,130 addresses along with their corresponding phishing labels from May 10, 2023, to July 20, 2023. The quantities of each nametag type are presented in Table <ref>.
Comparing our classification to Etherscan, we achieved a more comprehensive coverage and broader inclusion of phishing labels. Our classification encompassed four types, providing a coverage rate of 91.2%, with only 9.8% of addresses labeled as Unknown. For the remaining 506 unknown addresses, we conducted additional manual analysis. Some of these labels were assigned because the addresses had been identified as phishing addresses on other EVM-compatible chains, even though they were not phishing on Ethereum. Others, according to a multi-chain search conducted by Debank <cit.>, were found to be completely empty addresses. Since no recorded phishing transactions were associated with these addresses on Ethereum, we were unable to classify them with the available data.
In summary, our anatomy achieves a better coverage of addresses with valid transactions, allowing for a comprehensive analysis of the phishing landscape on Ethereum.
§ DETECTION OF PTXPHISH
In this section, we first introduce the key features for PTXPhish detection based on the previous analysis. We then propose a rule-based detection approach and evaluate its effectiveness using the ground-truth dataset.
§.§ Key features for PTXPhish detection
Drawing from the insights gained through the categorization (see Table <ref>), we extract four key features for phishing transaction detection:
∘ Contract code called by the transaction (Code). For transactions involving contracts, we capture the relevant contract code, including the bytecode, .sol files, and ABI files (if the contract is open source).
∘ Transaction input data (InputData). The input data of a transaction is composed of the hash of the function and its corresponding parameter arguments. We parse the input data based on the ABI file of the called contract, allowing us to extract specific function and parameter information [In the case of phishing that abuses legitimate contracts, it is essential to note that the legitimate contracts are typically open-source.].
∘ Transaction-related addresses (Address). We collect all addresses involved in the transaction, including the caller, the callee of the transaction, and the addresses parsed from the parameter information.
∘ Transaction history (History). For the tx.from.address of the transaction (i.e., msg.sender), we collect the transaction history, which includes all transactions related to this address.
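A minimal Python sketch of how these four feature groups can be organized per transaction is given below; the field names are illustrative only (our actual implementation, described later, is written in Golang).

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class TxFeatures:
        # Code: bytecode / source / ABI of the called contract (if any)
        bytecode: Optional[str] = None
        source_code: Optional[str] = None
        abi: Optional[dict] = None
        # InputData: function selector and decoded arguments
        selector: Optional[str] = None
        args: Dict[str, object] = field(default_factory=dict)
        # Address: caller, callee, and addresses parsed from the arguments
        tx_from: str = ""
        tx_to: str = ""
        arg_addresses: List[str] = field(default_factory=list)
        # History: prior transactions of the msg.sender
        history: List[dict] = field(default_factory=list)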
§.§ Rule-based detection approach
By integrating these features, we propose a rule-based multi-dimensional detection approach. This approach involves employing customized detection methods for each category, utilizing specific detection features to ensure high accuracy. The detailed detection rules are outlined in Table <ref>. In the following, we elaborate on the detection methods for each phishing category:
* Ice Phishing Scam Detection. This type of scam tactic abuses legitimate contracts.
Firstly, we collect a list of authorized addresses (called Authorized_List) obtained from Etherscan, including those associated with decentralized exchanges (DEX) and DeFi projects.
Next, we set up the prerequisite rule of the ice phishing scam: when we encounter a transaction that involves valuable fund transfers and notice a discrepancy between the tx.from.address and the transfer.from.address, we conduct further analysis on the tx.from.address.
Only if the target address is unauthorized (not in the Authorized_List) and the transfer drains all existing funds of the transfer.from.address do we classify the transaction as an I: Ice Phishing scam.
To recognize each sub-category, we further gather the transaction history of the tx.from.address and the transfer.from.address. Based on the transaction types identified from the transaction history, we categorize the transaction into the subcategories I-A: approve, I-B: permit, or I-C: setApproveForAll.
* NFT Order Scam Detection.
This type of scam tactic abuses legitimate contracts.
First, we apply a prerequisite to isolate transactions based on the transaction callee tx.to.address. In this study, we only focus on addresses that belong to famous NFT markets (e.g., OpenSea, Blur, X2Y2) [In detail, the Seaport 1.1, Seaport 1.2, Seaport 1.3, Seaport 1.4, Blur.io Marketplace, Blur.io Marketplace 2.0, Opensea Helper, and Opensea Factory contracts].
Then, combining the contract Code and ABI file, we parse the transaction Input Data to get the parameters.
When the parameters meet the function bulkTransfer and the recipient is not the transaction tx.from.address, we label them as the II-A: bulk transfer scam.
Seamless, if the parameters meet the function upgradeTo, we check whether the owner is the tx.from.address to judge if it is a II-B: proxy upgrade scam.
Turn to free buy order scam, we mainly focus on the conditions given by the seller, including NFT price, receipt address, and tips.
(i) the seller signs a sales order where the NFT price is $0, , without collection in Seaport 1.1 fullfilAdvancedOrder.
(ii) the seller gives an incredibly high fee to the buyer, , the 100% fees in Blur execute. It results in the same result of zero buy, see Figure <ref>.
(iii) the order recipient is not the NFT seller, , the seller gives his WETH to the buyer in Blur execute.
When a transaction exhibits any of these abnormal behaviors, we classify it as a II-C: free buy order scam.
* Address Poisoning Scam Detection.
Address poisoning scams adhere to a common prerequisite, regardless of the specific deceptive technique employed (i.e., fake token, zero value, or dust transfer):
(i) When the victim sends a phishing transaction containing a transfer, the fake destination address already exists in the victim's historical transactions (i.e., ∃ transfer' in tx.from.History such that transfer'.to.address = transfer.to.address).
(ii) Before the scammer forges a fraudulent transfer record from the victim to a similar-looking fake address, the victim must already have sent valuable tokens to the genuine address that the fake address imitates (i.e., ∃ transfer'' in tx.from.History such that transfer''.to.address ≈ transfer'.to.address and transfer''.value ≠ 0).
Specifically, when the first 4 and last 4 hexadecimal characters of two distinct addresses are identical, we consider the addresses to exhibit a high degree of similarity (a minimal sketch of this predicate is given after this list).
Finally, when we encounter such a transfer, we match the transaction against the suspicious transfer behaviors, i.e., zero value transfer, fake token transfer, and dust value transfer.
* Payable Function Scam Detection.
The payable function scam relies on masquerading behind innocuous function names to lure victims. After examining the functions of many well-known DeFi projects, we observe a pattern: (i) most functions are open-source; (ii) most functions are not payable, which means they cannot receive users' native tokens; (iii) most functions have non-empty implementation logic.
Inspired by this, we first collect function signatures with sensitive names from the Ethereum 4byte Signature Database <cit.>, such as claim, claimRewards, and Claim. Based on their function names, we separate the function signatures into Airdrop and Wallet classes.
Next, we establish the prerequisites for our detection approach: we consider only valuable transactions (tx.value ≠ 0) that have no associated transaction logs (tx.log = null). In such cases, we attempt to retrieve the contract source code. If the source code is inaccessible (i.e., closed-source), we classify the transaction as an IV: Payable Function scam and further classify it into sub-categories (i.e., IV-A: Airdrop function, IV-B: Wallet function) based on the corresponding function signature.
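As referenced in the address poisoning item above, the following Python sketch expresses the core similarity-and-history predicate of that rule; the dictionaries are simplified stand-ins for the transfer and history features described earlier.

    def looks_alike(a, b, n=4):
        """Two distinct addresses whose first n and last n hex characters (after '0x') match."""
        a, b = a.lower(), b.lower()
        return a != b and a[2:2 + n] == b[2:2 + n] and a[-n:] == b[-n:]

    def matches_poisoning_prerequisite(transfer, history):
        """transfer: the outgoing transfer in the suspect tx; history: prior transfers of tx.from."""
        # (i) the destination already appears in the sender's history (the planted record)
        planted = any(h["to"] == transfer["to"] for h in history)
        # (ii) the sender previously sent valuable tokens to a genuine address that the
        #      destination merely imitates
        imitated = any(h["value"] > 0 and looks_alike(h["to"], transfer["to"]) for h in history)
        return planted and imitated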
§.§ Evaluation of detection approach
We have implemented a prototype to evaluate our detection approach. First, to expedite the collection of Ethereum transaction information, we set up a local Ethereum archive node following the methodology described by Feng et al. <cit.>. Additionally, to speed up the history data collection, we accelerated the historical transaction replay process by following the techniques outlined by Wu et al. <cit.>. Finally, we implemented our aforementioned detection rules using Golang.
Besides the prototype implementation, we collected two datasets, the ground-truth dataset (see Section <ref>), and a large-scale dataset consisting of Ethereum transactions from May 1, 2023, to Jun 1, 2023. The large-scale dataset includes 210,000 blocks with 30,976,209 transactions.
In the following sub-sections, we will first use the ground-truth dataset to assess the accuracy of our approach. After that, we will apply our approach to the large-scale dataset to evaluate its real-world accuracy and efficiency.
§.§.§ Accuracy Evaluation
For the accuracy evaluation on the ground-truth dataset, we conducted separate accuracy assessments for each phishing category, as shown in the table <ref>.
The table demonstrates that our detection approach achieves remarkably high accuracy on the ground-truth dataset, with an overall F1-score over 99.9% (only 2 FPs in payable function and 1 FN in ice phishing).
For the large-scale dataset, we detected 12,050 PTXPhish transactions. To evaluate false positives (FPs), our research team manually reviewed these transactions using the process described in Section <ref>.
However, manually evaluating false negatives (FNs) in the same manner was impractical due to the large volume of transactions.
Therefore, for transactions not detected as , we collected their initiating addresses to check if they were flagged as Fake_Phishing by Etherscan within the same timeframe. We then manually reviewed transactions initiated by addresses labeled as Fake_Phishing. If a transaction was confirmed to be phishing, it was classified as an FN.
The results are summarized in Table <ref>: 84 transactions were identified as FP (4 in ice phishing and 80 in misleading), and 6 transactions were identified as FN (all in NFT order), resulting in an overall F1-score of 99.6%.
Additionally, we conducted a manual analysis to clarify instances of false detection cases.
For ice phishing, the majority of FPs resulted from victims approving transactions to themselves and invoking the transferFrom function. This rare behavior closely mimicked phishing activities and could not be distinguished by our detection approach.
FPs related to the payable function were attributed to specialized Miner Extractable Value (MEV) bots that employed payable functions without logical functionalities.
These MEV bots had off-chain information beyond our knowledge, leading to FP occurrences.
Regarding FNs, most were observed in ice phishing and NFT orders.
In ice phishing, FNs resulted from scammers leveraging decentralized exchanges (DEX) to convert victims' funds into alternative tokens, with the phishing address as the recipient. The complex contract semantics of these swaps disrupted the flow of funds, leading to FNs.
In NFT orders, FNs occurred because some scammers used extremely low prices (e.g., 1 wei) instead of a zero value to perform the free order trick, resulting in detection failures.
These special cases will be discussed further in Section <ref>.
§.§.§ Efficiency Evaluation
To evaluate the efficiency of our detection approach, we use real Ethereum blocks to calculate time consumption. In Ethereum, the fundamental unit of packaging is the block, which contains multiple transactions. The average block production time in Ethereum is 12 seconds (12,000 ms).
As shown in Table <ref>, our approach exhibits high efficiency, with an average time consumption of just 390 ms per block, a median of 362 ms per block, and a maximum of 3,553 ms.
For a more detailed view, we present a time consumption graph in Figure <ref> in the appendix.
Our approach consistently consumes significantly less time than the block production time, even for blocks with the maximum time (which are rare, making the average time consumption a more reliable metric).
Therefore, our approach meets the requirements for real-time performance and has been integrated into Forta, a well-known real-time anti-phishing platform (detailed in Section <ref>).
§ LARGE SCALE DETECTION IN THE REAL WORLD
Given the demonstrated effectiveness of the proposed detection approach in the previous section, we can now apply this approach to detect real-world threats. Specifically, we conduct the detection on the Ethereum blockchain, covering the period from block 16,304,348 to block 18,440,040. This corresponds to a timeframe of 300 days, spanning from December 31, 2022, to October 27, 2023. During this period, our detection approach identifies a total of 130,637 PTXPhish transactions, as detailed in Table <ref>.
Building upon the detection results, we proceed to perform a comprehensive analysis in various aspects. In Section <ref>, we delve into an analysis of the transactions themselves. Section <ref> focuses on examining the characteristics and behaviors of scammers, while Section <ref> explores the experiences and impact on victims of such scams. Lastly, in Section <ref>, we present the valuable action we have provided to help combat and mitigate the risks posed by these real-world threats.
§.§ Analyzing Transactions
To analyze the PTXPhish transactions, we present our analysis from multiple perspectives. First, we examine the economic losses caused by PTXPhish and how they evolve over time. Second, we analyze the characteristics and performance of the different phishing categories.
The economic losses caused by PTXPhish.
Considering the diversity of asset types and price fluctuations, it is important to explain the principles for calculating losses.
To ensure that our calculations are as realistic as possible, the prices of all ERC-20 tokens and NFTs are taken at the moment the phishing transaction occurs.
Specifically, the price of an ERC-20 token is determined by a price oracle [In this study, we only focus on top tokens, i.e., ETH, USDT, USDC, DAI, WETH, stETH, WBTC, BUSD.]. For NFTs, since there is currently no reliable way to determine the price of a specific NFT, we use the floor price listed on OpenSea instead.
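A minimal Python sketch of this pricing rule is shown below; the price_oracle and floor_price lookups are caller-supplied placeholders for the data sources named above, not real APIs.

    TOP_TOKENS = {"ETH", "USDT", "USDC", "DAI", "WETH", "stETH", "WBTC", "BUSD"}

    def loss_in_usd(asset, amount, block_time, price_oracle, floor_price):
        """Value a stolen asset at the moment the phishing transaction occurred.
        `price_oracle` and `floor_price` are lookup functions supplied by the caller."""
        if asset["type"] == "ERC20" and asset["symbol"] in TOP_TOKENS:
            return amount * price_oracle(asset["symbol"], block_time)
        if asset["type"] == "NFT":
            # no per-item price is available, so use the collection floor price on OpenSea
            return amount * floor_price(asset["collection"], block_time)
        return 0.0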
We summarize the detailed phishing transactions and corresponding losses based on their phishing category in Table <ref>.
In total, PTXPhish caused a loss of $341,945,807 over the 300 days.
Among them, ice phishing has the highest proportion, accounting for $201,880,314 (59.04%).
Address poisoning is the second highest, with a total profit of $64,042,825 accounting for 18.73%.
Market order scams generate a total profit of $57,495,168, accounting for 16.81%. Finally, payable function scams generate $18,527,500, accounting for 5.42%. Interestingly, when we calculate the average losses, we observe variations in the profit strategies employed by phishing scams. For example, the number of address poisoning scams is relatively small (only 1,050 cases), yet they yield a significant individual loss of $60,993 per transaction. In contrast, payable function scams have the highest occurrence rate (66,826 cases), but the individual transaction loss is only $273.
We present the number of PTXPhish transactions and the corresponding losses by date in Figure <ref>, from which we can see that PTXPhish has persisted since early 2023 without being effectively addressed, and that the losses continue to grow over time.
This shows that PTXPhish is an increasingly and continuously serious problem, which further highlights the value of our work.
Notably, from March 22 to 24, the losses reached over $30 million.
After investigating the dates of these extreme cases, we find that Arbitrum airdrops <cit.> occurred on March 23, 2023. Unfortunately, such campaigns often result in great phishing success.
In the later stage, we find two extremely high losses, i.e., $20M from the address poisoning attack suffered by Binance and $2.4M from the ice phishing of victim 0x13e382dfe53207E9ce2eeEab330F69da2794179E.
To examine the evolution and emergence process of elaborate scams, we conducted a separate study on the active periods of various scams in the early months of 2023, as illustrated in Figure <ref> in the appendix.
From the active period of different phishing sub-categories, it is evident that these phishing methods are constantly evolving and improving.
For instance, zero value transfer poisoning was already active on December 25, 2022, as an early phishing method.
However, with the emergence of new variants of address poisoning scams, the first successful dust poisoning transaction appeared on March 7, 2023, while the first successful fake token poisoning appeared on March 16, 2023. This time interval shows that scammers continue to innovate their phishing methods.
Finding #2:
PTXPhish has become a threatening cybercrime, yielding profits exceeding $341.9 million during a 300-day observation period. To maximize profit, scammers employ different strategies: payable function scams are numerous with small profits per transaction, whereas address poisoning scams are fewer in number but can generate significant profits in a single instance.
The characteristics of each category.
To better understand the characteristics of PTXPhish tricks, we delve into each trick in turn.
* Ice Phishing Scam. To better understand the prevalence of ice phishing scams, we conduct an analysis of the total number of approve and permit transactions during the same period.
Our findings reveal that, out of the token contracts we examined, there are a total of 4,207,423 successful approve transactions. Among these, 209,318 transactions are identified as phishing approves, accounting for 4.97% of the total number.
Even more concerning, we discover that out of the 13,877 successful permit transactions, 6,414 transactions are identified as phishing permits, accounting for a staggering 46.22% of the total number.
These alarming numbers show that the approve and permit functions are abused by phishing scams.
Based on our speculation, these functions are favored by phishers due to their hidden and efficient ability to transfer funds ownership.
* NFT Order Scam.
We analyze the movement of stolen NFT assets.
In total, there are 61,838 stolen NFTs, of which 16,442 (only 26.6%) had been transferred after being stolen as of November 30, 2023, indicating the poor liquidity of stolen NFT assets.
In addition, we tracked the transfer events of these NFTs and identified the NFT markets in which these NFTs are sold. Finally, the movements of stolen NFTs are summarized in Table <ref> in the appendix.
From the table, we find that most scammers (62.22%) directly sell the NFTs to the market using the cashier address, while a small portion (17.85%) transfers NFTs to fund aggregators for selling.
Among the stolen NFTs, we observed that most of the NFTs were sold through Blur (61.78%), followed by OpenSea (21.97%), X2Y2 (8.32%), and LooksRare (7.83%).
In summary, we conclude that these NFT marketplaces do not effectively prevent the sale of stolen NFTs, and over 80% of stolen NFTs are sold through the markets.
* Address Poison Scam.
To poison the victims' transaction history, scammers will actively initiate attack transactions.
During the observation phase, we discovered a total of 888,744 address poisoning attack transactions, resulting in a total of 3,132,607 addresses being affected.
This long-term and extensive scam method poses a significant threat to the security of all addresses.
Scammers need to pay gas fees for their attack transactions. We find that the gas fee consumed by the scammers was 4,023.3 ETH over a period of 300 days (13.4 ETH daily). Additionally, tokens worth $60,509 were used for dust transfers. According to Etherscan <cit.>, the daily gas consumption is around 107.5 ETH, which means that the gas fees consumed by address poisoning attack transactions account for 12.5% of all gas fees on the entire Ethereum network.
* Payable Function Scam.
We conduct an analysis of various functions used in payable function scams to determine their respective proportions (see Table <ref> in the appendix). The total loss resulting from these scams exceeds $18 million. We observe two distinct types based on their functionalities: Airdrop accounted for 74.2% of the total losses (e.g., Claim/claim), while Wallet accounted for 25.8% (e.g., SecurityUpdate).
These findings indicate that victims of this specific phishing attack are primarily motivated by greed, as they aim to profit from potential gains associated with accepting airdrops. Unfortunately, their funds are ultimately stolen through deceptive profit-generating mechanisms employed by scammers. It is crucial to note that a minority of victims lack a fundamental understanding of blockchain technology and mistakenly perceive these interactions as standard wallet operations. As a result, they unknowingly make payments and become prey to these phishing scams.
Finding #3: is extremely rampant and has impacted the ecosystem of Ethereum.
For example, 4.97% approve transactions and 46.22% permit transactions are identified as phishing transactions.
Scammers consume about 4023.3 ETH as transaction fees (13.4 ETH daily) to spread the address poisoning scams, which account for 12.5% of the total Ethereum gas fees.
§.§ Analyzing Scammer
In this section, we analyze the scammers and focus on their fund flow during the cash-out process, i.e., the money transfer pattern and the organization of scammer addresses.
After reviewing scams that occurred over six months, we find a special money cash-out pattern, and categorize the behavior of scammer addresses into the following three types:
∘ Cashiers. The Cashier addresses are responsible for directly obtaining funds from victims.
∘ Fund Aggregators.
The fund aggregator addresses are responsible for aggregating the profit funds from multiple cashier addresses [During our analysis, we found that fund aggregators always receive funds from more than 3 cashier addresses.].
The fund aggregators may also be involved with multiple DeFi protocols, such as token swaps in decentralized exchanges (DEXes).
∘ Depositors. The depositor addresses are responsible for depositing on-chain assets to centralized exchanges (CEXes).
We illustrate the money cash-out pattern and the address organization in Figure <ref>.
First, the cashier addresses obtain funds from victims. Then, multiple cashiers transfer their funds to the fund aggregator.
The fund aggregator may exchange the tokens into stablecoins (e.g., USDT, USDC) or Ether.
Finally, the fund aggregator transfers the funds to multiple depositor addresses, which cash out the profits by CEXes.
Additionally, to escape the regulation from CEX and security companies, the fund aggregator addresses will change occasionally, resulting in a cashier address transferring money to different fund aggregators.
Due to the complex DeFi semantics and large transaction volume (over 3 billion transfers until June, 2023), the money flow graphs (MFG) of blockchain are complex and over-weight for analysis <cit.>.
However, based on the cash-out pattern, we propose a lightweight organization discovery algorithm based on their fund flow relationships and scammer roles, and show the algorithm process as follows.
∘ Step S1: Locate the Cashier.
First, we collected a set of cashier addresses from transactions that were directly exposed and identified as recipients of stolen funds.
∘ Step S2: Outgoing Transfer Expansion.
We trace the outgoing fund transfers of the cashier addresses and record the destination addresses. To make the outgoing transfer edges more reliable, we analyze several well-known DEXes (e.g., Uniswap, Sushiswap) and remove redundant edges caused by DEX interactions. Moreover, we prune transfers with a small value (less than $100).
∘ Step S3: Expansion Address Categorization.
After getting the outgoing transfer destination addresses, we further categorize these addresses through behavior features:
(i) if the destination address is in the CEX whitelist (the CEX whitelist is collected from Etherscan), the address is labeled as the CEX address.
(ii) if there are over 3 cashiers with the same outgoing destination address, we label the destination address as the fund aggregator address;
(iii) if the address does not fall into either of the above categories, we label it as an unknown address.
∘ Step S4: Repeat Expansion & Categorization. For the remaining unknown addresses, we further trace their outgoing transfers like step S2. And perform address categorizations like step S3. In this study, we repeat 3 times in total.
We show our algorithm process in Figure <ref> in the appendix due to the page limit.
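To make the four steps concrete, the following Python sketch is our own illustrative rendering of S1–S4; the input structures (transfers, cex_whitelist, dex_routers), the names, and the exact pruning details are assumptions rather than the paper's actual implementation.

```python
MIN_VALUE_USD = 100      # S2: prune small (dust) transfers
ROUNDS = 3               # S4: number of expansion rounds
AGGREGATOR_FANIN = 3     # S3(ii): aggregators are fed by more than 3 sources

def discover_organizations(cashiers, transfers, cex_whitelist, dex_routers):
    """cashiers: seed addresses from exposed phishing transactions (S1).
    transfers: address -> list of (destination, usd_value) outgoing transfers.
    Returns the discovered fund aggregators and depositors."""
    frontier = set(cashiers)
    aggregators, depositors = set(), set()
    fan_in = {}                                   # destination -> set of source addresses

    for _ in range(ROUNDS):                       # S4: repeat expansion + categorization
        unknown = set()
        for src in frontier:
            for dst, usd in transfers.get(src, []):           # S2: outgoing expansion
                if usd < MIN_VALUE_USD or dst in dex_routers:
                    continue                                   # drop dust / DEX-interaction edges
                fan_in.setdefault(dst, set()).add(src)
                if dst in cex_whitelist:                       # S3(i): CEX deposit address
                    depositors.add(dst)
                elif len(fan_in[dst]) > AGGREGATOR_FANIN:      # S3(ii): fund aggregator
                    aggregators.add(dst)
                else:                                          # S3(iii): still unknown
                    unknown.add(dst)
        frontier = unknown - aggregators - depositors
    return aggregators, depositors
```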
In total, from the detected transaction, we identified 121 scammer organizations with the same fund aggregators.
We show the top 5 scammer organization in Table <ref>.
From the table, we can observe that among the highest-ranked organizations, there are several well-known scam addresses (e.g., Fake_Phishing186944 <cit.>, Fake_Phishing179050 <cit.>) and scam drainers (e.g., VenomDrainer <cit.>, InfernoDrainer <cit.>, and AngelDrainer <cit.>) exposed by the media. These top organizations account for 40.7% of all phishing scam revenue, making them a serious problem that needs to be addressed.
The findings from our cash-out pattern analysis indicate that our proposed scam organization is widely adopted within the current landscape of on-chain scam organizations.
Nevertheless, our proposed pattern has certain limitations when it comes to centralized services such as underground money-laundering services, which may lead to some false correlations. However, our proposed cash-out pattern can serve as an inspiration for future research endeavors that aim to uncover more fraudulent addresses by exploiting address correlations.
Finding #4: Phishing addresses are highly organized during the cash-out process, with different roles such as cashier, fund aggregator, and depositor. Based on the cash-out pattern, we find that the top five phishing organizations account for 40.7% losses.
§.§ Analyzing Victims
In this section, we analyze the phishing victims.
Specifically, we conduct research on the victim's behavior profile and remedial measures after being phished.
The victim behavior profile.
We aim to identify characteristics of users who are vulnerable to phishing scams.
We collected victim addresses from all phishing transactions and recorded the transactions actively initiated by these addresses.
To better demonstrate the behavior of victim addresses, we present two dimensions in Figure <ref> in the appendix, i.e., the victims' transaction volumes and corresponding losses, and the proportion of transaction types.
From the figure, it is evident that the majority of victims have fewer than 1,000 transactions. Interestingly, in incidents involving large amounts (over $100k), the victims' transaction counts are predominantly below 50. This suggests that large losses mostly befall relatively inexperienced users, whereas users with longer transaction histories exhibit a greater awareness of phishing prevention.
Furthermore, our findings reveal that 99% of the victims had been engaged in DeFi activities, with 20% of them specifically involved in NFT transactions. In contrast, only 1% of the victims were found to be engaged in simple Ethereum transfers. This data further solidifies the notion that this new phishing technique predominantly targets DeFi users.
The victim remedial measure.
We mainly focus on the victims of ice phishing, as this type of fraud has ongoing harm until the victim uses the revoke function to cancel the phishing approval.
According to our observations, after being ice phished, victims mainly exhibit the following three behaviors: (i) revoking the phishing approval; (ii) transferring all assets to other addresses and abandoning the victim address; (iii) taking no remedial measures.
Out of the randomly selected 5,000 victims (see Table <ref> in the appendix), only 1,316 addresses (26.32%) chose to revoke the phishing approval, while 1,665 addresses (33.3%) transferred all funds to other addresses, abandoning the previous address.
However, a concerning 2,019 addresses (40.38%) did not take any remedial measures, leaving them vulnerable to further attacks and potential financial loss.
This indicates that many victims have no idea how to take remedial measures.
The vast majority of victims (73.68%) did not take the most effective measure of revoking the phishing approval; they either transferred their funds to a new address, which is more time-consuming and expensive, or took no action at all.
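For completeness, the sketch below illustrates the revocation step itself. It is our own example (web3.py v6-style calls against a hypothetical RPC endpoint), showing that revoking an ice-phishing approval simply means calling approve with a zero allowance on the affected token.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))   # hypothetical RPC endpoint

# Minimal ERC-20 ABI containing only the approve function.
ERC20_ABI = [{
    "name": "approve", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "spender", "type": "address"},
               {"name": "amount", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

def build_revoke_tx(token: str, phishing_spender: str, victim: str) -> dict:
    """Build an unsigned transaction that resets the allowance granted to a
    phishing spender back to zero; the victim still has to sign and send it."""
    contract = w3.eth.contract(address=Web3.to_checksum_address(token), abi=ERC20_ABI)
    victim = Web3.to_checksum_address(victim)
    return contract.functions.approve(
        Web3.to_checksum_address(phishing_spender), 0
    ).build_transaction({
        "from": victim,
        "nonce": w3.eth.get_transaction_count(victim),
    })
```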
Finding #5: The majority of victims (99%) are actively involved in DeFi, including NFT transactions. However, a significant portion of these victims (40.38%) lack awareness of the necessary steps to take for implementing remedial measures after experiencing a phishing attack.
§.§ Contributing to the Community
To further assist users in mitigating threats, we actively contribute to the community by submitting identified phishing addresses to Etherscan, which is the largest and de-facto standard blockchain explorer on Ethereum. It offers a nametag mechanism that allows trustworthy third parties to label various types of addresses. This practice is widely adopted by the community, including security companies and community sleuths, to combat phishing scams.
During the period from December 31, 2022, to October 27, 2023, we contributed a total of 1,726 phishing addresses. Among all the community contributors, our phishing address labels [Etherscan only records the label source of the first submission.] accounted for 42.7% of the total, as shown in Table <ref>.
In addition to providing phishing address labels to the community, we have made other efforts to assist users. Firstly, we proactively send on-chain messages directly to victims to alert them about phishing attempts. Our process involves monitoring the Ethereum pending pool for any suspicious transactions. Upon identifying a phishing transaction in the pending pool, we promptly send a transaction to the victim containing alert information.
By receiving our alert transactions, victims are empowered to take proactive measures and prevent phishing losses. During the specified period, we have successfully sent a total of 2,539 on-chain alert messages, providing assistance to 1,980 victims. Additionally, we contribute to anti-phishing efforts by providing phishing reports as online educational resources. These reports have been visited by a significant number of users, with a total visit count of 18,585 based on our internal records for that period. This effectively raises awareness and promotes anti-phishing initiatives.
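A minimal sketch of such a monitoring loop is shown below; it is our own illustration (web3.py-style calls, a placeholder detection rule, and a hypothetical alert account), not the production system described above.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))       # hypothetical endpoint
pending_filter = w3.eth.filter("pending")                      # stream of pending tx hashes
ALERT_SENDER = "0x0000000000000000000000000000000000000001"   # hypothetical funded account

def looks_like_phishing(tx) -> bool:
    # Placeholder for the rule-based checks (ice phishing approvals,
    # address poisoning patterns, payable-function scams, ...).
    return False

while True:
    for tx_hash in pending_filter.get_new_entries():
        try:
            tx = w3.eth.get_transaction(tx_hash)
        except Exception:
            continue                        # the transaction may already be dropped
        if tx is not None and looks_like_phishing(tx):
            # 0-ETH transaction whose calldata carries a human-readable warning,
            # visible to the victim on any block explorer.
            w3.eth.send_transaction({
                "from": ALERT_SENDER,
                "to": tx["from"],
                "value": 0,
                "data": Web3.to_hex(text="Warning: a likely phishing transaction targets your address."),
            })
```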
As a result of our efforts, we have received expressions of gratitude in the form of on-chain transactions and tweets [We can provide them if needed for review purposes.]. We take pride in the acknowledgment and appreciation we have received from Etherscan and other members of the community. Their recognition validates our commitment to combatting phishing attempts and protecting individuals from these threats.
§ DISCUSSION
We present the first empirical study of .
Although our focus is primarily on Ethereum, our approach can be easily applied to other EVM-compatible blockchains (e.g., BNB Smart Chain and Polygon Mainnet).
In the following, we will discuss details related to the anatomy, corner cases, and anti-phishing tools/platforms.
Anatomy of .
In this study, we categorize the current phishing scams into four categories. However, as discussed in Section <ref>, scammers are continuously developing new methods. Therefore, the categorization presented in this study reflects the current state of phishing techniques. Future advancements in phishing methods may necessitate adjustments to this categorization.
Corner Case of Detection. In some extreme theoretical scenarios, our detection approach may produce inaccurate results, such as self-approvals or closed-source MEV bot (see Section <ref>).
Other potential corner cases might include situations where a drainer executes a transferFrom but leaves some funds with the victim.
These cases are counter-intuitive, as we assume that all on-chain behaviors are driven by rational actors seeking to maximize their benefits.
However, behaviors like self-approvals or leaving funds behind lead to unnecessary losses or wasted gas fees, making them relatively rare.
Consequently, while our detection approach may not cover all extreme theoretical cases, it remains suitable and effective for real-world applications.
Anti-Phishing Tools/Platforms.
Many security companies have developed anti-phishing tools/platforms to combat the prevalence of phishing scams.
We list prominent anti-phishing tools/platforms in Table <ref> in the appendix, and categorize them based on their approaches.
Current anti-phishing tools (e.g., AegisWeb3, Pocket Universe) primarily use transaction pre-execution to predict fund changes and implement blacklists for receiving address detection.
In contrast, our detection approach adopts a rule-based strategy based on on-chain information.
This unique approach complements existing tools and enhances their security coverage.
Indeed, our detection approach has been integrated into Forta, a leading scam detection platform, establishing us as a primary partner.
§ RELATED WORK
§.§ Security Issues on Ethereum
Since its inception, Ethereum has faced numerous security issues. The security issues have evolved with the development of the platform. The academic community has shown great concern for the security of Ethereum, with many research <cit.> efforts dedicated to addressing its security challenges.
Xia et al. <cit.> perform the first analysis on the fake ERC-20 tokens, and leverage AI to perform fake token detection.
Chen et al. <cit.> conduct analysis on smart contracts and propose a method to find the security issues by comparing historical versions.
Liu et al. <cit.> focus on the permission bugs in the DeFi project, and propose a prototype detection system.
Su et al. <cit.> measure the DeFi attacks and propose a detection algorithm.
Das et al. <cit.> perform an in-depth analysis of the NFT ecosystem, and raise several security issues.
§.§ Phishing Analysis
Research into analyzing phishing behaviors has been evolving for years.
For traditional Web2 phishing, several studies <cit.> have analyzed phishing behaviors and characteristics.
Web3 phishing, while similar to traditional Web2 phishing, extends beyond websites and leverages cryptocurrency as a payment method <cit.>.
He et al. <cit.> and Li et al. <cit.> have proposed website-based phishing detection systems and conducted analyses of phishing websites.
In addition to traditional phishing scams, Ivanov et al. <cit.> were the first to highlight scams exploiting misleading EVM features, e.g., address manipulation and Unicode attacks. Ye et al. <cit.> focused on phishing that involves misleading information on the wallet UI (including token symbols, wallet addresses, and smart contract function names), though their study was limited to zero-value transfers and fake claim functions.
Kim et al. <cit.> focused on NFT scams and developed a detection model using features like price differences, time duration, and transfer relations. Li et al. <cit.> collected illicit addresses from the Blockchain Intelligence Group and employed machine learning techniques to predict these addresses.
Our study distinguishes itself from related research in the following aspects: (i) Different target & motivation. To our knowledge, our study is the first to provide a comprehensive analysis of . We aim to thoroughly investigate this new form of phishing, which may include subclasses of previous phishing tactics such as “setApproveForAll” in NFT phishing and “zero value transfer” in address poisoning.
(ii) Different detection method & capability. Previous research primarily relies on past fund flows, which may overlook/delay the detection of newly created phishing addresses.
In contrast, our rule-based detection method allows for real-time identification of phishing transactions and addresses.
§ CONCLUSION
This paper presents the first comprehensive study of on the Ethereum.
First, we conducted a long-term data collection to establish the first ground-truth dataset consisting of 5,000 phishing transactions. Then we dissected , categorizing phishing tactics into four primary categories and eleven sub-categories.
Second, we proposed a rule-based multi-dimensional detection approach to identify phishing transactions, achieving over 99% F1-score.
Finally, we conducted an in-depth analysis of the large-scale detection results to offer insightful findings.
Our analysis revealed that resulted in losses exceeding $341.9 million within a 300-day period.
Scammers expended approximately 13.4 ETH daily, which accounted for 12.5% of the total Ethereum gas fees, in spreading address poisoning scams. Notably, the top five phishing organizations were responsible for 40.7% of the total losses.
Furthermore, our work made significant contributions to the community. We reported a total of 1,726 phishing addresses, accounting for 42.7% of the total community contributions during the same period. Additionally, we sent 2,539 on-chain alert messages, providing assistance to 1,980 victims of phishing attacks.
§ APPENDIX
The appendix contains charts and figures mentioned in the main text but not displayed due to space constraints.
§.§ Important ERC-20/ERC-721 interface
§.§ Decision on expanding the number of phishing transactions.
Due to the extensive transactions associated with the addresses, manually verifying all historical transactions is impractical. Consequently, we employed a sampling method to obtain historical data. This approach involves a trade-off: a larger sample size significantly increases manual effort, while a smaller sample may result in insufficient coverage.
We analyzed the number of transactions per address and determined the median count to be 43.5. To balance adequate coverage with manageable effort, we chose a threshold of 50 transactions. Detailed information about the addresses is available at: <https://github.com/HypoopyH/PTXPhish>.
§.§ Detailed ground-truth dataset of
We have established the first ground-truth dataset, consisting of 5,000 phishing transactions. The dataset is categorized into various phishing categories, including 2,569 ice phishing transactions, 609 NFT order transactions, 226 address poisoning transactions, and 1,596 payable function transactions.
Detailed information can be found in Table <ref>.
§.§ Efficiency evaluation of detection approach.
Figure <ref> shows the time consumption of our detection approach, as mentioned in Section <ref>. The average block production time in Ethereum is 12s (12,000 ms). Our approach is highly efficient, with an average time consumption of only 390 ms per block, a median of 362 ms per block, and a maximum of 3,553 ms.
§.§ Popular signatures of payable function phishing scams
Table <ref>, described in Section <ref>, details popular signatures of payable function phishing scams. These scams have resulted in total losses exceeding $18 million. We observe two distinct types based on their functionalities: Airdrop scams account for 74.2% of the total losses (e.g., Claim/claim), while Wallet scams account for 25.8% (e.g., SecurityUpdate).
§.§ Heatmap of by date and corresponding losses in the early stage
Figure <ref>, described in Section <ref>, shows the heatmap of by date and corresponding losses.
The data indicates that phishing methods are continually evolving and improving. For example, zero value transfer poisoning emerged as an early phishing method on December 25, 2022. However, new variants of address poisoning scams began to appear, with the first successful dust poisoning transaction on March 7, 2023, and the first successful fake token poisoning on March 16, 2023. This timeline highlights the ongoing innovation in phishing techniques.
§.§ Stolen NFTs cash-out markets
Table <ref>, described in Section <ref>, presents data on stolen NFTs cash-out markets.
The table reveals that the majority of scammers (62.22%) directly sell the NFTs to the market using the cashier address, while a smaller portion (17.85%) transfers NFTs to fund aggregators for selling.
Among the stolen NFTs, most were sold through Blur (61.78%), followed by OpenSea (21.97%), X2Y2 (8.32%), and LooksRare (7.83%).
§.§ Victim behavior profile
Figure <ref>, mentioned in Section <ref>, illustrates the victim behavior profile.
The data shows that the majority of victims have conducted fewer than 1,000 transactions. Notably, in incidents involving large amounts (over $100k), the victim transactions are predominantly fewer than 50. This suggests that experienced users, who handle higher transaction amounts, are generally more aware of phishing prevention.
§.§ Scammer organization discovery algorithm process
Figure <ref>, described in Section <ref>, outlines the scammer organization discovery algorithm process.
In step S1, we found 5,350 cashier addresses, of which 1,210 had no outgoing transfers.
In step S2, we identified 4,384 outgoing destination addresses with transfer value exceeding $100.
In step S3, we categorized these addresses into 2,307 destination fund aggregators and 260 depositors based on their behavior.
In step S4, we repeated the outgoing transfer expansion & categorization process for the remaining 1,817 addresses.
§.§ Details of existing anti-phishing tools/platforms
Table <ref>, described in Section <ref>, provides details on existing anti-phishing tools/platforms.
§.§ Remedial behavior of ice phishing victims
Table <ref>, described in Section <ref>, details the remedial behavior of ice phishing victims.
Among the randomly selected 5,000 victims, only 1,316 addresses (26.32%) chose to revoke the phishing approval, while 1,665 addresses (33.3%) transferred all funds to other addresses, abandoning the compromised address.
However, a concerning 2,019 addresses (40.38%) did not take any remedial measures, leaving them vulnerable to further attacks and potential financial loss.
This suggests that the majority of victims are unaware of how to effectively address phishing incidents.
|
http://arxiv.org/abs/2409.03664v1 | 20240905161800 | The Kneser--Poulsen phenomena for entropy | [
"Gautam Aishwarya",
"Dongbin Li"
] | math.MG | [
"math.MG",
"cs.IT",
"math.IT",
"math.PR",
"37C10, 94A17, 52A40, 52A20"
] |
Faculty of Mathematics, Technion - Israel Institute of Technology, Haifa 3200003, Israel.
[email protected]
Faculty of Science - Mathematics and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2R3, Canada.
[email protected]
§ ABSTRACT
The Kneser–Poulsen conjecture asserts that the volume of a union of balls in Euclidean space cannot be increased by bringing their centres pairwise closer. We prove that its natural information-theoretic counterpart is true. This follows from a complete answer to a question asked in <cit.> about Gaussian convolutions, namely that the Rényi entropy comparisons between a probability measure and its contractive image are preserved when both undergo simultaneous heat flow.
MSC classification:
37C10,
94A17,
52A40,
52A20.
GA was supported by ISF grant 1468/19 and NSF-BSF grant DMS-2247834. DL acknowledges the support of the Natural Sciences and Engineering Research Council of Canada and the Department of Mathematical and Statistical Sciences at the University of Alberta.
The Kneser–Poulsen phenomena for entropy
Gautam Aishwarya and Dongbin Li
September 9, 2024
§ INTRODUCTION AND MAIN RESULTS
Owing to the striking resemblance between various phenomena in convex geometry and information theory, one is naturally led to the study of parallels between the two subjects. This direction of research goes back at least to the work of Costa and Cover <cit.> in the early 1980's, when they explicitly observed the similarity between the Brunn–Minkowski inequality in convex geometry and the Entropy Power inequality in information theory, both being cornerstone results in their respective fields. A connection between these inequalities was perhaps observed even earlier by Lieb, appearing implicitly in the work <cit.>. From this point onwards, one finds a continuous stream of works elaborating the underlying dictionary between convex geometry and information theory while enriching both fields in the process (for example, <cit.>). We shall refrain from giving a general overview of this vibrant field and refer the interested reader to the beautifully presented survey article <cit.> (and the references therein) for a bigger picture.
We aim to supplement the evidence in favour of the Kneser–Poulsen conjecture, as well as offer a new potential tool, by observing that the mirroring entropy-version holds. Recall that, the Kneser–Poulsen conjecture in convex and discrete geometry asserts the intuitive statement that volume of the union of a finite collection of balls of a fixed radius cannot increase if their centres are brought closer together.
[Kneser–Poulsen, <cit.>]
Let {x_1 , … , x_k} and {y_1, ⋯ , y_k} be two sets of points in ℝ^n such that ‖ y_i - y_j‖_2≤‖ x_i - x_j‖_2 for all i, j ∈{1, …, k}. If r> 0, then we have:
vol( ⋃_i=1^k ℬ(y_i, r) ) ≤ vol( ⋃_i=1^k ℬ(x_i, r) ),
where vol is the Lebesgue measure on (^n, ‖·‖_2) and ℬ(x,r) is the ball of radius r centred at x.
Sometimes a more general version is considered where the radii of the balls are allowed to be different. However, we will restrict ourselves to the original formulation as stated above.
If K = { x_1 , … , x_k}, the set ⋃_i=1^k ℬ(x_i, r) can be rewritten as the Minkowski sum K + rℬ. By an elementary approximation-from-within argument one can go from finite sets to compact sets in order to rephrase Conjecture <ref> in the manner below.
For every contraction (that is, a 1-Lipschitz map) T: (K, ‖·‖_2) → (^n, ‖·‖_2) defined on a compact set K
⊆^n, r>0, we have
vol(T[K] + rℬ) ≤ vol(K + rℬ).
Beyond the n=2 case which was resolved by Bezdek and Connelly <cit.>, very little is known. For a general dimension n, the Kneser–Poulsen conjecture has been established in various cases, under rather strong restrictions on the set K and the map T. For example, Csikós <cit.> proved Conjecture <ref> for continuous contractions (see Definition <ref>). A modification of the formulas for volume obtained by Csikós play a key role in Bezdek and Connelly's proof in the plane where they are delicately combined with an old trick (used previously, for example, by Alexander <cit.>) to move the x_i to the y_i in a larger ambient space. Other such examples include the more recent work of Bezdek and Naszódi <cit.> which demonstrates the conjecture for uniform contractions, that is, when there exists λ > 0 such that ‖ y_i - y_j‖_2 < λ < ‖ x_i - x_j‖_2 for all i ≠ j. In the same work, the authors also settle the case when the pairwise distances are reduced in every coordinate. For the current state-of-the-art regarding the Kneser–Poulsen conjecture, including many foundational works as well as recent exciting developments that we have skipped here, we refer the reader to <cit.>. However, despite the progress made so far, the Kneser–Poulsen conjecture remains largely out of reach. This is perhaps due to the fact that, at present, there is no way to deal with arbitrary contractions in a manner amicable to the geometric computations required in this context.
In the pioneering comparison of the Brunn–Minkowski inequality and the Entropy Power inequality discussed earlier, one immediately observes that Euclidean balls correspond to Gaussian measures and (the logarithm of) volume corresponds to the Shannon–Boltzmann entropy. Indeed, on one hand, the Brunn–Minkowski inequality can be stated in the form
vol(A + B) ≥ vol(A^∗ + B^∗),
where A^∗, B^∗ denote Euclidean balls having the same volume as A,B, respectively. On the other hand, the Entropy Power inequality asserts that
h(X + Y) ≥ h(X^∗ + Y^∗),
where h(·) denotes the Shannon–Boltzmann entropy (see Definition <ref>), X,Y are independent random vectors, X^∗, Y^∗ are independent Gaussian random vectors having the same entropy as X,Y, respectively.
These correspondences, summarised in Table <ref>, indicate that the Shannon–Boltzmann entropy of the heat flow X + √(s)Z is the natural information-theoretic analogue of the volume of a tube K + rℬ. In this paper, we prove that the information-theoretic analogue mimics the geometric phenomena predicted by the Kneser–Poulsen conjecture <ref>. Before we state our main result concretely, which is more general than the last claim, we set up some notation.
vol(K + rℬ) ⟷ h(X + √(s)Z).
§.§.§ Some notation and definitions
Throughout, unless stated otherwise, all sums X + Y of random vectors will be sums of independent random vectors. The density of a random vector X with respect to the Lebesgue measure (if it exists) will sometimes be denoted by f_X. The letter Z is reserved for a random vector having the standard Gaussian distribution in the ambient space, that is, if the discussion is in ^n we will have f_Z(x) = 1/(2 π)^n/2e^- ‖ x ‖^2_2/2. The word “Gaussian” will also be used for scalings of Z.
Let X be an ^n-valued random vector with density f with respect to the Lebesgue measure. Then, the Rényi entropy of order α∈ (0,1)∪ (1, ∞) of X is given by,
h_α(X) = 1/(1 - α) log ∫_^n f^α dx.
The Rényi entropies of orders 0, 1, and ∞ are obtained by taking the respective limits:
h_0(X) = log vol( supp(f) ),
h_1(X) = -∫ f log f dx,
h_∞(X) = - log ‖ f ‖_∞.
The special case h_1 (·) is called the Shannon-Boltzmann entropy, often denoted simply by h(·).
For an ^n-valued random vector X with distribution μ having density f with respect to the Lebesgue measure, we will abuse notation and use h_α(X), h_α (μ) , and h_α(f), interchangeably. A property of Rényi entropies that we shall use later is that h_α(X,Y) = h_α(X) + h_α(Y) holds if X,Y are independent vectors taking values in ^n, ^m, respectively.
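As a quick consistency check (our addition, not part of the original text), the Rényi entropy of an isotropic Gaussian can be computed in closed form and recovers the Shannon–Boltzmann case in the limit α → 1:

```latex
% X = \sqrt{s}\,Z with density f(x) = (2\pi s)^{-n/2} e^{-\|x\|_2^2/(2s)}:
% \int_{\mathbb{R}^n} f^{\alpha}\,dx = (2\pi s)^{\frac{n(1-\alpha)}{2}}\,\alpha^{-n/2}, hence
\[
  h_{\alpha}\bigl(\sqrt{s}\,Z\bigr)
    = \frac{n}{2}\log(2\pi s) + \frac{n}{2}\,\frac{\log\alpha}{\alpha-1},
  \qquad
  \lim_{\alpha\to 1} h_{\alpha}\bigl(\sqrt{s}\,Z\bigr) = \frac{n}{2}\log(2\pi e s).
\]
```

In particular, h_α(√(s)Z) is non-increasing in α, matching the general monotonicity of Rényi entropies.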
Let T : K →^n be a contraction, i.e., a 1-Lipschitz map (K, ‖·‖_2) → (^n, ‖·‖_2). The map T is said to be a continuous contraction if each point x ∈ K can be joined to T(x) ∈ T[K] by a curve c_x: [0,1] →^n such that, for all x, y ∈ K, ‖ c_x(t) - c_y(t) ‖_2 is monotonically decreasing in t. In this case, we call T_t: x ↦ c_x(t) a continuously contracting family of maps.
§.§ The main result
It is simple to check that h_α(X) ≥ h_α(T(X)) if T is a contraction. However, when the distributions of X and T(X) both undergo simultaneous heat flow to X + √(s)Z, and T(X) + √(s)Z, respectively, there may not be a contraction mapping the former to the latter. Our main result says that nevertheless their Rényi entropies may be compared.
Let T: ^n →^n be a 1-Lipschitz map. Then for every ^n-valued random vector X, s ≥ 0, we have
h_α(X + √(s)Z) ≥ h_α(T(X) + √(s)Z),
for all α∈ [0, ∞].
Suppose T_t is a continuously contracting family of maps starting in a set K ⊂^n (that is, T_0 is defined on K) with smooth trajectories, and X is a K-valued random vector. Then the stronger conclusion holds: for any convex function ϕ: [0, ∞) → ℝ satisfying ϕ(0) = 0,
∫ ϕ( f_{T_t(X) + √(s)Z} ) dx
is monotonically increasing in t.
Continuous contractions induce totally-ordered curves with respect to a majorisation order <cit.>. The second part of our result says that such curves continue to be totally-ordered under the action of Gaussian convolution.
Note that the main theorem completely answers <cit.> for Gaussian noise. Moreover, as explained in <cit.>, a result such as Theorem <ref> for the uniform distribution on a ball rather than the Gaussian would immediately yield the Kneser–Poulsen conjecture.
Examples satisfying the hypothesis for the second part of Theorem <ref> appear quite naturally in probability theory. The (reverse) Ornstein–Uhlenbeck flow on strongly log-concave measures <cit.>, the (reverse) heat flow for log-concave measures <cit.>, and the maps inducing displacement interpolation when the final transport map is 1-Lipschitz— are all examples of continuous contractions.
Of course, the information-theoretic analog of the Kneser–Poulsen conjecture is the special case α=1.
For every 1-Lipschitz map T: ^n →^n and random vector X, we have h(X + √(s)Z) ≥ h(T(X) + √(s)Z).
Perhaps from an adjacent viewpoint, the Kneser–Poulsen theorem for information transmission would be the following statement.
Suppose Alice wants to communicate with Bob using the alphabet K = { x_1 , ⋯ , x_k} over a noisy channel with additive white Gaussian noise: Bob receives the random point x + Z when Alice sends x. The optimal rate at which information can be reliably transmitted across this noisy channel cannot be improved by bringing points in K pairwise closer.
In information theory, this optimal rate is called the channel capacity 𝒞 (see <cit.> for a quick mathematical introduction). Shannon, in his landmark work <cit.>, showed that this quantity has a clean mathematical expression which translates to the following in the setting of Corollary <ref>:
𝒞 = sup_X I (X; X + Z),
where the supremum is over all random vectors taking values in K, and the mutual information I(X; X + Z) = h(X+Z) - h(X+Z | X) = h(X+Z) - h(Z) measures the amount of information shared between the “input”X and the “output” X + Z. Theorem <ref> shows I (X; X + Z) ≥ I(T(X); T(X)+Z) thereby proving Corollary <ref> in a pointwise-sense. We suspect that Corollary <ref> may hold in more generality, for every radially-symmetric log-concave noise W instead of the Gaussian Z. It is possible to approach the Kneser–Poulsen conjecture directly based on Rényi-generalisations of channel capacity, but this will be pursued elsewhere.
§.§ Plan of proof of Theorem <ref>
We first prove the second part using a mass-transport argument. Given a curve {μ_t}_t ∈ [0,1] of probability measures and a velocity-field v_t compatible with it, in Proposition <ref> we obtain a formula for a velocity-field ṽ_t compatible with the noise-perturbed curve {μ_t⋆ν}_t ∈ [0,1], where ν is any measure with density. In Section <ref> we show, under the assumption ν = γ is the standard Gaussian measure, that if T_t is continuously contracting, μ_0 any probability measure, μ_t = T_t_#μ_0, then ∇·ṽ_t ≤ 0. Proposition <ref> allows us to deduce the second part of Theorem <ref> from this divergence condition. The first part elegantly follows from the second part, using the same old trick of extending the phase space. Only this time, thanks to the tensorisation properties of both the Gaussian measure and Rényi entropies, the argument goes through in all dimensions effortlessly.
§.§ Acknowledgements
We would like to express our heartfelt gratitude to Irfan Alam, Serhii Myroshnychenko, and Oscar Zatarain-Vera, for many enriching discussions around the Kneser–Poulsen theme. We are indebted to Boaz Slomka for introducing us to the Kneser–Poulsen conjecture.
§ PREPARATION: CURVES OF PROBABILITY MEASURES, PERTURBATIONS BY CONVOLUTIONS
Let {μ_t}_t ∈ [0,1] be a curve of probability measures in ^n. We say that a time-dependent velocity-field { v_t}_t ∈ [0,1] is compatible with {μ_t}_t ∈ [0,1] if the transport equation
∂_t μ_t + ∇·( v_t μ_t ) = 0
is satisfied in the weak sense. The latter equation means that
d/dt ∫ f dμ_t = ∫ ⟨ ∇f , v_t ⟩ dμ_t,
for all test functions f.
Given a one-parameter family of maps T_t, say such that the trajectory T_t(x) of each point x is smooth, paths of measures and compatible velocity-fields arise naturally in the following manner. Fix a probability measure μ_0, define μ_t = T_t_#μ_0. Define v_t as the trajectory field of T_t using the equation d/dt T_t(x) = v_t(T_t(x)). Then v_t is a time-dependent velocity field compatible with μ_t.
Velocity-fields v_t corresponding to continuous contractions have the monotonicity property ⟨ v_t(x) - v_t (y) , x - y ⟩≤ 0 on the support of v_t.
Let x_0, y_0 be such that T_t(x_0) = x, T_t(y_0) = y, then
0 ≥ d/dt ‖ T_t(x_0) - T_t(y_0) ‖_2^2 = 2 ⟨ v_t(T_t(x_0)) - v_t(T_t(y_0)) , T_t(x_0) - T_t(y_0) ⟩
= 2 ⟨ v_t(x) - v_t(y) , x - y ⟩ .
Given a curve of probability measures, the dynamics of its noise-perturbation is described below.
Let {μ_t} be a curve of probability measures and v_t a time-dependent velocity-field compatible with it. Suppose ν is a measure having smooth density. Then, a time-dependent velocity field compatible with the curve μ̃_̃t̃ = μ_t⋆ν is given by the conditional expectation
ṽ_t(x) = 𝔼[ v_t(X_t) | X_t + Y = x ],
where X_t∼μ_t, Y ∼ν is independent of the X_t.
Let f be a test function. Suppose ν has density g with respect to the Lebesgue measure. Then,
d/dt ∫ f(x) d(μ_t ⋆ ν)(x)
= d/dt ∫ [ ∫ f(x) g(x-y) dx ] dμ_t(y)
= ∫ ⟨ ∇_y ( ∫ f(x) g(x-y) dx ) , v_t(y) ⟩ dμ_t(y)
= ∫ ⟨ - ∫ f(x) (∇g)(x-y) dx , v_t(y) ⟩ dμ_t(y)
= ∫ ⟨ ∫ (∇f)(x) g(x-y) dx , v_t(y) ⟩ dμ_t(y)
= ∫ ⟨ ∇f(x) , ∫ v_t(y) g(x-y) dμ_t(y) ⟩ dx
= ∫ ⟨ ∇f(x) , ( ∫ v_t(y) g(x-y) dμ_t(y) ) / (μ_t ⋆ g)(x) ⟩ (μ_t ⋆ g)(x) dx
= ∫ ⟨ ∇f(x) , 𝔼[ v_t(X_t) | X_t + Y = x ] ⟩ d(μ_t ⋆ ν)(x).
Thus, the continuity equation ∂_t μ̃_t + ∇·( ṽ_t μ̃_t ) = 0 is verified.
We need one more ingredient to allow us to conclude entropic inequalities from compatible velocity-fields.
Let μ_t be a curve in the space of probability measures and v_t a time-dependent velocity field compatible with it. Suppose each μ_t has density ρ_t with respect to the Lebesgue measure. If ∇· v_t ≤ 0 for all t, then ∫ f(ρ_t) dx is monotone in t for every convex function f : [0, ∞) → ℝ such that f(0) = 0.
We have,
d/dt ∫ f(ρ_t) dx = ∫ f'(ρ_t) ρ̇_t dx = - ∫ f'(ρ_t) ∇·( v_t ρ_t ) dx
= ∫ ⟨ ρ_t ∇( f'(ρ_t) ) , v_t ⟩ dx.
Note that ρ_t ∇( f'(ρ_t) ) = ∇( ψ∘ρ_t ), where ψ(w) = w f'(w) - f(w). Thus,
∫ ⟨ ρ_t ∇( f'(ρ_t) ) , v_t ⟩ dx = ∫ ⟨ ∇( ψ∘ρ_t ) , v_t ⟩ dx = - ∫ ψ(ρ_t) ∇· v_t dx.
On the other hand, since f is convex with f(0) = 0, the derivative f'(ρ_t) dominates the secant slope from 0, so
ψ(ρ_t) = ρ_t f'(ρ_t) - f(ρ_t) = ρ_t ( f'(ρ_t) - ( f(ρ_t) - f(0) ) / ( ρ_t - 0 ) ) ≥ 0.
This shows that d/dt ∫ f(ρ_t) dx ≥ 0 if ∇· v_t ≤ 0.
§ PROOF OF THEOREM <REF>
Let T_t be a family of continuously contracting maps in ^n having smooth trajectories. Call the corresponding velocity field v_t. Now, let μ_0 be any probability measure and μ_t := T_t_#μ_0. Consider μ̃_t = μ_t ⋆ ν, where dν = g dx = e^{-V} dx. We are interested in the case when ν is the distribution of √(s)Z, and thus g = C_s e^{-‖x‖_2^2/2s}, ∇g(x) = - g(x) ∇V(x) = - g(x) x/s. We shall write the proof for s=1; the case of a general s works in exactly the same manner.
Recall that, by Proposition <ref>, a velocity-field compatible with μ̃_̃t̃ is given by
ṽ_t(x) = ∫ v_t(y) g(x - y) dμ_t(y) / ∫ g(x - y) dμ_t(y).
Hence,
∇ṽ_t(x) = ( ∫ v_t(y) ⊗ ∇_x( g(x - y) ) dμ_t(y) ) / ( ∫ g(x - y) dμ_t(y) )
- ( ∫ ∇_x( g(x - y) ) dμ_t(y) ⊗ ∫ v_t(y) g(x - y) dμ_t(y) ) / ( ∫ g(x - y) dμ_t(y) )^2
= - ( ∫ v_t(y) ⊗ ∇V(x - y) g(x - y) dμ_t(y) ) / ( ∫ g(x - y) dμ_t(y) )
+ ( ∫ ∇V(x - y) g(x - y) dμ_t(y) ⊗ ∫ v_t(y) g(x - y) dμ_t(y) ) / ( ∫ g(x - y) dμ_t(y) )^2.
Let Y = Y_{t,x} be a random vector with density f_x(y) = g(x-y) / ∫ g(x-z) dμ_t(z) with respect to μ_t. Then, by taking the trace,
∇·ṽ_t(x)
= - 𝔼⟨ v_t(Y) , ∇V(x - Y) ⟩ + ⟨ 𝔼[∇V(x - Y)] , 𝔼[v_t(Y)] ⟩
= - ⟨ 𝔼[v_t(Y)] , x ⟩ + 𝔼⟨ v_t(Y) , Y ⟩ + ⟨ x , 𝔼[v_t(Y)] ⟩ - ⟨ 𝔼[Y] , 𝔼[v_t(Y)] ⟩
= 𝔼⟨ v_t(Y) , Y ⟩ - ⟨ 𝔼[Y] , 𝔼[v_t(Y)] ⟩
= 𝔼⟨ v_t(Y) - 𝔼[v_t(Y)] , Y - 𝔼[Y] ⟩.
Now, since 𝔼(Y - 𝔼[Y]) = 0, the constant 𝔼[v_t(Y)] can be replaced with any constant of choice. Extending v_t to 𝔼[Y] if necessary (by a theorem of Minty <cit.>), we choose the constant to be v_t(𝔼[Y]) so that
∇·ṽ_t(x) = 𝔼⟨ v_t(Y) - v_t(𝔼[Y]) , Y - 𝔼[Y] ⟩ ≤ 0.
This, by an application of Proposition <ref>, establishes the second part of Theorem <ref>.
For the first part, suppose an arbitrary 1-Lipschitz map T: ^n →^n is given. Consider the continuous contraction in ^2n defined by the trajectories S_t (x) = (√(1-t)x, √(t)T(x)). The standard Gaussian in ^2n can be written as Z = (Z_1, Z_2), where Z_1, Z_2 are independent ^n-valued standard Gaussians. For any ^n-valued random vector X, we have
h_α (X + Z_1) + h_α(Z_2) = h_α(X + Z_1, Z_2)
=h_α ((X,0) + Z) ≥ h_α (S_1(X,0) + Z)
= h_α((0, T(X)) + (Z_1, Z_2))
=h_α (Z_1, T(X) + Z_2)
= h_α(Z_1) + h_α (T(X) + Z_1).
Cancelling off the Rényi entropy of the standard Gaussian on both sides completes the proof.
When n=1, since “divergence = derivative”, much stronger conclusions can be drawn by looking at the continuous contractions-part of the proof:
* The proof works for arbitrary log-concave probability densities g (that is, whenever V is convex). This requires an application of Chebyshev's other inequality <cit.> after the first equality in Equation <ref>.
* The transport maps induced by the velocity fields ṽ_t are contractions.
These one-dimensional results are already known albeit with a different proof <cit.>.
§ RELATED MATTERS: COSTA'S EPI FOR CONTINUOUS CONTRACTIONS
The entropy power inequality is commonly expressed in terms of the entropy power, which is defined by
N(X) = ( 1/(2π e) ) e^{2 h(X)/n},
for an ^n-valued random vector. Analogous to the volume-radius in convex geometry <cit.>, N(X) is the variance of the Gaussian having same entropy as X.
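For example (our check, not in the original text), if X is an isotropic Gaussian with covariance σ²I_n, then h(X) = (n/2)log(2πeσ²) and the definition gives

```latex
\[
  N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n}\cdot\frac{n}{2}\log(2\pi e\sigma^{2})} = \sigma^{2},
\]
```

so the entropy power of an isotropic Gaussian is exactly its per-coordinate variance.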
|
http://arxiv.org/abs/2409.03517v1 | 20240905133005 | On constructing zeta elements for Shimura varieties | [
"Syed Waqar Ali Shah"
] | math.NT | [
"math.NT",
"math.RT",
"11R23, 11F67, 11F70 (Primary) 20E42, 20G25, 22D99 (Secondary)"
] |
§ ABSTRACT We present a novel axiomatic framework for establishing horizontal norm relations in Euler systems that are built from pushforwards of classes in the motivic cohomology of Shimura varieties. This framework is uniformly applicable to the Euler systems of both algebraic cycles and Eisenstein classes. It also applies to non-spherical pairs of groups that fail to satisfy a local multiplicity one hypothesis, and thus lie beyond the reach of existing methods.
A key application of this work is the construction of an Euler system for the spinor Galois representations arising in the cohomology of Siegel modular varieties of genus three, which is undertaken in two companion articles.
§ INTRODUCTION
Euler systems are objects of an arithmetic-algebraic-geometric nature that are designed to provide a handle on Selmer groups of p-adic Galois representations
and play a crucial role in linking these arithmetic groups to special values of L-functions. Though Euler systems are Galois theoretic objects, the tools involved in their construction are often of an automorphic nature. A typical setup of its kind
starts by identifying the Galois representation in the cohomology of a Shimura variety of a reductive group G. The Galois representation is
required to be automorphic, i.e., its L-function
matches that of a corresponding automorphic representation. The class at the bottom of such a hypothetical system is taken to be the pushforward of a special element that lives in the motives of a sub-Shimura variety arising from a reductive subgroup H of G. Two common types of special elements are
fundamental cycles and Eisenstein classes, and their respective Euler systems are often distinguished based on this dichotomy. The desire to construct an Euler system via such pushforwards is also motivated by a corresponding period integral (p-adic or complex) which provides a link between L-values and the bottom class of this hypothetical system. For this reason, the classes in an Euler system are sometimes also referred to as `zeta elements' (<cit.>). Once such a setup is identified, the problem of constructing the deeper (horizontal) layers of zeta elements is often tackled by judiciously picking special elements of the same type in the motives at deeper levels of H and pushing them into motives of G along conjugated embeddings.
This turns out to be a rather challenging problem in general. At the moment, there is no known general method that illuminates what levels and conjugated embeddings would yield the desired norm relations in any particular setting.
The primary goal of this article is to describe an axiomatic machinery that specifies a precise criteria for constructing the deeper layers of zeta elements in the aforementioned settings. It is also designed to handle the potential failure of the so-called multiplicity one hypothesis, which is a crucial requirement for the technique of local zeta integrals introduced in <cit.>. This failure does arise in practice,
most notably in the situation studied in <cit.>, where the relevant period integral unfolds to non-unique models. An immediate new application of our work is the construction of a full Euler system for GSp_6 envisioned in loc.cit., which is carried out in <cit.>, <cit.> using the framework presented here. Other forthcoming applications include <cit.> and <cit.>.
§.§ Main results
To describe our main results, it is convenient to work in the abstract setup of locally profinite groups as used in <cit.> (cf. <cit.>).
Let H = ∏_v ∈ I' H_v, G = ∏_v ∈ I'G_v,
be respectively the restricted products of locally profinite groups H_v, G_v taken with respect to compact open subgroups U_v⊂ H_v, K_v ⊂ G_v. We assume that H_v is a closed
subgroup of G_v and that U_v = H ∩ K_v.
Let Υ_H, Υ_G be suitable non-empty collections of compact open subgroups of H, G.
Let
N : Υ_H→_p-Mod, M : Υ_G→_p-Mod
be mappings that associate to each compact open subgroup a _p-module in a functorial manner mimicking the abstract properties of cohomology of Shimura varieties over varying levels. More precisely, it is assumed that for each K_1⊂ K_2 in Υ_G and g ∈ G, there exist three maps; ^* : M(K_2) → M(K_1) referred to as restriction, _* : M(K_1 ) → M(K_2) referred to as induction and [g]^* : M(K_1) → M(gK_1g^-1) referred to as conjugation and that these maps satisfy certain compatibility conditions. Similarly for N. We also require that there are maps ι_* : N(K_1∩ H) → M(K_1) for all K_1∈Υ_G.
These model the behaviour of pushforwards induced by embeddings of Shimura varieties.
Fix for each v ∈ I a compact open normal subgroup L_v of K_v. By K, L, U, we denote respectively the products of K_v, L_v, U_v over all v and assume that L, K ∈Υ_G and U ∈Υ_H.
Let 𝒩 denote the set of all finite subsets of I. For ν∈𝒩, we denote G_ν = ∏_v ∈ν G_v, G^ν = G / G_ν and use similar notations for H, U, K, L. Set K[ν] = K^ν L_ν for ν∈𝒩. Thus K[μ] ⊂ K[ν] whenever ν⊂μ and we denote by _μ,ν,* the induction map M(K[μ]) → M(K[ν]).
For each v ∈ I, let ℌ_v be a finite _p-linear combination
of characteristic functions of double cosets in K_v\ G_v / K_v. Then for any pair of disjoint μ, ν∈𝒩, there are linear maps
ℌ_μ,* : M(K[ν]) → M( K [ ν ] )
given essentially by sums of tensor products of Hecke correspondences in ℌ_v for v ∈μ. For v ∈ I, let g_v,1, …, g_v,r_v∈ G be an arbitrary but fixed set of representatives for
H_v\ H_v·( ℌ_v) /K_v .
For i = 1, …, r_v, let H_v, i : = H_v∩ g_v,i K_v g_v,i^-1, V_v,i = H_v∩ g_v,i L_v g_v,i ^-1⊂ H_v,i and let 𝔥_v,i = 𝔥_g_v , i be the function h ↦ℌ_v((-)g_v,i) for h ∈ H_v. Then we have induced _p-linear maps 𝔥_v,i,* : N(U) → N(H_v,i U^v ) for each i given by Hecke correspondences. Given x_U∈ N(U), our goal is to be able to construct classes y_ν∈
M (K[ν]) such that y_∅ = ι_*(x_U) ∈ M(K) and
ℌ_μ∖ν , * ( y_ν ) = _μ , ν, * ( y_μ )
for all ν , μ∈𝒩 satisfying ν⊂μ. A classical example of norm relation in this format is <cit.>.
See
<cit.> for an exposition of Heegner point scenario in a similar spirit.
[Theorem <ref>] Let x_U∈ N(U). Assume that N equals a restricted tensor product ⊗'_v N_v with respect to x_U_v∈ N_v ( U _ v ) (see below) and x_U = ⊗'_v x_U_v. Suppose that for each v ∈ I and 1 ≤ i ≤ r_v, there exists x_v, i∈ N_v(V_v,i) such that
𝔥_v,i , * (x_U_v) = _V_v,i, H_v,i,*(x_v,i)
Then there exist classes y_ν∈ M
(K[ν]) for all ν∈𝒩 such that y_∅ = ι_*(x_U) and (<ref>) holds for all ν , μ∈𝒩 satisfying
ν⊂μ.
That N = ⊗'_v N_v
means the following.
For each v ∈ I, there are functorial _p-Mod valued mappings N_v on compact open subgroups of H _v
and there are elements x_U_v∈ N_v(U_v) such that for any compact open subgroup W = ∏_v W_v∈Υ_H that satisfies W_v = U_v for all but finitely many v, N(W) equals the restricted tensor product ⊗'_v N_v(W_v)
taken with respect to x_U_v.
In the case where special elements are taken to be fundamental cycles, the situation can be modelled by taking N(W) = _p· 1_W where 1_W denotes the fundamental class of the Shimura variety of level W ∈Υ_H. Then N is trivially a restricted tensor product.
In this case, Theorem <ref> reduces to verifying certain congruence conditions between degrees of Hecke operators. Here we define
the degree of a double coset operator T = (W h W') to be | Wh W'/ W' | and that of T_* to be | W\ W h W' |, and extend this notion to linear combinations of such operators in the obvious way. Set d_v,i = [ H_v,i : V_v,i ]. Note that d_v,i divides the index [K_v : L_v].
[Corollary <ref>] Let N be as above and x_U = 1_U∈ N(U). If for each v ∈ I and 1 ≤ i ≤ r_v, the degree of 𝔥_v,i,* lies in d_v,i·_p,
there exist classes y_ν∈ M( K [ν] ) for each ν∈𝒩 such that y_∅ = ι_*(x_U) and (<ref>) is satisfied for all ν⊂μ in 𝒩.
There is a generalization of such congruence criteria that applies
to Eisenstein classes.
For each v ∈ I, let X_v be a locally compact Hausdorff totally disconnected topological space endowed with a continuous right action X_v× H_v→ X_v. Let Y_v⊂ X_v be a compact open subset invariant under U_v. Let X = ∏_v ' X_v be the restricted topological product of X_v taken with respect to Y_v. Then we get a smooth left action of H on the so-called Schwartz space 𝒮_X of all locally constant compactly supported _p-valued functions on X. For our next result, we assume that for each W of the form ∏_v ∈ I W_v∈Υ_G, we have N(W) equals 𝒮_X(W), the _p-module of all W-invariant functions in 𝒮_X. Then N is a restricted tensor product of N_v with respect to ϕ_U_v = (Y_v) ∈ N_v(U_v) where N_v(W_v) for a compact open subgroup W_v⊂ H_v is the set of all W_v-invariant Schwartz functions on X_v.
Given compact open subgroups V_v, W_v⊂ H_v such that V_v⊂ W_v
and x ∈ X, we denote by V_v(x,W_v)
the subgroup of W_v generated by V_v and the stabilizer Stab_W_v(x) of x in W_v.
[Theorem
<ref>] Let ϕ_U = ⊗ ' ϕ_U_v∈ N(U) = 𝒮_X(U).
Suppose that for each v ∈ I and 1 ≤ i ≤ r_v,
(𝔥_v,i,*( ϕ_U_v ))(x) ∈ [ V_v,i(x, H_v,i) : V_v,i ] ·_p
for all x ∈ supp( 𝔥_v,i,*(ϕ_U_v) ). Then there exist y_ν∈ M(K[ν]) for all ν∈𝒩 such that y_∅ = ι_* ( ϕ_U ) and (<ref>) is satisfied for all ν⊂μ in 𝒩.
If X is reduced to a point {pt},
one recovers Theorem
<ref> since for all v and i, V_v,i(pt,H_v,i) = H_v,i and the action of 𝔥_v,i,* is via multiplication by its degree.
While it is conceivable to prove our main result in a more direct fashion (see Remark <ref>), we have chosen to develop our approach from the point of view of specifying a “best possible test vector" that yields a solution to (<ref>). Let us explain this. It is possible to recast the relations
(<ref>) in terms of intertwining maps of smooth representations of H × G by passing to the inductive limit over all levels. More precisely, let *NN,
*NM denote the inductive limits of N(V) ⊗__p_p, M(K') ⊗__p_p over all levels V ∈Υ_H, K' ∈Υ_G with respect to restrictions.
These are naturally smooth representations of H, G respectively. Let ℋ( G ) denote the _p-valued Hecke algebra of G with respect to a suitable Haar measure on G. We can construct a map
ι̂_* : *NN ⊗__p ℋ( G ) → *NM
of H × G representations with suitably
defined actions on the source and the target.
One can then take an arbitrary finite sum of twisted pushforwards from N to M of classes at arbitrary `local' levels of H_v and ask whether the element given by this sum satisfies the norm relation (<ref>), say for ν = ∅, μ = { v }.
In terms of the map ι̂_*, this becomes a problem of specifying a “test vector” in N⊗__pℋ(G) that satisfies certain integrality properties and whose image under ι̂_* equals ι̂ _*(x_U⊗ℌ_v ). This leads to a notion of integral test vector given for instance in <cit.>, analogues of which also appear in several other recent works. If such an integral test vector lies in the H_v-coinvariant class of ι̂_*(x_U⊗ℌ_v), we refer to it as an abstract zeta element.
[Theorem <ref>] An abstract zeta element at v exists if and only if the norm relations (<ref>) hold for
1 ≤ i ≤ r_v up to _p-torsion.
This result connects our approach to the one pursued in <cit.> (cf. <cit.>), which seeks such integral test vectors by means of local zeta integrals. However, it provides no mechanism on how one may find them in the first place. Our approach, on the other hand, pinpoints an essentially unique test vector in terms of the Hecke polynomial to check the norm relations with. Another key advantage of our approach over theirs is that ours is inherently integral, as no volume factor normalizations show up in the criteria above. Crucially, it also has broader applicability, since it remains effective even in cases where the so-called ‘multiplicity one’ hypothesis fails to hold.
§.§ Auxiliary results
The execution of our approach hinges on explicit description of the Hecke polynomials ℌ_v and their twisted restrictions 𝔥_v,i. This requires among other things a description of left or right cosets contained in double cosets of parahoric subgroups. It is possible to exploit the affine cell decompositions of flag varieties to specify a “geometric" set of representatives which makes double coset manipulations a much more pleasant task. In <cit.>, Lansky derives such a decomposition recipe for double cosets of parahoric subgroups of split Chevalley groups by studying the structure of the underlying Iwahori Hecke algebras. Though the class of groups we are interested in is not covered by Lansky's results, the ideas therein are completely adaptable.
We generalize Lansky's recipe by axiomatizing it in the language of generalized Tits systems as follows.
Let 𝒯 = (G,B,N) be a Tits system and let φ : G →G̃ be a (B,N)-adapted inclusion. Let W = N / B be the Weyl group of 𝒯 and S the generating set of reflections in W determined by 𝒯. We assume that B s B / B is finite for each s ∈ S. Then B w B / B is finite for each w ∈ W. For each s ∈ S, let _s⊂ G denote a set of representatives for BsB/B. For X ⊂ S, let W_X denote the subgroup of W generated by X and K_X = B W_X B ⊂ G the corresponding parabolic subgroup. Let B̂ be the normalizer of B in G̃, Ω = B̂ / B and W̃ = W ⋊Ω be the extended Weyl group.
For any X, Y ⊂ S, let [W_X\W̃ / W_Y ] denote the set of all w ∈W̃ whose length among elements of W_X w W_Y is minimal. For a reduced decomposition w = s_1… s_mρ where s_i∈ W, ρ∈Ω, let _w = _s_1×…_s_m, ρ̃∈B̂ a lift of ρ and 𝒳_w : _w→ G the map which sends κ⃗ = (κ_1, …, κ_m) ∈_w to the product κ_1…κ_mρ̃.
Then the image of 𝒳_w modulo B only depends on the element w.
[Theorem <ref>]
For any X , Y ⊂ S and w ∈ [ W_X\W̃ / W_Y ], we have
K_X w K_Y = ⨆_τ ⨆_{κ⃗ ∈ _τ w} 𝒳_τ w ( κ⃗ ) K_Y
where τ ∈ W_X runs over minimal length representatives of W_X / ( W_X ∩ w W_Y w^-1 ).
The images of the maps 𝒳_w defined above can be viewed as
affine generalizations of the more familiar Schubert cells one encounters in the geometry of flag varieties. See <ref> for a discussion.
In practice, the recipe is applied by taking B to be the Iwahori subgroup of the reductive group at hand, W the affine Weyl group and W̃ the Iwahori Weyl group. The recursive nature of Schubert cells 𝒳_w proves to be particularly advantageous for computing the twsited restrictions of Hecke polynomials.
§.§ Other approaches
The framework presented here focuses on ‘pushforward-style’ constructions in cohomology, motivated by period integrals where cusp forms on a larger group are integrated against (some gadget on) a smaller group. Recently, a new ‘pullback-style’ approach has been proposed by Skinner and Vincentelli (<cit.>), opening up the possibility of using “potentially motivic" classes such as the Siegel Eisenstein class constructed in <cit.>.
Another approach, developed by Eric Urban, uses congruences between Eisenstein series to intrinsically construct Euler system classes in Galois cohomology <cit.>, <cit.>. Both of these approaches differ fundamentally from ours and do not seem applicable to the various settings that can be explored using our method, e.g., <cit.>.
For Euler systems of fundamental cycles, an earlier approach developed by Cornut and his collaborators also aims to prove norm relations in the style of (<ref>). This approach involves studying the Hecke action on the corresponding Bruhat-Tits buildings, e.g., see <cit.>, <cit.>, <cit.>. However, it was observed in <cit.> that the Hecke action used in these works is not compatible with the geometric one. Cornut has informed us however that this issue can be resolved. It is our expectation that insights from studying actions on Bruhat-Tits buildings may provide a more conceptual explanation for computations in our own work.
§.§ Organization
This article is divided into two parts, where the first develops our approach abstractly and the second executes it in concrete situations. We briefly outline the contents of each section within both.
In <ref>, we revisit and expand upon the abstract formalism of functors developed in <cit.>. Our motivation here is partly to develop a framework for Hecke operators that works well in the absence of Galois descent. We prove several basic results that normally require a passage to inductive limits. We also introduce the notion of mixed Hecke correspondences that allow us to relate double coset operators of a locally profinite group to those of a closed subgroup. These play a crucial role in establishing the aforementioned norm relation criteria.
We end the section by outlining how the formalism applies to Shimura varieties even in the absence of Milne's (SV5) axiom, which was assumed in <cit.>.
In <ref>, we develop our machinery from the point of view of abstract zeta elements. We strive for maximum possible generality in defining these and establish a structural result in Theorem <ref>. This
allows us to focus attention on a specific type of such elements: the one given by twisted restrictions of Hecke polynomials. This comes with the added benefit of eliminating all normalizations by volume factors, giving a highly canonical criteria that we are able to upgrade in <ref> to finite levels. To handle Euler systems of Eisenstein classes, we have included an axiomatic study of traces in arbitrary Schwartz spaces of totally disconnected spaces in <ref>.
Finally, a toy example of CM points on modular curves is included to illustrate our machinery in a simple case.
In <ref>, we collect several facts about Satake transforms and Hecke polynomials. Everything here can be considered well-known to experts and we make no claim of originality.
We have however chosen to include
proofs of a few results, partly because we could not find a satisfactory reference that covers the generality we wish to work in and partly because conventions seem to differ from one reference to another. Some of these results
play a crucial role in our computations. A few results are included (without proofs) to provide a check on our computations. For instance, certain congruence properties of Kazhdan-Lusztig polynomials are not necessary for the computations done in this article but are invoked in <cit.>.
In <ref>, we develop from scratch another important ingredient of our approach. After justifying all the necessary facts we need on (generalized) Tits systems, we prove a recipe for decomposing certain double cosets following the method of Lansky. We briefly review some facts from Bruhat-Tits theory that allow us to apply this recipe in practice. The results of this section also complement the content of <ref> in the sense that we can often use the decomposition recipe
to efficiently invert various Satake transforms for Hecke polynomials, though we note that this step can often be skipped.
Part II of this article is devoted to examples. Its primary purpose is to provide concrete evidence that the abstract criteria proposed in Theorem <ref> does hold in practice. We study the split case of unitary Shimura varieties GU_1,2m-1 for arbitrary m in <ref> and the inert case for m = 2 in <ref>. The inert case for general m is the subject of a later work. In both these scenarios, we show that along anticyclotomic towers, the criteria of Theorem <ref> holds for pushforwards of fundamental cycles of products of two sub-Shimura varieties. The split case of our results for these Shimura varieties strengthens <cit.> and also applies to certain CM versions of these varieties. An interesting observation in the split case is that our criteria fails to hold if one considers the full abelian tower (Remark <ref>). This is consistent with the well-documented observation that Heegner points do not “go up" cyclotomic extensions. Another interesting observation is that the degrees of the various restrictions of the Hecke polynomials turn out to be q-analogues of the binomial expansion (1-1)^k for k a positive integer. This alludes to an intimate relationship between twisted restrictions of Hecke polynomials and factors of Satake polynomials.
In <ref>, we study the case of genus two Siegel modular varieties. Here we establish that the criteria of Theorem <ref> holds for pushforwards of cup products of Eisenstein classes for modular curves. This yields the “ideal" version of the horizontal norm relations alluded to in <cit.>. An interesting observation here is that of the two twisted restrictions of the spinor Hecke polynomial, one is essentially the standard _2-Hecke polynomial for a diagonally embedded copy of _2 and the corresponding trace computation is reminiscent of <cit.> for Kato's Euler system.
§.§ Acknowledgements
This article is based on the author’s thesis work done at Harvard University. The author is sincerely grateful to Barry Mazur for all his advice and encouragement; Christophe Cornut for valuable feedback; Antonio Cauchi for bringing the author’s attention to several new applications of this work and for carefully explaining an unfolding calculation; and Wei Zhang for several useful conversations. Many ideas of this article have their roots in an earlier joint work, and the author is thankful to his collaborator Andrew Graham for their continued discussions. The softwares MATLAB® and SageMath proved particularly helpful in carrying out and verifying numerous computations that arose in the course of this project. A part of this work was also completed when the author was affiliated with the University of California, Santa Barbara. The author extends his sincere gratitude to Francesc Castella, Zheng Liu, and Adebisi Agboola for their mentorship and support.
Part 1. General theory
§ PRELIMINARIES
In this section, we recall and expand upon the abstract formalism of functors on compact open subgroups of locally profinite groups as introduced in <cit.> (which in turn was inspired by <cit.>) which we will use in <ref> to study norm relation problems encountered in the settings of Shimura varieties. We note that a few conditions of <cit.> have been relaxed for generality while others related to vertical norm relations have been dropped completely since they do not pertain to the questions addressed in this article (though they are needed again in <cit.>). Note also that the terminology in a few places has been modified to match what seems to be the standard in pre-existing literature on such functors e.g., <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. The material in this section can however be read independently of all of these sources.
One of the reasons for developing this formalism further (besides convenience and generality) is to address the failure of Galois descent in the cohomology of Shimura varieties with integral coefficients. This failure in particular means that the usual approach to Hecke operators as seen in the theory of smooth representations is no longer available as cohomology at finite level can no longer be recovered after passage to limit by taking invariants. See <cit.> for a discussion of a similar issue that arises when defining cohomology with support conditions.
In our development, the role of Galois descent is primarily played by what is known in the literature as Mackey's decomposition formula, which was used in <cit.> to study vertical norm relations. This formula serves as a replacement for Galois descent and allows us to derive many results that otherwise hold when the coefficients are taken in a field. One may thus view this formalism as an integral counterpart of the ordinary theory of abstract smooth representations.
§.§ RIC functors
For G a locally profinite group, let Υ = Υ_G be a non-empty collection of compact open subgroups of G satisfying the following conditions
(T1) For all g ∈ G, K ∈Υ, gKg^-1∈Υ.
(T2) For all K, L ∈Υ, there exists a K ' ∈Υ such that K ' ◃ K, K' ⊂ L.
(T3) For all K, L ∈Υ, K ∩ L ∈Υ.
Clearly the set of all compact open subgroups of G satisfies these properties. Let ℱ be any collection of compact open subgroups of G. We let Υ(ℱ ) denote the collection of all compact open subgroups of G that are obtained as finite intersections of conjugates of elements in ℱ. We refer to Υ(ℱ) as the collection generated by ℱ.
For any ℱ as above, the collection Υ( ℱ ) satisfies (T1)-(T3). In particular, any collection that satisfies (T1)-(T3) and contains ℱ must contain Υ(ℱ ).
Axioms (T1) and (T3) are automatic for Υ(ℱ) and we need to verify (T2). Let K , L ∈Υ (ℱ ). Pick a (necessarily finite) decomposition K L = ⊔_γγ L and define K ' : = K ∩⋂ _γγ L γ^-1∈Υ(ℱ). Then K ' ◃ K and K' ⊂ L, and so (T2) is satisfied.
To any Υ as above, we associate a category of compact opens 𝒫(G) = 𝒫(G, Υ) whose objects are elements of Υ and whose morphisms are given by Hom_𝒫(G) ( L, K ) = { g ∈ G | g^-1 L g ⊂ K } for L , K ∈Υ with
composition given by
(L K ) ∘ ( L ' L ) = (L' L K ) = ( L ' K ) .
A morphism (L K ) will be denoted by [g]_L,K, and if e denotes the identity of G, the inclusion (LK) will also be denoted by _L,K. Throughout this section,
let R denote a commutative ring with identity.
An RIC functor M on (G, Υ ) valued in R-Mod is a pair of covariant functors
M ^ * : 𝒫(G) ^ op→ R-Mod M _ * : 𝒫(G) → R-Mod
satisfying the following three conditions:
(C1) M^*(K) = M_*(K) for all K ∈Υ. We will denote the common R-module by M(K).
(C2) For all L, K ∈Υ such that g ^-1 L g = K,
(L K)^* = ( K L )_*∈Hom(M(K),M(L)) .
Here, for ϕ∈𝒫 ( G ) a morphism, we denote ϕ_* : = M_*(ϕ), ϕ^* : = M^*(ϕ).
(C3) [ γ ]_K,K,* : M(K) → M(K) is the identity map for all K ∈Υ, γ∈ K.
We refer to the maps ϕ^* (resp., ϕ_*) in (C2) above as the pullbacks (resp., pushforwards) induced by ϕ. If moreover ϕ = [e], we also refer to ϕ^* = ^* (resp., ϕ_* = _*) as restrictions (resp., inductions). We say that a functor M is -torsion free if M(K) is -torsion free for all K ∈Υ. Moreover, we say that M is
(G) Galois if for all L , K ∈Υ, L ◃ K,
_L,K^* : M (K) M(L)^K/L .
Here the (left) action K/L × M(L) → M(L) is given by (γ, x) ↦ [ γ ]^*_L,L (x).
(Co) cohomological if for all L , K ∈Υ with L ⊂ K,
( L K ) _ * ∘ ( L K ) ^ * = [K:L] · ( K K )^* .
That is, the composition is multiplication by index [K:L] on M(K).
(M) Mackey
if for all K, L, L' ∈Υ with L ,L' ⊂ K, we have a commutative diagram
[column sep = large]
⊕_γ M(L_γ) [r,"∑_*"] M(L)
M(L') [r,"_*",swap] [u,"⊕ [γ]^*"] M(K) [u,"^*",swap]
where the direct sum in the top left corner is over a fixed choice of coset representatives γ∈ K of the double quotient L \ K / L' and L_γ = L ∩γ L' γ^-1∈Υ. The condition is then satisfied by any such choice of representatives of L \ K / L '.
If M satisfies both (M) and (Co), we will say that M is CoMack. If S is an R-algebra, the mapping K ↦ M(K) ⊗ _R S is a S-valued RIC functor, which is cohomological or Mackey if M is so.
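The axiom (M) is modelled on Mackey's classical decomposition theorem, which we recall only to fix intuition (it is not used in what follows): for a finite group K with subgroups L, L' and a representation W of L', one has
Res^K_L Ind^K_L' W ≅⊕_γ∈ L \ K / L' Ind^L_L_γ ( γ W ) , L_γ = L ∩γ L' γ^-1 ,
where γ W denotes W viewed as a representation of L_γ through conjugation by γ. The diagram in (M) asserts the analogous compatibility between the pushforwards and pullbacks of M.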
In what follows, we will often say that M : G → R-Mod is a functor when we mean to say that M is a RIC functor on (G, Υ) and suppress Υ if it is clear from context.
The acronym RIC stands for restriction, induction, conjugation and the terminology is borrowed from <cit.>. Cf. <cit.> and <cit.>.
A morphism φ : N → M of RIC functors is a pair of natural transformations φ ^ * : N ^ * → M^ * , φ_* : N _*→ M _*
such that φ_* ( K ) = φ^* ( K ) for all K ∈Υ. We denote this common morphism by φ(K). The category of R-Mod valued RIC functors on (G,Υ) is denoted RIC_R(G, Υ) and the category of CoMack functors by CoMack_R(G, Υ).
We record some straightforward implications. Let M : 𝒫(G,Υ) → R-Mod be a RIC functor.
The functor M is Mackey if and if only if for all K, L, L' ∈Υ with L ,L' ⊂ K, we have a commutative diagram
[column sep = large] ⊕_ δ M ( L _ δ ' ) [r, "∑ [δ ]_*"]
M ( L )
M ( L' ) [r, "_*"] [u, "⊕ ^*"] M (K) [u,"^*", swap]
where the direct sum in the top left corner is over a fixed choice of coset representatives δ∈ K of L ' \ K / L and L_δ ' = L' ∩δ L δ ^-1∈Υ.
If γ∈ K runs over representatives of L \ K / L ', then δ = γ ^-1 runs over a set of representatives for L ' \ K / L. For each γ∈ K, we have a commutative diagram
M(L_δ ' ) [r, "[δ]_*"] M(L)
M(L') [ru, "_*" ] [r, "[γ]^* ", swap] M(L_γ ) [ru, swap, "_*"]
where δ = γ ^-1. Indeed, the two triangles obtained by sticking the arrow M(L_γ) M(L'_δ) in the diagram above are commutative. From this, it is straightforward to see that diagram (<ref>) commutes if and only if diagram (<ref>) does.
We will refer to the commutativity of the diagram (<ref>) as axiom (M').
We say that M has injective restrictions if _L,K ^ * : M ( K ) → M ( L ) are injective for all L , K ∈Υ, L ⊂ K.
Suppose M is either Galois or cohomological and -torsion free. Then M has injective restrictions.
Let L , K ∈Υ with L ⊂ K. If M is Galois, pick K ' such that K' ◃ K, K' ⊂ L (using axiom (T2)). Then _K', K^* = _K', L ^ * ∘_L, K ^ * : M ( K ) → M ( K ' ) is injective by definition which implies the same for _L,K ^ *. If M is cohomological, then _L,K,*∘_L,K^* = [K:L] which is injective if M(K) is -torsion free which again implies the same for _L,K^*.
Suppose M is Mackey. Let L, K ∈Υ with L ⊂ K and let K' ∈Υ be such that K' ◃ K and K' ⊂ L[such a K' exists by (T2)]. Then
_K' , K ^ * ∘ _ L , K , * =
∑_γ [γ]_K' , L ^* where γ runs over K / L.
Since K ' ◃ K and K' ⊂ L, the right multiplication action of K' on L \ K is trivial i.e. L \ K / K ' = L \ K. By axiom (M') obtained in Lemma <ref>, we see that
[column sep = large] ⊕_ δ M ( K'
) [r, " ∑[ δ]_*"]
M ( K ' )
M ( L ) [r, "_*"] [u, "⊕ ^*"] M (K) [u,"^*", swap]
where δ runs over L \ K. Since [δ]_K', K ' , * = [δ^-1]_K', K' ^ *, we may replace δ with δ = γ ^-1. Then γ∈ K runs over K / L as δ runs over L \ K and the claim follows.
Suppose R is a -algebra. Then M is Galois if it is CoMack.
For any L , K, Υ with L ◃ K, _L,K^* : M(K) → M(L) is injective by Lemma <ref>. If x ∈ M(L) is K-invariant, then _L,K^*∘_L,K,*(x) = ∑_γ∈ K / L [γ]^*_L,K (x) = [K:L] x by Lemma <ref> and so _L,K^* (y) = x if y = [K : L] ^-1_L,K,*(x) ∈ M(K) . Thus _L,K^* surjects onto M(L)^K/L.
It is clear how to define the direct sum and tensor product of functors on finitely many groups G_1 , …, G_n to obtain a functor on G_1×⋯× G_n. A more involved construction is that of restricted tensor products, which we elaborate on now. Say for the rest of this subsection only that G = ∏'_v ∈ I G_v is a restricted direct product of locally profinite groups G_v with respect to compact open subgroups K_v given for each v ∈ I. For ν a finite subset of I, we denote G_ν : = ∏_v ∈ν G_v, G^ν : = G/G_ν and similarly for K_ν, K^ν, and we set K : = ∏_v ∈ I K_v. For each v ∈ I, let Υ_v be a collection of compact open subgroups of G_v that satisfies (T1)-(T3) and which contains K_v. Let Υ_I⊂∏_v ∈ IΥ_v be the collection of all subgroups of the form L_ν K^ν where ν is a finite subset of I and L_ν∈∏_v ∈νΥ_v. Then Υ_I satisfies (T1)-(T3) and contains K. If L ∈Υ_I, we denote by L_v its component group at v.
Let N_v : 𝒫( G_v , Υ_ℓ ) → R-Mod be a RIC functor and let ϕ_K_v∈ N_v(K_v) for each v ∈ I. The restricted tensor product N = ⊗_v' N_v with respect to ϕ_K_v is the RIC functor M : 𝒫(G , Υ_I) → R-Mod given by L ↦⊗'_v N(L_v) where ⊗ '_v denotes the restricted tensor product of R-modules N(L_v) with respect to ϕ_K_v.
We elaborate on the definition above. Fix L ∈Υ_I and write L = L_ν K^ν. For each finite subset μ of I with μ⊃ν, denote N_μ : = ⊗_v ∈μ N_v (L_v ) the usual tensor product of R-modules. If μ_1⊂μ_2 are two such sets, there is an induced map N_μ_1→ N_μ_2 of R-modules that sends x ∈ N_μ_1 to x ⊗⊗_v ∈μ_2∖μ_1ϕ_K_v. Then N(L) = _μ N _μ where the inductive limit is over the directed set of all finite subsets μ of I that contain ν.
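As an illustration of the intended use (this example is not needed in what follows): for 𝐆 a reductive group over ℚ equipped with a model over ℤ, the group G = 𝐆(𝔸_f) is the restricted direct product of the groups G_v = 𝐆(ℚ_v) over the finite places v with respect to the compact open subgroups K_v = 𝐆(ℤ_v), and one may take each Υ_v to be the collection of all compact open subgroups of G_v. A family of RIC functors N_v equipped with distinguished vectors ϕ_K_v∈ N_v(K_v) then yields, by the above construction, an RIC functor on (G, Υ_I).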
§.§ Inductive Completions
Let (G, Υ) be as in <ref>. The category CoMack_R(G ,Υ) is closely related to the category of smooth G-representations. We show that
when R is a field and Υ is the collection of all compact open subgroups of G, there is an equivalence between the two. When R is not a field however, axiom (G) can fail and the former category requires a more careful treatment.
Let π be a left module over R[G]. We say that π is a smooth representation of G if for any x ∈π, there is a compact open
subgroup K ⊂ G such that x is fixed under the (left) action of K. A morphism of smooth representations is a R-linear map respecting the G-actions. The category of smooth representations of G is denoted SmthRep_R(G).
Suppose π∈SmthRep_R(G). For K ∈Υ, let M_π : G → R-Mod be the functor given by K ↦π^K. For g ∈ G and (L K ) ∈𝒫(G,Υ), let
[g]^* : M(K) →M(L) , x ↦ g · x ,    [g]_* : M(L) →M(K) , x ↦∑_γ∈ K / g^-1 L g γ g^-1· x .
Here, g · x ∈π in the mapping on the left above is indeed a well-defined element of M(L) as it is invariant under L ⊂ gKg^-1 and similar remarks apply to the expression on the right above. In particular, the map [e]^*_L,K : M(K) → M(L) is the inclusion π^K↪π^L. The following is then straightforward.
The mapping M_π is a RIC functor that is CoMack and Galois.
We refer to M_π as the RIC functor associated to π. If π = R is the trivial representation, we denote the associated functor by M_triv and refer to it as the trivial functor.
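As a simple consistency check (the computation is standard), let us verify axiom (Co) for M_π: for L , K ∈Υ with L ⊂ K and x ∈ M_π(K) = π^K, we have
_L,K,*∘_L,K^* ( x ) = ∑_γ∈ K/L γ· x = [K:L] · x ,
since x is already K-invariant. Similarly, axiom (G) holds because ( π^L )^K/L = π^K.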
Let M : G → R-Mod be a functor.
The inductive completion M is defined to be the limit _K ∈Υ M ( K ) where the limit is taken over all restriction maps. We let j_K : M ( K ) →M denote the natural map.
There is an induced smooth action G ×M→M , (g,x) ↦ g· x where g · x is defined as follows. Let K ∈Υ, x_K∈ M(K) be such j_K(x_K) = x. Then g · x is defined to be the image of x_K under the composition
M(K) M ( g K g ^ - 1 ) →M .
It is a routine check that this is well-defined. The action so defined is smooth, as the image of j_K : M(K) →M is contained in the K-invariants M^K. If M is also Galois, j_K identifies M(K) with M^K.
Moreover if φ : M → N is a morphism of functors, the induced map φ : M→N is G-equivariant.
Suppose M is cohomological. Then ( j_K ) is contained in M(K)_-tors. In particular, if R is a field of characteristic zero, j_K is injective.
Let x ∈ ( j_K ). By definition, there exists L ∈Υ, L ⊂ K such that _L,K^* (x) = 0. Since _L,K,*∘ _ L, K ^* = [ K : L ], we must have [K:L] · x = 0.
The following result seems originally due to <cit.> for finite groups.
Let R be a -algebra and Υ the collection of all compact open subgroups of G. Then the functor SmthRep_R(G) →CoMack_R(G , Υ ) given by π↦ M_π
induces an equivalence of categories with (quasi) inverse given by M ↦M.
By Lemma <ref>, any CoMack functor valued in a -algebra is Galois and therefore one can recover a functor M from the representation M. Similarly, _ K ⊂ Gπ^K = π by smoothness of π.
§.§ Hecke operators
A smooth representation comes equipped with an action of algebra of measures known as Hecke algebra. In this subsection, we briefly review the properties of this action and fix conventions. For background material on Haar measures and further reading, the reader may consult <cit.>.
Let (G, Υ) be as in <ref>.
Let μ be a left invariant Haar measure on G valued in R[e.g., if μ is -valued and R is a -algebra] and let K ∈Υ.
The Hecke algebra ℋ_R( K \ G / K ) of level K is defined to be the convolution algebra locally constant K-bi-invariant functions valued in R. The convolution product is denoted by *.
The Hecke algebra of G over Υ is defined to be ℋ_R(G) = ℋ_R(G, Υ) = ⋃ _ K ∈Υℋ_R ( K \ G / K ). The transposition on ℋ_R ( G , Υ ) is the mapping ξ↦ξ^t = ( g ↦ξ(g^-1) ), ξ∈ℋ_R(G).
The convolution ξ_1 * ξ_2 where ξ_1 , ξ_2∈ℋ_R(G, Υ) is given by
( ξ_1 * ξ_2 ) ( g ) = ∫_x ∈ G ξ_1(x) ξ_2(x^-1 g) d μ(x) .
In particular, if ξ_1 = ( α K ) for α∈ G, K ∈Υ and ξ_2 is right K-invariant, then ξ_1 * ξ_2 = μ(K) ξ_2 ( α ^-1 ( - ) ).
If G is unimodular, then one also has ( ξ _1 * ξ_2 ) ( g) = ∫_Gξ_1 ( g y^-1 ) ξ_2(y) d μ(y) obtained by substituting x with g y ^-1. The transposition map is an anti-involution of ℋ_R(G) i.e. ( ξ_1 * ξ_2 ) ^ t = ξ_2 ^ t * ξ_1 ^t for all ξ _1 , ξ_2∈ℋ_R(G). It stabilizes ℋ_R(K \ G / K ) for any K ∈Υ.
The Hecke algebra ℋ_R ( K \ G / K ) has an R-basis given by the characteristic functions of double cosets K σ K for σ∈ K \ G / K, denoted ch( K σ K ) and referred to as Hecke operators. The degree of ch( K σ K ) is defined to be | K σ K / K | or equivalently, the index [ K : K ∩σ K σ^-1]. The product ch(Kσ K) * ch(K τ K ) is supported on K σ K τ K and can be described explicitly as a function on G / K as follows: if K σ K = ⊔_iα_i K, K τ K = ⊔_jβ_j K, then
ch(K σ K) * ch(K τ K) = μ ( K ) ·∑_i ch( α_i K τ K ) = μ(K) ·∑_i , j ch ( α_iβ_j K )
On the other hand, the value of the convolution at υ∈ G equals μ ( K σ K ∩υ K τ^-1 K ). Thus the convolution above can be written as μ(K) ·∑_υ c_σ, τ^υ ch( K υ K ) where υ∈ K \ K σ K τ K / K and
c_σ , τ ^υ = | ( K σ K ∩υ K τ ^-1 K ) / K |
If μ( K ) = 1, then ℋ_R(K \ G / K ) is unital and the mapping ℋ_R( K \ G / K ) → R given by ch( K σ K ) ↦ | K σ K / K | is a homomorphism of unital rings.
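By way of example (a standard computation in the spherical Hecke algebra of GL_2, recorded here only as a sanity check for the formulas above): take G = GL_2(ℚ_p), K = GL_2(ℤ_p), μ(K) = 1 and σ = diag(p,1), so that K σ K = ⊔_b mod p [ p b; 0 1 ] K ⊔ [ 1 0; 0 p ] K. Pairing these representatives as in the displayed formula for ch(K σ K) * ch(K τ K) above, one finds
ch(K σ K) * ch(K σ K) = ch( K diag(p^2,1) K ) + (p+1) ·ch( pK ) ,
and the degrees match: (p+1)^2 = (p^2+p) + (p+1) · 1, as they must since the degree map is a homomorphism of unital rings.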
Any smooth left representation π∈SmthRep_R(G) inherits a left action of the Hecke algebra ℋ_R(G, Υ).
The action of ch(K' σ K ) ∈ℋ_R ( G, Υ ) on an element x ∈π invariant under K is given by ch(K ' σ K) · x = μ( K ) ∑ _α∈ K ' σ K / K α· x.
Similarly, if K ∈Υ, the R-module π ^K is stable under the action of ℋ_R(K \ G / K ) and is therefore a module over it.
In particular, if M is a RIC functor, then M is a module over ℋ_R(G, Υ) and if M is Galois, M(K) = M^K is naturally a module over ℋ_R( K \ G / K ).
We note that ℋ_R(G , Υ ) is itself a smooth left representation of G under both right and left translation actions. It is therefore a (left) module over itself in two distinct ways. Let
λ: G ×ℋ_R ( G , Υ) →ℋ_R ( G , Υ) ρ: G ×ℋ_R(G, Υ) →ℋ_R(G , Υ)
(g, ξ) ↦ξ( g^-1 (-) ) (g , ξ) ↦ξ( (-) g )
When ℋ_R(G , Υ ) is considered as a G-representation under λ, the induced action of ℋ_R(G, Υ ) on itself is that of the convolution product *. When ℋ_R(G , Υ ) is considered as a G-representation under ρ, the induced action of ℋ_R(G, Υ ) will be denoted by *_ρ.
There is a relation between * and *_ρ that is useful to record.
For ξ_1 , ξ_2∈ℋ_R(G, Υ ), ξ_1 * _ ρξ_2 = ξ_2 * ξ _1 ^ t.
By definition, we have for all g ∈ G
( ξ_1 * _ ρξ_2 ) ( g)
= ∫_Gξ_1( x ) ξ_2(gx) d μ(x)
= ∫_Gξ_2(y) ξ_1( g^-1 y ) d μ(y ) = ∫_Gξ_2(y) ξ_1^t( y^-1 g ) d μ(y)
= ( ξ_2 * ξ_1^t ) ( g)
where in the second equality, we used the change of variables x = g^-1 y.
§.§ Hecke correspondences
On RIC functors, one may abstractly define correspondences in the same manner as one does for the cohomology of Shimura varieties. We explore the relationship between such correspondences and the action of Hecke algebra defined in <ref>. More crucially, we need to establish the usual properties of Hecke correspondences in the absence of axiom (G).
Let M G → R-Mod be a functor. For every K, K' ∈Υ and σ∈ G, the Hecke correspondence [ K ' σ K ] is defined to be the composition
[K ' σ K] M(K) M(K ∩σ^-1 K ' σ) M(σ K σ^-1∩ K ') M(K') .
If 𝒞_R (K ' \ G / K ) denotes the free R-module on functions ( K ' σ K ), σ∈ K' \ G / K, there is a R-linear mapping 𝒞_R ( K ' \ G / K ) →Hom_R( M(K) , M(K') ) given by ( K ' σ K ) ↦ [K' σ K ]. The transpose of [ K ' σ K ] is defined to be the correspondence
[K' σ K]_* = [K σ ^-1 K ' ] : M ( K' ) → M( K )
which we also refer to as the covariant action of [K σ K' ]. The degree of [K' σ K ] is defined to be the cardinality of K ' σ K / K or equivalently, the index [ K' : K' ∩σ K σ^-1 ]. The degree of [K' σ K]_* is the degree of [K σ^-1 K'].
Let M : G → R-Mod be a Mackey functor and let K , K ', L ∈Υ with L ⊂ K'. Suppose that K' σ K = ⊔_i L σ_i K. Then _L, K' ^ * ∘ [ K ' σ K ] = ∑_i [L σ_i K ].
Denote L' := K' ∩σ K σ^-1∈Υ. As K'/L' → K'σ K / K, γ L' ↦γσ K is a bijection, so is the induced map L \ K' / L' → L \ K' σ K / K, and we may therefore assume that σ_i = γ_iσ where the γ_i form a set of representatives for L \ K' / L'. Set L_i := L ∩γ_i L' γ_i^-1. Since M is Mackey, we see that the square in the diagram
[sep = large, /tikz/column 3/.style=column sep=0.1em]
⊕_ i
M ( L_i ) [r, " ∑_*"] M(L)
M( K ) [r, "[σ]^*"'] [ur , "[σ_i]^*"]
[rr, "[K' σK]"', bend right, swap] M ( L ' ) [r, "_*"'] [u, "⊕[γ_ i ]^*"'] M(K') [u, "^*"']
commutes and therefore so does the whole diagram. Noting that L_i = L ∩σ_i K σ_i^-1, the
claim follows from the commutativity of the diagram above.
Let M : G → R-Mod be a Mackey functor and μ be a Haar measure on G. Let K , K ' ∈Υ be such that μ ( K ) ∈ R. Then for any σ∈ G, the actions of [K ' σ K ] and (K ' τ K) on M agree up to μ(K). That is, for all x ∈ M(K),
μ(K) · j_K∘ [K ' σ K](x) = ch(K ' σ K ) · j_K ( x ) .
In particular if M(K) →M is injective and μ(K) = 1, the R-linear mapping ℋ_R(K \ G / K ) →End _R M(K) given by ( Kτ K ) ↦ [K τ K ] is an R-algebra homomorphism.
By (T2), there exist L ∈Υ such that L ⊂σ K σ^-1, L ◃ K '. Then K ' σ K / K = L
\ K ' σ K / K and [ L γ K ] = [ γ ] _ L , K ^ * for any γ K ⊂ K ' σ K/ K. So we get the first claim by Lemma <ref>.
The second claim then follows by the first and eq. (<ref>).
See Corollary <ref> where the map ℋ_R(K \ G / K ) →End_R M(K) is shown to be an algebra homomorphism under the assumption that M is CoMack.
Suppose that G = G_1× G_2, σ_i∈ G_i and K_i, L_i⊂ G_i are compact open subgroups such that K_1 K_2, L_1 L_2, K_1L_2 and L_1 K_2 are all in Υ. Let M : G → R-Mod be a Mackey functor. Denoting τ_1 = (σ_1 , 1), τ_2 = ( 1, σ_2 ), we have
[ (L_1 L_2) τ_1 (K_1 L_2) ] ∘ [(K_1 L_2) τ_2 ( K_1 K_2 ) ] = [(L_1L_2)τ_1τ_2 ( K_1K_2) ] = [ ( L_1 L_2 ) τ_2 ( L_1 K_2 ) ] ∘[ ( L_1 K_2 ) τ_1 (K_1K_2 ) ]
as morphisms M(K_1 K_2 ) → M (L_1 L_2 ). We also denote this morphism as [L_1σ_1 K _1 ] ⊗ [L_2σ_2 K _2 ] and refer to it as the tensor product. A similar fact holds for tensor products of a finite number of Hecke correspondences in restricted topological product of groups.
For i = 1, 2, denote P_i = σ_i K_iσ_i^-1∩ L_i. Then
τ_1 ( K_1K_2 ) τ_1^-1∩ (L_1K_2) = P_1K_2 ,
τ_2 (L_1 K_2) τ_2^-1∩ (L_1L_2) =L_1 P_2 ,
τ_1τ_2 ( K_1K_2
) (τ_1τ_2 ) ^-1∩ (L_1L_2) = P_1 P_2
and all of these groups are in Υ. Since τ_2^-1 (L_1P_2 ) τ_2\ L_1 K_2 / P_1 K_2 = { 1_K} and M is Mackey, we get a commutative diagram
[row sep = large, column sep = tiny]
M(P_1P_2) [rd, "pr_*"]
M(P_1K_2) [rd, "pr_*"] [ru, "[τ_2]^*"] M(L_1P_2) [rd, "pr_*"]
M(K_1K_2) [rr, "[(L_1K_2)τ_1(K_1K_2)]"'] [ru, "[τ_1]^*"] M(L_1K_2) [rr, "[(L_1L_2)τ_2(L_1K_2)]"'] [ru, "[τ_2]^*"] M(L_1L_2)
which implies that [ L_1 L _2τ_2 L_1 K_2 ] ∘ [ L_1 K_2τ_1 K_1K_2] = [L_1L_2τ_1τ_2 K_1K_2]. By interchanging the roles of τ_1, τ_2, we get the second equality.
§.§ Mixed Hecke correspondences
In the situations that we are going to consider, the classes used for constructing Euler systems are pushed forward from a functor associated with a smaller (closed) subgroup. Here we study this scenario abstractly and introduce some terminology that will be used extensively in the next section. Let ι : H ↪ G be the inclusion of a closed subgroup, and let Υ_H , Υ_G be collections of compact open subgroups of H, G respectively satisfying (T1)-(T3) and such that the collection ι^-1 ( Υ_G) : = { K ∩ H | K ∈Υ_G} is contained in Υ_H. Note that ι^-1( Υ_G ) itself satisfies (T1)-(T3) for H and we refer to it as the pullback of Υ_G to H.
We say that
( U, K ) ∈Υ_H×Υ_G forms a compatible pair if U ⊂ K. A morphism of compatible pairs h : (V,L) → (U,K) is a pair of morphisms
(V U ), (L K ) for some h ∈ H. Let M _ H, M_G be R-Mod valued functors on H, G respectively. A pushforward M _ H → M_G is a family of morphisms ι_U, K, * : M_H(U) → M_G(K) for all compatible pairs (U,K) ∈Υ_H×Υ_G such that ι_U,K,*, ι_V,L,* commute with the pushforwards [ h ] _ * on M_H, M_G induced by any morphism h : (V,L) → (U,K) of compatible pairs. We say that ι_* is Mackey if for all U ∈Υ_H, L , K ∈Υ _ G satisfying U , L ⊂ K, we have a commutative diagram
[column sep = large] ⊕_ γ M_H ( U _ γ ) [r, " ∑[γ]_*"]
M_G ( L )
M_H ( U ) [r, "ι_*"] [u, "⊕ ^*"] M_G(K) [u,"^*", swap]
where γ∈ U \ K / L is a fixed set of representatives, U_γ = U ∩γ L γ^-1 and [γ]_* : M _ H ( U _ γ ) → M_G ( L ) denotes the composition M_H ( U_γ) M _G ( γ L γ ^-1 ) M_G ( L ).
If φ_G : N_G→ M_G is a morphism of functors, then it may be viewed as a pushforward in the sense of Definition <ref>. We will say that φ is Mackey if it is so as a pushforward.
If M_G is Mackey, then so is any morphism φ_G : N_G→ M_G.
If M is Mackey, then M satisfies the axiom (M') given in Lemma <ref>. Using its notation, the commutativity of <ref> implies that
[column sep = large] ⊕ _ δ N _ G ( L' _ δ ) [r, "φ_G"] ⊕_ δ M _ G ( L _ δ ' )
N _ G (L') [r, "φ_G" ,swap] [ u , "⊕^*" ] M _ G ( L' )
[u, "⊕^*"']
is commutative as well.
Let ι _ * : M _ H → M_G be a pushforward. For U ∈Υ _H, K ∈Υ_G and σ∈ G, the mixed Hecke correspondence [ U σ K ] _ * is defined as
[U σ K ]_* : M_H (U) M_H ( U ∩σ K σ ^ - 1 ) M_G( σ K σ ^-1 ) M_G(K) .
One can verify that [Uσ K]_ * depends only on the double coset U σ K. The degree of [Uσ K]_* is defined to be the index [H ∩σ K σ^-1 : U ∩σ K σ ^-1 ].
Suppose that H = G, ι = id and ι_* : M_G→ M_G is the identity map. Then one can verify that ι_* is Mackey iff M_G is. Moreover if U , K ∈Υ_G and σ∈ G, we have [U σ K]_* = [ U σ K ] ^t = [K σ^-1 U ] agrees with the covariant action introduced before and the degrees of [ U σ K ]_*, [K σ^-1 U] also agree. The `*' in the notation of mixed Hecke correspondence is meant to emphasize its `pushforward nature' and its dependence on ι_*. We note that [U σ K ] _ * is however independent of ι_*.
Let ι_* : M_H→ M_G be a pushforward and let σ∈ G, U ∈Υ_H, K ∈Υ_G. For h ∈ H, g ∈ G, denote U^h : = h U h ^-1, K^g : = g K g ^ - 1. Then
[ U σ K ] _ * = [ U^h h σ K ] ∘ [ h ] ^ * _ U ^h , U = [g]_K^g, K , * ∘ [ U σ g ^ - 1 K^g ] _* .
Moreover [ U σ K ] _* = [ U ^h h σ K] _ * = [ U σ g^-1 K^g ]_*.
Let V : = U ∩σ K σ ^-1, V ' : = U^h∩ h σ K ( h σ )^-1, L :
= σ K σ ^-1 and L' : = h σ K σ ^-1 h^-1. By definition, h V h ^-1 = V ', h L h ^-1 = L', V ⊂ L and V ' ⊂ L '. One easily verifies that the diagram
[sep = large]
M_H(U) [d, "[h]^*"'] [r, "pr^*"] [rrr, "[UσK]_*", bend left = 22] M_H(V) [d, "[h]^*"] [r, "ι_*"] M_G(L) [d, "[h]^*"'] [r, "[g]_*"] M_G(K)
M_H(U^h) [r, "pr^*"] M_H(V') [r, "ι_*"] M_G(L') [ru, "[hg]_*"']
is commutative which implies [ U σ K ] _ * = [ U^h h σ K ] ∘ [ h ] ^ * _ U ^h , U. By definition, [ U σ K ]_* = [ H ∩ L : V ] and [ U ^h h σ K ] _* =[ H ∩ L ' : V' ]. Since L, L' and V, V' are conjugates under h, [ H ∩ L : V ] = [ H ∩ L' : V ' ] and so [ U σ K ]_* = [ U ^h h σ K ]. The proof for the second set of equalities is similar.
Let ι_* : M_H→ M_G be a pushforward and let σ∈ G, U, V ∈Υ_H, K, L ∈Υ_G be such that V ⊂ U ⊂σ K σ^-1, V ⊂σ L σ^-1 and L ⊂ K. Then
[V σ K ]_* = [U σ K ] _ * ∘_V , U , * = _L,K , * ∘ [ V σ L ] .
Since U ⊂σ K σ^-1, [U σ K ]_* is the composition M_H(U) M_G ( σ K σ ^-1 ) M_G(K). Similarly [ V σ L ] _* is the composition M_H(V) M_G ( σ L σ^-1) M_G(L). Since pushforwards commute with each other, the claim follows.
The following result
is an analogue of Lemma <ref> for pushforwards.
Let ι_* : M_ H → M_G be a Mackey pushforward and let σ∈ G, U ∈Υ_H, K, K' ∈Υ_G with U ⊂ K. Suppose that K σ K ' = ⊔_i U σ_i K '. Then [ K σ K ' ] _ * ∘ι_U , K , * = ∑ _ i [ U σ _ i K '] _ *.
Let L := K ∩σ K' σ^-1. As K / L → K σ K ' / K ', γ L ↦γσ K' is a bijection, so is the induced map U \ K / L → U \ K σ K' / K ' and we may thus assume that σ_i = γ_iσ where the γ_i∈ K form a set of representatives of U \ K / L. Let
K_i ' : = σ _i K ' σ_i ^-1 , L_i : = γ_i L γ_i ^-1 , U_i : = U ∩ K_i ' .
Then L_i = K ∩σ_i K ' σ_i^-1 = K ∩ K_i ' and therefore U _i = U ∩ L_i. As ι_* is Mackey, we see that
_L , K ^ * ∘ι _ U , K , * = ∑ _ i [ γ _ i ] _ U _ i , L , * ∘_U_i , U ^ *
where [γ_i ] _ U_i, L , * : = [ γ_i ] _ L_i, L , * ∘ι_ U_i, L_i , * = [U_iγ_i L ]_* (see the diagram on the left below).
[sep = large, /tikz/column 3/.style=column sep=0.1em]
⊕_ i M_H ( U_i )
[r, "∑[γ_i]_*"] M_G ( L) [dr, "[σ]_*"]
M_H ( U_i ) [dr, "ι_*", swap] [r, "[γ_i]_*"] M_G(L)
[dr, "[σ]_*"]
M_H ( U ) [r, "ι_*", swap] [u,"⊕ ^*"] M_G(K) [u, "^*" ] [r,"[KσK']_*" , swap ] M_G(K ') M_G ( K _i' ) [r, "[σ_i]_*" , swap ] M_G(K')
As
ι_U_i, K_i',*
= [γ_i ^ - 1 ]_L,K_i',*∘ [ γ_i ]_U_i, L , * for each i, we see that
[σ]_L,K',*∘ [γ_i]_U_i,L,* = ( [ σ_i ] _ K_i', K ' , * ∘ [ γ _ i ^-1 ] _ L, K_i ' , * ) ∘ [γ_i ]_U_i, L, *
= [ σ_i ] _K_i ' , K ' ∘ι_U_i, K _ i ' ,*
(see the diagram on the right above).
Using [K σ K ' ]_* = [ σ ] _ L , K ' , * ∘ _ L , K ^ * in conjunction with eq. (<ref>) and eq. (<ref>), we see that
[ K σ K ' ] _ * ∘ι_U, K , * = [ σ ] _ L , K ' , * ∘∑ _ i ( [ γ _ i ] _ U_i , L , * ∘ _ U _ i , U ^ * )
= ∑ _ i ( [ σ ] _L,K ' , * ∘ [ γ_i ] _ U_i , L , * ) ∘_ U _ i , U ^ *
= ∑ _ i ( [ σ _ i ] _ K_i ' , K ' , * ∘ι _ U _ i , K_ i ' ) ∘ _ U _ i , U ^ *
= ∑ _ i [ U σ _ i K '] _ *
We end this subsection by showing that any two (contravariant) Hecke correspondences compose in the usual way. For K _1, K_2, K_3∈Υ_G, the convolution of double cosets is the -linear homomorphism
∘ : 𝒞_( K_3\ G / K_2 ) ×𝒞_(K_2\ G / K_1 ) →𝒞_( K_3\ G / K_1 )
given by (K_3σ K_2) ∘(K_2τ K_1 ) = ∑ _υ c^υ_σ, τ (K_3υ K_1)
where c^υ_σ, τ = | (K_3σ K_2∩υ K_1τ^-1 K_2 ) / K_2 |.
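To connect this with the earlier GL_2 example (again only as an illustration): for K_1 = K_2 = K_3 = K = GL_2(ℤ_p) and σ = τ = diag(p,1), the recipe above gives
ch(K σ K) ∘ch(K σ K) = ch( K diag(p^2,1) K ) + (p+1) ·ch( K · pI · K ) ,
with the same coefficients as the convolution in ℋ_R computed before, as predicted by the proposition that follows.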
Let M = M_G be a CoMack functor on G, K_1, K_2, K_3∈Υ_G and σ, τ∈ G. Then [K_3σ K_2 ] ∘ [ K _2τ K_1 ] ∈Hom_R ( M(K_1 ) , M(K_3) ) is a sum of Hecke correspondences obtained by the convolution of double cosets as above.
Let L = τ K_1τ^-1∩ K_2∈Υ_G and suppose that K_2σ ^-1 K_3 = ⊔_i L σ_i^-1 K_3 for some σ_i∈ G. Since M is Mackey, we see by Lemma <ref> that
[K_3σ K_2] ∘ [ K_2τ K_1 ] = [K_3σ K_2 ] ∘_L,K_2,*∘ [ τ ] _ L , K _1 ^*
= ( [K_2σ^-1 K_3 ]_*∘_L,K_2,* ) ∘ [ τ ] _L,K_1 ^ *
= ∑ _i [L σ _ i ^ -1 K_3 ]_*∘ [ τ ]^*_L,K_1
= ∑ _ i [K_3σ _ i L] ∘ [ τ ] _ L , K_1 ^ * .
For each i, let d_i : = [ σ_iτ K_1 (σ_iτ)^-1∩ K_3 : σ_i L σ_i ^ - 1 ∩ K_3 ]. Since M is cohomological, we see that the diagram
M ( σ_i L σ_i^-1 ∩K_3 ) [dd, "_* " ] [dr, "_*"]
M(K_1) [ur, "[σ_i τ]^*"] [dr, "d_i ·[σ_i τ]^* "'] M(K_3 )
M ( σ_i τK_1 (
σ_i τ) ^-1 ∩K_3 ) [ur, "^*"']
is commutative. So [ K_3σ_i L ] ∘ [ τ ]_L, K_1 ^* = d_i· [K_3σ_iτ K_1 ] as maps M(K_1) → M(K_3) (take the two routes in the diagram above) and therefore
[ K_3σ K_2 ] ∘ [K_2τ K_1 ] = ∑ _i d_i [ K_3σ_iτ K_1 ] .
To show that ∑_i d_i (K_3σ _iτ K_1 ) equals the convolution product,
take M = M_π to be the functor associated with the smooth left G-representation π where π = ℋ_(G, Υ) with G acting via left translation λ (<ref>).
Let μ be a -valued left Haar measure on G. By Corollary <ref>,
μ(K_2) μ(K_1) · j_K_3∘ [K_3σ K_2 ] ∘ [K_2τ K_1] ( (K_1) ) = (K_3σ K_2 ) * (K_2τ K_1 )
as elements of M_π = π = ℋ_(G, Υ_G).
Similarly,
μ (K_2 ) μ(K_1 ) · j_K_3∘∑ _i d_i [ K_3σ_iτ K_1 ] · ( (K_1) ) = μ(K_2 ) ∑ _i d_i (K_3σ_iτ K_1 ) .
As the LHS of these two equalities are equal by the above argument, we must have
(K_3σ K_2 ) * (K_2τ K_1 ) = μ(K_2) ∑ _i d_i (K_3σ_iτ K_1 ) .
But the coefficient of ( K _ 3 υ K_1) in (K_3σ K_2 ) * (K_2τ K_1 ) equals μ ( K_3σ K_2∩υ
K_1τ ^-1 K_2 ) = μ(K_2 ) c_σ , τ ^ υ. Therefore ∑_i d_i( K_1σ_iτ K_3 ) must equal the convolution of double cosets.
§.§ Completed pushforwards
Let ι : H → G be as in <ref> and assume moreover that H, G are unimodular. Let μ_H, μ_G be Haar measures on H, G respectively with μ_ H ( Υ_H ) , μ_G ( Υ_G) ∈ R ^ ×.
Let ℋ _ R ( G , Υ _ G ) denote the Hecke algebra of G over Υ _ G.
Given smooth representations τ of H, σ of G, we consider τ⊗ℋ_R(G , Υ _ G ) and σ smooth representations of H × G under the following extended action.
* (h, g) ∈ H × G acts on x ⊗ξ∈τ⊗ℋ _ R (G , Υ _ G ) via x ⊗ξ↦ h x ⊗ξ ( ι ( h ) ^ - 1 ( - ) g ).
* (h, g ) ∈ H × G acts on y ∈σ via y ↦ g · y.
An intertwining map Ψ : τ⊗ℋ_R(G, Υ _ G ) →σ is defined to be a morphism of H × G representations.
Let Ψ : τ⊗ℋ_R(G, Υ _ G ) →σ be an intertwining map.
For any ξ _1 , ξ_2∈ℋ_R( G , Υ _ G ) and x ∈τ,
ξ_1·Ψ ( x ⊗ξ_2 ) = Ψ ( x ⊗ξ_2 * ξ_1^t ) ,
where ξ_1 ^t is the transpose of ξ_1.
Since Ψ is an intertwining map, it is also a morphism of ℋ_R(G, Υ_G ) modules under the induced actions. Thus, ξ_1·Ψ ( x ⊗ξ_2) = Ψ ( x ⊗ξ_1 *_ρξ_2 ). But Lemma <ref> implies that ξ _ 1 *_ρξ_2 = ξ_2 * ξ_1^t.
The proofs for the next two results are omitted and can be found in <cit.> (cf. <cit.>).
Let σ ^ ∨ denote the smooth dual of σ and ⟨· , ·⟩ : σ ^ ∨×σ→ R denote the induced pairing. Consider τ⊗σ ^ ∨ as a smooth H-representation via h (x ⊗ f ) = hx ⊗ι(h) f. Then for any intertwining map Ψ as above, there is a unique morphism ψ : τ⊗σ^∨→ R of smooth H-representations such that
⟨ f , Ψ ( x ⊗ξ ) ⟩ = ψ ( x ⊗ ( ξ· f ) )
for all x ∈τ, f ∈σ ^ ∨ and ξ∈ℋ_R(G, Υ_G ). The mapping Ψ↦ψ thus defined induces a bijection between Hom_H × G ( τ⊗ℋ_R(G, Υ_G ) , σ ) and Hom_H ( τ⊗σ^∨ , R ).
Suppose M_ H, M_G are RIC functors with M_H CoMack and M_G Mackey. Consider M_H⊗ℋ_R (G , Υ ) and M_G as smooth H × G representations via the extended action. Then for any pushforward ι_* : M_H→ M_G,
there is a unique intertwining map of H × G representations
ι̂ _ * : M_H⊗ℋ _ R ( G , Υ _ G ) → M _G
satisfying the following compatibility condition: for all compatible pairs (U,K) ∈Υ _H×Υ_G, x ∈ M_H(U), we have ι̂ _* ( j_U(x ) ⊗(K) ) = μ_H ( U ) j_K ( ι_U,K,* ( x ) ). Equivalently any ι_* determines a unique morphism ι̃_* : M_H⊗ (M_G)^∨→ R of H-representations such that ι̃_* ( j_U(x ) ⊗ f ) = μ_H(U) f ( j_K∘ι_U,K,*(x) ) for U, K, x as above and f ∈ ( M_G ) ^∨
Let ι̂_* be as above. For any U ∈Υ_H, K ∈Υ_G, g ∈ G and x
∈ M_H(U),
ι̂_* ( j_U(x_U ) ⊗ ( g K ) ) = μ_H ( U ∩ g K g^-1 ) · j_K∘ [ U g K ] _ * ( x
) .
In the definition of ι̂_*, we may replace ℋ_R(G) with 𝒞_R( G ) which is ℋ_R(G) considered as a R-module with G-action given by right translation, since the definition of ι̂_* does not require the convolution operation. In particular, ι̂_* is independent of μ_G.
§.§ Shimura varieties
In this subsection, we briefly outline how the abstract formalism here applies to the cohomology of general Shimura varieties. We refer the reader to <cit.> for terminology which we will be used freely in what follows.
Let (, X) be a Shimura-Deligne (SD) datum and let 𝐙 denote the center of . For any neat compact open subgroup K ⊂(_f), the double quotient
Sh_(K)() := () \ [ X ×(_f)/K]
is the set of -points of a smooth quasi-projective variety over . If (, X ) satisfies (SD3) or if (, X) admits an embedding into a SD datum which satisfies (SD3), then _(K) admits a canonical model over its reflex field. For two neat compact open subgroups K' , K ⊂(_f) such that K ' ⊂ K, it is not true in general that _(K')() →_(K) is a covering map of degree [K' : K ] unless (SD5) is also satisfied (<cit.>). However one can establish the following (cf. <cit.>).
Let K, K' ⊂ ( _f ) be neat compact open subgroups such that K ' ⊂ K and K ∩𝐙() = K' ∩𝐙(). Then the natural map _K', K : Sh_(K')( ) ↠Sh_(K) ( ) of smooth -manifolds is an unramified covering map of degree [K : K' ].
Suppose that there exists x ∈ X, g ∈𝐆(_f ) and k ∈ K such that [x, g]_K' = [x , g k ] _ K' in
Sh_(K')(). Let K_∞ denote the stabilizer of x in 𝐆( ). By definition, there exists a γ∈() ∩ K_∞ such that
g k = γ g k '
for some k ' ∈ K ' ⊂ K. Then γ = g k (k')^-1 g^-1 is an element of Γ : = () ∩ g K g^-1. Since () is discrete in ( ), we see that Γ is discrete in () and so Γ∩ K_∞ is discrete in K_∞. In particular, the group C : = ⟨γ⟩⊂Γ∩ K_∞ generated by γ is discrete in K_∞. By <cit.>, the quotient K_∞ / ( 𝐙 ( ) ∩ K _ ∞ ) is a compact group. Since C / ( 𝐙() ∩ C ) is a (necessarily closed) discrete subgroup of this quotient, it must be finite. There is therefore a positive integer n such that
γ ^n∈𝐙( ) ∩ C ⊂𝐙 () .
Since Γ⊂() is neat, its image Γ̅⊂𝐆^ad() under the natural map 𝐆 ( ) →𝐆^ad ( ) is also neat <cit.>. Thus Γ / ( 𝐙 ( ) ∩Γ ) = Γ / ( 𝐙() ∩Γ) ⊂Γ̅ is neat as well and in particular torsion free. So it must be the case that γ∈𝐙(). From (<ref>), we infer that k = γ k ' and this makes γ an element of K ∩𝐙(). As K ∩𝐙() = K' ∩𝐙(), we see that k = γ k ' ∈ K' .
The upshot is that the fiber of _ K' ,K above [x,g]_K is of cardinality [K : K ' ].
Now let L ⊂ K' be normal in K. By replacing L with L · (𝐙() ∩ K), we may assume that L ∩𝐙() = K ∩𝐙(). Applying the same argument to L, we see that Sh_(K)()
is a quotient of Sh_(L)() by the free action of K / L,
hence the natural quotient map is an unramified covering of degree [K : L]. Similarly for K'. This implies the claim since [K : L ] = [K : K ' ] · [K' : L].
Let L , L ' , K be neat compact open subgroups of (_f) such that L , L' ⊂ K and all three have the same intersections with 𝐙(). For γ∈ K, let L_γ = L ∩γ L' γ ^-1.
Then the diagram below
_γ Sh(L_γ ) [r] [d , "⊔[γ]", swap ] Sh(L) [d, ]
Sh(L') [r] Sh(K)
where γ∈ K runs over representatives of L \ K / L ' is Cartesian in the category of smooth -manifolds.
If canonical models exist for Sh_(K), then the following lemma allows us to descend the Cartesian property above to the level of varieties.
Let W , X, Y , Z be geometrically reduced locally of finite type schemes over a field k of characteristic zero forming a commutative diagram
W [r, "a"] [d, "g", swap]
X [d, "f"]
Z [r, "b"] Y
such that f , g are étale. Suppose for each closed point z ∈ Z the map a : W → X is injective on the pre-image g^-1(z) and surjects onto the pre-image of f^-1(b(z)). Then the diagram above is Cartesian in the category of k-schemes.
Suppose that 𝒲 = X × _ Y Z is a pullback. Let p_X : 𝒲→ X, p_Y : 𝒲→ Z be the natural projection maps and γ : W →𝒲 the map induced by the universal property of 𝒲.
As f is étale, so is p_Z and since p_Z∘γ = g is étale, so is γ. Let k̅ denote the separable closure of k. Since 𝒲(k̅ ) = { (x,z) ∈ X(k̅) × Z ( k̅ ) | f ( x ) = b ( z ) }, the condition on closed points (i.e. k̅-points) implies that γ : W(k̅) →𝒲(k̅) is a bijection. The result follows since an étale morphism between such schemes that is bijective on k̅ points is necessarily an
isomorphism.
We now assume for the rest of this subsection that _(K)() admits a canonical model for each neat level K. We let
Υ be any collection of neat compact open subgroups of (_f) such that the intersection of any K ∈Υ with 𝐙() gives a subgroup of 𝐙() that is independent of K. For instance, we may take Υ = Υ(K_0) for any given neat level K_0 where Υ(K_0) is the set of all finite intersections of conjugates of K_0. By Lemma <ref>, such a collection satisfies (T1)-(T3) and clearly, the intersection of any group in Υ(K_0) with 𝐙() equals K_0∩𝐙(). Now let {ℱ_K}_K ∈Υ be a collection of _p-sheaves ℱ_K on _(K) that are equivariant under the pullback action of (_f ).
More precisely, for any σ∈(_f) and L, K ∈Υ such that σ^-1 L σ⊂ K, we assume that there are natural isomorphisms φ_σ : [σ]_L,K^*ℱ_K≃ℱ_L such that φ_τσ = [τ]^*_L',L∘φ_σ for any L ' ∈Υ satisfying τ^-1 L ' τ⊂ L.
For any integer i ≥ 0 and K ∈Υ, let
M(K) := H^i_( _(K), ℱ_K)
denote Jannsen's continuous étale cohomology. Then for any morphism ( L K ) ∈𝒫(G, Υ), there are induced _p-linear maps [σ]_L,K,* : M(L) → M(K) and [σ]_L,K^* : M(K) → M(L) that make M a RIC functor for 𝒫(G, Υ) (see <cit.>).
M is a cohomological Mackey functor.
Lemma <ref> and <cit.> imply that M is cohomological. Corollary <ref> and <cit.> imply that M is Mackey.
Using similar arguments, one may establish that an injective morphism (, Y) ↪ (, X) of Shimura-Deligne data and a collection of sheaves on the two sets of varieties that are compatible under all possible pullbacks induce a Mackey pushforward on the corresponding cohomology of varieties over the reflex field of (, Y). See e.g., <cit.>. Some care is required in the case where the centers of and differ and (SD5) is not satisfied for . This is because one needs to specify a collection of compact open subgroups for (_f ) which contains the pullback of Υ and which also satisfies the conditions of intersection with the center of (_f ).
§ ABSTRACT ZETA ELEMENTS
In this section, we begin by giving ourselves a certain setup that one encounters in, but which it is not necessarily limited to, questions involving pushforwards of elements in the cohomology of Shimura varieties and we formulate a general problem in the style of Euler system norm relations within that setup. We then propose an abstract resolution for it by defining a notion we refer to as zeta elements and study its various properties. An example involving CM points on modular curves is provided in <ref> and the reader is encouraged to refer to it
while reading this section. We note for the convenience of the reader that in the said example, it is the group denoted `G̃' (resp., `K̃') that plays the role of the group denoted `G' (resp., `K') below.
§.§ The setup
Suppose for all of this subsection that we are given
∙ ι : H ↪ G a closed immersion of unimodular locally profinite groups,
∙ Υ_H, Υ_G non-empty collections of compact open subgroups satisfying (T1)-(T3) and ι^-1(Υ_G) ⊂Υ_H,
∙ an integral domain with field of fractions a -algebra,
∙ M_H, : 𝒫(H, Υ_H ) →𝒪-Mod, M_G, : 𝒫(G, Υ_G ) →𝒪-Mod CoMack functors,
∙ ι _ * : M _ H , → M _ G , a pushforward,
∙ U ∈Υ _H, K ∈Υ_G compact opens such that U = K ∩ H referred to as bottom levels,
∙ x _ U ∈ M _ H , ( U ) which we call the source bottom class,
∙ ℌ∈𝒞_𝒪 ( K \ G / K ) a non-zero element which we call the Hecke polynomial,
∙ L ∈Υ _ G, L ◃ K a normal compact open subgroup referred to as a layer extension of degree d = [ K : L ].
As in Definition <ref>, ℌ induces a -linear map ℌ_* = ℌ^t : M_G , ( K ) → M_G, ( K ). Let y_K : = ι_U, K , * ( x _ U ) ∈ M_G, ( K ) which we call the target bottom class.
Does there exist a class y _ L ∈ M_ G , ( L ) such that
ℌ _* (y_K) = _ L , K , * ( y _ L )
as elements of M _ G , ( K )?
Let us first make a few general remarks. The first is that if d ∈𝒪^×, the class d^-1·_L,K^*(y_K) ∈ M_G,(L) solves the problem above. Thus the non-trivial case occurs only when d is not invertible in 𝒪, and in particular when 𝒪 is not a field. In Kolyvagin's bounding argument, the usefulness of such a norm relation lies precisely in the case where d is not invertible, e.g.,
𝒪 = ℤ_p and d = ℓ - 1 where ℓ≠ p is a prime such that a large power of p divides ℓ - 1.
Second, Problem <ref> is meant to be posed as a family of such problems where one varies L over a prescribed lattice of compact open subgroups of K (which correspond to layers of certain abelian field extensions) together with the other parameters above and the goal is to construct y_L that satisfy such relations compatibly in a tower. This is typically achieved by breaking the norm relation problem into `local' components and varying the parameters componentwise. More precisely, H and G are in practice the groups of adelic points of certain reductive algebraic groups over a number field and the class x_U has the features of a restricted tensor product. The problem above is then posed for each place in a subset of all finite places of the number field. Thus Problem <ref> is to be seen as one of a local nature that is extracted from a global setting. See <ref> for an abstract formulation of this global scenario.
Third, the underling premise of <ref> is that y_K is the image of a class x_U that one can vary over the levels of the functor M_H, and for which one has a better description as compared to their counterparts in M _ G ,. If ι_* is also Mackey, then Lemma <ref> tells us that ℌ_* (y_K) is the image of certain mixed Hecke correspondences. The class y_L we are seeking is therefore required to be of a similar form. As experience
suggests, we assume that y_L = ∑ _ i = 1 ^ r [V_ig_i L]_* ( x_V_i ) where
* g_i∈ G,
* V_i⊂ g_i L g_i ^ -1, V_i∈Υ_H
* x_V_i∈ M_H, ( V_i )
are unknown quantities that we need to pick to obtain the said equality.
If we only require equality up to -torsion (which suffices for applications, see <ref>), then one can use Proposition <ref> to guide these choices. More precisely, let μ_H be a -valued Haar measure on H, Φ a field containing , and M_H, Φ, M_G, Φ denote the functors obtained by tensoring with Φ. Let M _ H , Φ, M _ G , Φ be the completions of M_H, Φ, M _ G , Φ respectively. For V ∈Υ_H, let j_ V : M_H, ( V ) → M _ H , Φ denote (abusing notation) the natural map and similarly for M_G,. Let ι̂ _* : M_H, Φ⊗ℋ_Φ(G , Υ_G ) → M
_ G , Φ the completed pushforward of Proposition <ref>.
As M_G,Φ is cohomological, the kernel of j_K : M_G,(K) →M_G,Φ is contained in the -torsion of M_G, (K). An application of Corollary <ref> then implies that ℌ_* ( y _ K ) - _L,K, * ( y_L ) is -torsion if and only if
ι̂ _ * ( j_U ( x _U ) ⊗ℌ ) = ι̂ _* ( ∑ _ i = 1 ^ r μ_H(U)/μ_H ( V_i ) ( j_V_i( x_V_i ) ⊗ch(g_i K ) ) )
as elements of the Φ-vector space M_G, Φ (see the proof of Proposition <ref> below). Thus we are seeking a specific “test vector"
in M_H, Φ⊗ℋ_Φ(G , Υ_G ) ^K containing the data of certain elements in M_H,
whose image under ι̂_* coincides with that of j_U(x_U)⊗ℌ. Any such test vector can equivalently be seen as a right K-invariant compactly supported function ζ : G →M_H, Φ. The shape of the element inside ι̂_* on the RHS of (<ref>) forces upon us a notion of integrality of such vectors. As ι̂_* is H-equivariant, a natural way of enforcing (<ref>) is to require that the two functions in the inputs of ι̂_* have equal H-coinvariants with respect to the natural H-action on the set of such functions. If a test vector satisfying these two conditions exists, Problem <ref> is solved modulo -torsion. In fact, such a vector solves the corresponding problem (modulo torsion) for any pushforward emanating from M_H, to a functor on 𝒫(G,Υ_G), since the two aforementioned properties are completely independent of ι_*.
Under certain additional conditions, the resulting norm relation can be upgraded to an equality.
See <ref>.
We now formalize the discussion above. For τ an arbitrary group, we let 𝒞 (G/K, τ ) denote the set of all compactly supported functions ξ : G →τ that are invariant under right translation by K on the source. Here the support of ξ is the set of elements that do not map to identity element in τ. If τ is abelian and has the structure of a Φ-vector space,
𝒞 (G/K, τ ) is a Φ-vector space. If τ is in addition a Φ-linear left H-representation, so is 𝒞(G/K, τ) where we let h ∈ H act on ξ∈𝒞 (G/K, τ ) via ξ↦ h ξ : = h ξ ( h^-1 ( - ) ). In this case, we denote by 𝒞(G/K, τ ) _H the space of H-coinvariants and write ξ_1≃ξ_2 if ξ_1, ξ_2∈𝒞( G / K , τ ) fall in the same H-coinvariant class. Given a ϕ∈𝒞 (G/K, Φ) and x ∈τ, we let x ⊗ϕ∈𝒞 (G/ K , τ ) denote the
function given by g ↦ϕ(g) x. Fix a -valued Haar measure μ_H on H. For V_1, V_2∈Υ_H, we denote
[V_1 : V_2 ] : = μ_H(V_1 ) / μ_H(V_2). This is then independent of the choice of μ_H.
An element ξ∈𝒞 (G/K , M_H,Φ) is said to be -integral at level L if for each g ∈ G, there exists a finite collection { V_i∈Υ_H | V_i⊂ gL g^-1}_i ∈ I and classes x_V_i∈ M_H, ( V_i) for each i ∈ I such that
ξ(g) = ∑ _i ∈ I [U:V_i] j_V_i( x_V_i ) .
A zeta element for (x_U, ℌ, L) with coefficients in Φ is an element ζ∈𝒞 ( G / K , M _H, Φ ) that is -integral at level L and lies in the H-coinvariant class
of j_U(x_U) ⊗ℌ.
This notion of integrality appears in <cit.>. Cf. <cit.>.
Let ζ be a zeta element for (x_U , ℌ , L ). Then we may write ζ as a (possibly empty if ζ = 0) finite sum ∑_α [U : V_α] j_V_α (x_V_α ) ⊗(g_αK) where for each α, V_α⊂ g_α L g_α^-1 and x_V_α∈ M_H, ( V_α ).
Given such a presentation of ζ, we refer to
y_L : = ∑ _ α [V_ α g_α L ] _ * (x_V_α) ∈ M_G, ( L )
as an associated class for ζ under ι_*.
It depends on the choice of the presentation for ζ.
Suppose there exists a zeta
element for (x_U , ℌ , L ).
Then for any associated class y_L∈ M_G, (K), the difference ℌ_*(y_K ) - _ L , K , * ( y _ L ) lies in the -torsion of M_G, ( K ).
As above, let ζ = ∑_α [U : V_α ] j_V_α( x_V_α ) ⊗(g_α K ) be a choice (of a presentation) of a zeta element to which y_L is associated.
Let j_K : M_G, (K) →M_G , Φ denote the natural map and let μ_G be a Haar measure on G such that μ_G(K) = 1.
Corollary <ref> and the properties of ι̂_* as
an intertwining map (Lemma <ref>) imply that
μ_H(U) · j_K( ℌ_*(y_K) ) = ℌ^t·ι̂ _ * ( j_U ( x_U ) ⊗(K) ) = ι̂ _* ( j_U( x_U ) ⊗ℌ ) .
Since ι̂_* is H-equivariant, its restriction to M_H,Φ⊗ℋ(G, Υ_G)^K≃𝒞
(G/K,
M_H, Φ ) factors through the space of corresponding
H-coinvariants. Since ( j_U(x_U ) ⊗ℌ
) ≃ζ
by assumption,
ι̂_* ( j_U(x_U ) ⊗ℌ ) = ∑ _α [U:V_α] ι̂_* ( j_V_α(x_V_α) ⊗(g_α K ) ).
Corollary <ref>
allows us to rewrite each summand on the right hand side above as μ_H(U) j_K∘ [ V _ α g_α K ]_*(x_V_α ). By Lemma <ref>, [ V_α g_α K ] _ * = _L, K , * ∘ [ V_α g_α L ] _ *. Putting everything together, we get that
μ_H(U) · j_K ( ℌ_*( y_K ) ) = μ_H ( U ) · j_K∘_L, K , * ( ∑ _α [V_α g_α L ] _ * ( x_V_α ) )
= μ_H ( U ) · j_K ( _L,K,* ( y_L ) )
Thus j_K ( ℌ_*(y_K) - _L,K,*( y_L ) ) = 0. This implies the claim since the kernel of j_K : M_G, ( K ) → M_G, Φ ( K ) →M_G, Φ is M_G, (K)_-tors by Lemma <ref>.
We next study how a given presentation of a zeta element may be modified.
Given f ∈𝒞 (G/K, H) and ξ∈𝒞 (G/K, τ ) for τ any left H-representation over Φ, we define f ξ∈𝒞 (G/ K, τ ) by g ↦ f(g) ξ( f(g)^-1 g ).
If ζ is a zeta element, so is f ζ for any f ∈𝒞 ( G/K, H). Moreover the sets of associated classes for the two elements under any pushforward are equal.
Clearly
fζ lies in 𝒞 ( G/K, M_H, Φ ) and f ζ≃ζ. Say ∑_α [ U : V_α ] j_V_α ( x_V_α ) ⊗( g_α K ) is a presentation for ζ.
Set h_α : = f ( g_α ), V_α' : = h_α V_α h_α^-1 and x_V_α' ' : = [h_α]_V_α', V_α^* ( x_V_α). Then
f ζ = ∑ _α [ U : V_α ] j_V_α' ( x_V_α' ' ) ⊗( h_α g_α K ) .
Since [U : V_α ] = [ U : V_α' ] by unimodularity of H, f ζ is -integral at level L. That the sets of associated classes for ζ and f ζ under a pushforward are equal follows by Lemma <ref>.
Let ζ be a zeta element. A presentation ζ = ∑_α [U : V_α] j_V_α(x_V_α) ⊗(g_α K )
is said to be optimal if V_α = H ∩ g_α L g_α^-1 for all α and the cosets H g_α K are pairwise disjoint. We say that ζ is optimal if it has an optimal presentation.
If there exists a zeta element, there exists an optimal one and such that the set of associated classes of the latter element under any pushforward contains those of the former.
Let ζ = ∑ _α∈ A [U : V_α ] j_V_α(x_V_α ) ⊗( g_α K ) be a presentation of a zeta element. Say there is an index β∈ A such that V_β≠ H ∩ g_β L g _β ^-1. Temporarily denote V_β ' : = H ∩ g_β L g_β^-1 and x_V_β' = _V_β, V_β' , * ( x_V_β ). Then
[U : V_α ] j_V_β ( x_V_β ) ⊗( g _β K ) , [U : V_β ' ] j_V_β' ( x_V_β ' ) ⊗( g_β K )
are equal in 𝒞( G/ K , M_H, Φ) _ H by Lemma <ref>. So the element ζ ' obtained by replacing the summand indexed by β in ζ with [U : V_β ' ] j_V_β' ( x_V_β ' ) ⊗( g_β K ) constitutes a zeta element. Since [V_β ' g_β L ] ∘_V_β , V_β ' , * = [V_β g_β L ]_* by Lemma <ref>, the associated classes for the chosen presentations of ζ and ζ' are equal. So we can assume that there is no such index β in our chosen ζ. But then f ζ for any f ∈𝒞(G/K, H ) is a zeta element (Lemma <ref>) which also possesses the same properties. By choosing f suitably, we can ensure that H g _α K are pairwise disjoint.
The terminology
`zeta element' is inspired by <cit.> and motivated by the fact that Hecke polynomials specialize to
zeta functions of Shimura varieties <cit.>, <cit.>.
§.§ Existence Criteria
In this section, we derive a necessary and sufficient criteria for the existence of zeta elements that can be applied in practice.
Retain the setup of <ref> and the notations introduced therein. For X ⊂ H a group and g ∈ G, we will often denote by X_g = X_g,K the intersection X ∩ g K g^-1. For the result below, we denote by τ be an arbitrary left H-representation over Φ.
For ξ∈𝒞( G/ K , τ ), we let f_ξ∈𝒞(G/K, H ) be an element satisfying the following condition: supp ( f_ξξ ) = ⊔_i g_i K and the H g_i K are pairwise disjoint (see Notation <ref>). It is clear that such an f_ξ exists for each ξ.
The class [ξ]_H∈𝒞(G/K, τ)_H vanishes if and only for each g ∈ G, the class of (f_ξξ)(g) ∈τ in the space τ_H_g of H_g-coinvariants vanishes.
For each α∈ H \ G / K, fix a choice g_α K ∈ G / K such that H g_α K = α and set
ℳ : = ⊕_α∈ H \ G / K τ_H_g_α .
We are going to define a Φ-linear map φ : 𝒞(G/K, τ ) →ℳ. Since 𝒞(G/K, τ) ≃⊕_gK ∈ G/ Kτ, it suffices to specify φ on simple tensors. Given x ⊗(gK) ∈𝒞(G/K, τ), let α : = H g K and pick h ∈ H such that h g K = g_α K. Then we set φ ( x ⊗(gK ) ) ∈ℳ to be the element whose component at any index β≠α vanishes and at α equals the H_g_α-coinvariant class of hx.
It is straightforward to verify φ is well-defined and factors through the quotient 𝒞(G/K, τ)_H.
We now prove the claim. If ξ = 0, the claim is obvious, so assume otherwise. Since [f_ξξ]_H = [ξ]_H, we may replace ξ with f_ξξ and assume wlog that elements of ( ξ)/K ⊂ G/K represent distinct cosets in H \ G / K. Say ξ = ∑_i x_i⊗(g_iK ). Denote α_i : = H g_i K and let h_i∈ H be such that h_i g_i = g_α_i K. If [ξ]_H vanishes,
so does φ(ξ) which in turn implies that the class of h_i x_i in H_g_α_i-coinvariants
of τ vanishes for each i. By conjugation, this is equivalent to the vanishing of H_g_i-coinvariant class of x_i for each i. This proves the only if direction. The if direction is straightforward since the vanishing of H_g_i-coinvariant class of x_i∈τ readily implies the same for the H_g_i-coinvariant
class of x_i⊗(g_i K ) ∈𝒞(G/K , τ ).
For g ∈ G, the g-twisted H-restriction or the (H,g)-restriction of ℌ is the function
𝔥_g : H →𝒪
given by h ↦ℌ(hg) for all h ∈ H.
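For instance (immediate from the definition), if ℌ = ch(K σ K) is the characteristic function of a single double coset, then 𝔥_g = ch( H ∩ K σ K g^-1 ), a compactly supported function on H; in particular 𝔥_g vanishes identically unless g ∈ H · K σ K. This is why only the double cosets in H \ H ·supp(ℌ) / K intervene in what follows.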
For each α∈ H \ H ·supp(ℌ) / K, choose a representative g_α∈ G for α. We denote (abusing notation) H_α = H ∩ g_α K g_α^-1, V_α = H ∩ g_α L g_α^-1, d_α = [ H_α : V_α ] and let 𝔥_α = 𝔥_g_α denote the (H, g_α)-restriction of ℌ.
There exists a zeta element for ( x_U , ℌ , L ) if and only if there exist classes x_V_α∈ M_H, 𝒪(V_α) for all α∈ H \ H ·supp(ℌ) / K
such that
𝔥_α^t· j_U(x_U) = j_H_α∘_V_α, H_α,*(x_V_α)
in M _H, Φ.
Moreover if y_L is an associated class for a given zeta element under ι_* and M_H, is -torsion free, the classes x_V_α satisfying the criteria above can be picked so that _L, K, * ( y _ L ) = ∑_α [V_α g_α K ]_* (x_V_α ) where the sum runs over
α∈ H \ ( H ·ℌ ) / K.
For notational convenience, we will denote by A := H \ H·supp(ℌ) / K and x := j_U(x_U)
in the proof. Since U \ G / K
→ K \ G / K is surjective and has finite fibers, we have a natural injection 𝒞_𝒪( K \ G / K ) ↪𝒞_𝒪( U \ G / K ) given by ( K σ K ) ↦∑ ( U τ K ) where τ runs over U \ K σ K / K. Via this map, we consider ℌ as an element in 𝒞_𝒪( U \ G / K ). We assume that
ℌ = ∑
_ j ∈ J c_j ( U σ _j K )
where J is a finite indexing set, U σ_j K are pairwise disjoint double cosets and c_j≠ 0 for all j ∈ J. For α∈ A, let J_α⊂ J be the set of all j ∈ J such that H σ _j K = α. For each j ∈ J_α, let h_j∈ H
be such that σ_j K = h_j
g_α K. Then U h_j H_α∈ U \ H / H_α is independent of the choice of h_j and
𝔥_α := ∑ _ j ∈ J_α c_j ( U h_j H_α ) ∈𝒞_𝒪( U \ H / H_α ) .
(⇐) Assume that there exist x_V_α∈ M_H,𝒪( V_α ) satisfying the equality in the statement.
We claim that
ζ=∑_α∈ A [ U : V_α ] j_V_α( x_V_α ) ⊗(
g_α K ) is a zeta element. As ζ is clearly 𝒪-integral, we need only show that x ⊗ℌ≃ζ. To this end, note that
x ⊗ℌ ≃∑_ j ∈ J c_j [ U : U_σ_j ] (
x ⊗ ( σ_j K ) )
≃∑_j ∈ J c_j
[ U : H_σ_j ]
[U σ_j K ]_* ( x ⊗ ( σ_j K ) )
≃∑_α∈ A
[ U :
H_α ] ∑_j ∈ J_α c_j [U σ _j K]_* ( h_j^-1 x ⊗ ( g_α K ) )
where the third relation uses that μ_H ( H_σ_j) = μ_H( H_α ) for
j ∈ J_α. For each α∈ A, let θ _α∈Φ [H_g_α] denote the sum over
a set of representatives of H_α / V_α and for each j ∈ J_α, let ς_j∈Φ[H_α] denote the sum over a set of representatives for H_α / ( h_j^-1 U _σ_j h_j ). By Lemma <ref> and Lemma <ref>, we have
𝔥_α^t· x = ∑_ j ∈ J_α c_jς_j h_j^-1 x,
j_H_g_α∘_V_α, H_g_α,*(x_V_α) = θ_α j_V_α ( x_V_α )
Now note that deg ς_j = [H_α : h_j^-1 U_σ_j h_j ] = deg [U σ_j K ]_*, where the degree of an element in a group algebra denotes its image under the augmentation map. Thus for each α∈ A,
∑
_ j ∈ J_α c_jdeg [U σ _j K]_* ( h_j ^ - 1
x ⊗ ( g_α K ) ) ≃∑ _ j ∈ J_α
c_j ( ς_j h_j ^-1 x ⊗ ( g_α K ) )
= ( 𝔥_α^t·
x ) ⊗ ( g_α K )
= ( j_H_α∘ pr_V_α, H_α, * (x_V_α ) ) ⊗( g_α K )
= ( θ_α j_V_α ( x_V_α ) ) ⊗( g_α K )
≃ [H_α : V_α ] · j_V_α ( x_V_α ) ⊗( g_α K )
Putting everything together,
we deduce that
x ⊗ℌ ≃∑ _α∈ A
[U : H_α] [ H_α : V_α ] · j_V_α ( x_V_α ) ⊗(g_α K )
= ∑_α∈ A [U : V_α ] j_V_α(x_V_α ) ⊗( g_α K ) = ζ .
This completes the proof of the if direction.
(⇒) Suppose that ζ is a zeta element for (x_U, ℌ, L ). Invoking Lemma
<ref>, we may assume that ζ is optimal. Say ζ = ∑ _ β∈ B [U : V_β ] j_V_β(x_V_β) ⊗( g_β K) is an optimal presentation where B is some finite indexing set. We identify B with a subset of H \ G / K by identifying β∈ B with H g_β K ∈ H \ G / K. Extending B by adding zero summands to ζ if necessary, we may assume that A ⊂ B.
Lemma <ref> allows us to further assume that { g_α K | α∈ A }⊆{ g_β K | β∈ B }.
We claim that x_V_β for β∈ A are the desired elements. Set 𝔥_β : = 0 and d_β : = [H_g_β : V_β ] for β∈ B
∖ A. Our calculation in the first part and the fact that ζ≃ x ⊗ℌ imply that
∑_β∈ B [U : H_g_β ] (
d_β j_V_β ( x_V_β ) - 𝔥_β ^t · x ) ⊗(g_β K ) ≃ 0 .
Lemma <ref> now implies that the H_g_β-coinvariant class of d_β j_V_β (x_V_β ) - 𝔥_β^t· x vanishes for each β∈
B. Fix a β∈ B for the remainder of this paragraph and write
d_β j_V_β ( x_V_β ) - 𝔥_β^t· x = ∑_i=1^k (γ _i -1) x_i
where γ _i∈ H_g_β and x_i∈ M _H,Φ. Let W ⊂ H_g_β be a normal compact open subgroup contained in V_β
such that W fixes x_i for each i = 1, …, k. If β∈ A,
we require in addition that W ⊂ h_j ^-1 U h_j∩ g_β K g_β^-1 for each j ∈ J_β.
Let Q = H_g_β/W and
e_Q∈Φ[H_g_β] denote |Q|^-1 times the sum over a set of representatives in H_g_β for Q. Then left multiplication of elements in M_H,Φ by e_Q annihilates (γ_i - 1) x_i for all i = 1, …, k, stabilizes 𝔥_β^t· x ∈ (M_H,Φ)^H_g_β and sends d_β j_V_β (x_V_β ) to j_H_g_β∘_V_β, H_g_β, * ( x_V_β ). Thus multiplying (<ref>) by e _ Q on both sides yields an equality involving 𝔥_g_β and x_V_β. The equalities for β∈ A are the ones sought after. This completes the proof of the only if direction.
It remains to prove the second claim. So assume that M_H,𝒪 is 𝒪-torsion free and let y_L be an associated class for an arbitrary zeta element. By the second part of Lemma <ref>, we may pick the optimal presentation for ζ (as we did at the start of the if direction above) to further ensure that
y_L = ∑_β∈ B [V_β g_β L]_* ( x_V_β ) .
Since
pr_L, K, *∘ [ V_β g_β L ]_* = [V_β g_β K ]_* for each β∈ B (see Lemma <ref>), it suffices to show that [V_β g_β K]_* ( x_V_β ) = 0 for each β∈ B ∖ A to establish the second claim. Torsion freeness of M_H,𝒪 implies that
j_H_g_β : M_H,𝒪( H_g_β ) →M_H, Φ
is injective for all β.
Thus the conclusion of the previous paragraph gives pr_V_β, H_g_β, * ( x_V_β ) = 0 for all β∈ B ∖ A (recall that 𝔥_β = 0 for such β). Invoking Lemma <ref> again, we get that [V_β g_β K ]_*( x_V_β ) = [H_g_β g_β K ]_*∘ pr_V_β, H_g_β, * ( x_V_β ) = 0
which finishes the proof.
A zeta element for ( x_U , ℌ , L ) exists over Φ if and only if one exists over Frac(𝒪).
The if direction is trivial and the only if direction follows by Theorem <ref> and injectivity of M_H, Frac(𝒪)( V ) → M_H, Φ(V) for any V ∈Υ_H.
Suppose in the notation introduced at the start of the proof of Theorem <ref> that for each α∈ H \ H ·(ℌ) / K,
* j_U(x_U) ∈M_H, Φ lifts to a class in M_H, ( H _ α )[this condition is automatic if H_α⊆ U],
* for each j ∈ J _ α, we have h_j^-1· j_U(x_U) = a _ j j_U(x_U ) for some a _ j ∈Φ.
Then there exists a zeta element for ( x_U , ℌ , L ) if ∑ _j ∈ J_α c_j a_j deg [ U σ_j K ]_*∈ d_α𝒪
for all α.
For each α, let x_H_α∈ M_H,𝒪( H_α ) denote an element satisfying
j_H_α ( x_H_α ) = j_U( x_U ). Then
( H_α h_j^-1 U ) · j_U(x_U) equals a_j deg [U σ_j K ]_*· j_U(x_U) for each j ∈ J_α.
So by Theorem <ref>, a zeta element exists in this case if (and only if) there exist x_V_α∈ M_H, ( V_α ) for each α such that
∑ _j ∈ J_α ( c_j a_jdeg [U σ_j K ]_* )
j_U ( x_U ) = j_H_α∘pr_V_α, H_α , *(x_V_α) .
But if ∑_j ∈ J_α c_j a_j deg [ U σ_j K ]_* = d_α f_α for some f_α∈𝒪,
(<ref>) holds with
x_V_α := pr_V_α, H_α^* ( f_α x_H_α ).
Suppose that M_H, 𝒪 is the trivial functor on H and x_U∈ M_H,𝒪(U) = 𝒪 is an invertible element. Then a zeta element exists for ( x_U , ℌ , L ) if and only if
deg ( 𝔥_α^t ) ∈ d_α𝒪
for all α∈ H \ H·supp(ℌ) / K.
The if part is clear since the conditions of Corollary <ref> are satisfied with a_j = 1 and the sum ∑_j ∈ J_α c_j deg [U σ_j K]_* equals deg ( 𝔥_α^t ).
For the only if part, note that (<ref>) in the previous proof is also
necessary.
So we may assume that there exist x_V_α∈ M_H,𝒪(V_α) = 𝒪 such that (<ref>) holds. Now x_U = x_H_α∈𝒪,
x_V_α = f_α x_U where f_α = x_V_α· x_U^-1∈𝒪 and pr_V_α, H_g_α, * (x_V_α ) = d_αf_α x_U. Thus
(<ref>) is equivalent to
∑_j ∈ J_α ( c_j deg [U σ_j K ]_* ) x_U = d_α f_α x_U
and the claim follows by multiplying by x_U^-1 on both sides.
The next result is included for completeness and shows that if the norm relation problem <ref> is trivial, so is the existence of a zeta element.
If 𝔥_α^t· j_U(x_U ) ∈M_H, Φ lifts to a class in d_α· M_H,𝒪( H_α ) for all α∈ H \ H·supp( ℌ ) / K, a zeta element exists for (x_U , ℌ, L ). The lifting condition holds automatically for an α if d_α is invertible in 𝒪. In particular, a zeta element exists unconditionally if d is invertible in 𝒪.
Let x_H_α∈ M_H, ( H_α ) be such that j_H_α ( d_α x_H_α ) =
𝔥_α ^t· j_U ( x_U ). The criteria of Theorem <ref> is
then satisfied by taking x_V_α : = _V_α , H_g_α ^* ( x_H_α ) ∈ M_H,(V_α). By Lemma <ref>,
we have
j_H_α ( [ U h_j H_α ]_* ( x_U ) ) = ( H_α h_j^-1 U ) · j_U ( x_U ) .
Since [H_α h_j^-1 U ] ( x_U ) ∈ M_H, (H_α ), the class
𝔥_α^t· j_U(x_U) always lifts to an element in M_H, ( H_g_α ). In particular when d_α∈ ^ ×, this class lifts to an element in d_α· M_H , (H_g_α ) = M_H, ( H _ g_α ).
As noted in the proof above, each 𝔥_α gives rise to an 𝒪-linear map
𝔥_α,* : M_H,𝒪(U) → M_H,𝒪(H_g_α)
given by Hecke correspondences in the covariant convention.
Theorem <ref> then says that constructing a zeta element amounts to finding x_V_α∈ M_H, (V_α) such that
j_H_g_α∘𝔥_α , *
( x_U ) = j_H_α∘_V_α , H_α , * ( x_V_α ) .
Using this, we can record the following version of Theorem <ref>.
Suppose M_H,𝒪 is 𝒪-torsion free. Then a zeta element exists for (x_U , ℌ, L) if and only if there exist x_V_α∈ M_H,𝒪(V_α ) such that
𝔥_α, * (x_U) = pr_V_α, H_α, * ( x_V_α )
for all α∈ H \
H · ( ℌ ) / K.
Moreover if y_L is an associated class for a zeta element under ι _*, the classes x_V_α can be picked to ensure that _L,K,*(y_L) = ∑_α [V_α g_α K]_* ( x_V_α ).
Let ℌ' ∈𝒞_𝒪( K \ G / K ) be such that ℌ - ℌ' ∈ d ·𝒞_𝒪( K \ G / K ). Then there exists a zeta element for (x_U , ℌ , L ) if and only if there exists one for ( x_U , ℌ' , L ).
For g ∈ G, let 𝔥_g' ∈𝒞_𝒪 ( U \ H / H_g ) denote the (H,g)-restriction of ℌ'. Then 𝔥_g - 𝔥_g' ∈ d ·𝒞_𝒪(U\ H / H_g ).
Since d · M_H, ( H ∩ g K g^-1 ) is in the image of the trace map from M_H, 𝒪(H ∩ g L g^-1), the claim follows.
The motivation behind these criteria
is that in practice the source functor M_H, is much better understood (e.g., ones arising from cycles or Eisenstein classes) than the target M_G, and the results above provide a means for parlaying this additional knowledge (and that of the Hecke polynomial) for Euler system style relations. In fact in all the cases that we will consider, M_H, will be a space of functions on a suitable topological space that would parametrize classes in the cohomology of Shimura varieties. In <ref> we study the trace map for such spaces in detail.
For the case of cycles coming from a sub-Shimura datum, the collection of fundamental classes of the sub-Shimura variety constitutes the trivial functor on H (see <cit.> for a concrete instance). In this case, Corollary <ref> applies and proving norm relations amounts to verifying certain congruence conditions. One may of course use the finer structure of the connected components of a Shimura variety as prescribed by the reciprocity laws of <cit.>. See <cit.> where a general formula for the action of Hecke operators is provided.
We however point out that for the case considered in <cit.>, working with the trivial functor turns out to be necessary as the failure of axiom (SD3) precludes the possibility of describing the geometric connected components of the source Shimura variety.
While the if direction is the “useful" part of Theorem <ref>, the
only if direction provides strong evidence that one does not need to look beyond the test vector specified by twisted restrictions, say, in local zeta integral computations.
§.§ Handling Torsion
We now address the equality of norm relation asked for in Problem <ref> without forgoing torsion. We retain the setup at the start of <ref> and <ref>, in particular Notation <ref>.
Suppose that ι_* is Mackey and M_H,𝒪 is 𝒪-torsion free.
If a zeta element exists, any associated class y_L∈ M_G,(L)
satisfies
ℌ_*(y_K)=pr_L, K, *(y_L).
By Corollary <ref>, we can find x_V_α∈ M_H , ( V_α ) satisfying
_L, K , * ( y_L ) = ∑ _ α∈ A [ V_α g_α K ]_* ( x_V_α ) and
𝔥_α, * ( x_U ) = _V_α , H_α , * ( x_V_α ).
In the notation introduced at the start of the proof of Theorem <ref>, we see using
Lemma
<ref>
that
ℌ_*(y_K)
= ∑_j ∈ J c_j [U σ_j K]_*(x_U) =
∑_α∈ A ∑_j ∈ J_α c_j[U σ_j K]_*(x_U).
For j ∈ J, let W_j : =h_j ^-1 U_σ_j h_j. Then for j ∈ J_α, W_j = h_j^-1 U h_j∩ g_α K g_α^-1⊂ H ∩ g_α K g_α^-1 = H_α.
By Lemma <ref> and <ref>, we see that
ℌ_*( y_K ) =∑_α∈ A ∑_j ∈ J_α c_j[U_σ_jσ_j K]_*∘pr_U_σ_j, U^*(x_U)
=∑_α∈ A ∑_j ∈ J_α c_j[W_j g_α K]_*∘[h_j^-1]_ W_j, U ^*(x_U)
=∑_α∈ A ∑_j ∈ J_α c_j[H_α g_α K]_*∘pr_W_j, H_g_α
, *∘[h_j^-1]_W_j, U^*(x_U)
Now note that U ∩ h_j H_g_α h_j^-1 = U _ σ_j and h_j ^ - 1 U
h_j∩ H_α = W_j. Thus
pr_W_j, H_α , *∘[h_j^-1]_W_j, U^*(x_U)=[H_α h_j ^-1 U](x_U) = [U h_j H_α]_*(x_U)
(see the diagram below) and so ∑_j ∈ J_α c_j·_W_j, H_g_α, * ∘ [ h_j^-1 ]^* _ W_j , U ( x_U ) = 𝔥_α, * ( x_U ).
M_H, 𝒪 (U_σ_j) [r, "[h_j^-1]^*"] M_H, 𝒪(W_j) [dr, "_*"] M_H, 𝒪(V_α
) [d, "_*"]
M_H,(U) [ur,"^*"] [rrr, " [ U h_j H_α ]_* " , swap ] M_H, ( H_α )
Therefore by eq. (<ref>)
and Lemma <ref>, we have
ℌ_*(y_K)
= ∑_α∈ A [H_α g_α K ]_* ( 𝔥_α, * ( x_U ) )
=∑_α∈ A [H_α g_α K]_*∘pr_V_α, H_α, *(x_V_α)
=∑_α∈ A [V_α g_α K]_* (x_V_α ) =pr_L, K, *(y_L)
which finishes the proof.
In the proof above, the relation ℌ_*(y_K)=pr_L, K, *(y_L) in Theorem <ref> can also be derived under the weaker assumption that the relation (<ref>) is satisfied modulo the kernel of ι_* : M_H,𝒪(H_α) → M_G,𝒪(g_α K g_α^-1 ) for each α and even when M_H,𝒪 is not 𝒪-torsion free. In particular, this result (which is all we really need for Euler systems) can be stated
without ever referencing zeta elements. However as noted in the introduction, the notion of zeta elements connects the approach of <cit.>, <cit.> etc., with ours, and also “explains” the nature of integral test vectors chosen in these works.
In applications to Shimura varieties, one eventually projects the norm relations to a π_f-isotypical component of the cohomology of the target Shimura variety, where π_f is (the finite part
of) an irreducible cohomological automorphic representation of the target reductive group, in order to land in the first Galois cohomology H^1 of a Galois representation ρ_π in the multiplicity space of π. The projection step, to our knowledge, requires the coefficients to be in a field. Thus the information about torsion is lost anyway, i.e., one a priori obtains norm relations in the image of H^1 of a Galois stable lattice T_π⊂ρ_π inside H^1 of the Galois representation ρ_π. One way to retrieve the torsion in the norm relation after projecting to the Galois representation is to use Iwasawa theoretic arguments, e.g., see <cit.> or
<cit.>. We will however not address this question here.
§.§ Gluing Norm Relations
We now consider the `global' version of Problem <ref>. Let
I be an indexing set and
ι_v : H_v→ G_ v be a collection of embeddings of unimodular locally profinite groups indexed by v ∈ I. We will consider H_ v as a subgroup of G_ v via ι_v. For each v ∈ I, let K_v ⊂ G_ v be a compact open subgroup and set U_v := H ∩ K_v. Let G, H denote respectively the restricted direct product of G_v, H _v with respect to K_v, U_v over all v. Let K, U denote respectively the products of K_v, U_v over all v. For any finite subset ν⊂ I, we define G_ν = ∏_v ∈ν G_v and G^ν = G/G_ν. If ν = { v }, we denote G^ν simply as G^v. We similarly define the H, K and U versions.
For all but finitely many v ∈ I, say we are given L_v a normal compact open subgroup of K_v. Let I' ⊂
I denote the set of all such v and let 𝒩 denote the set of
all finite subsets of I '. Let Υ_G be a collection of compact open subgroups satisfying (T1)-(T3) that contains K and the groups L_vK^v
for all v ∈ I'.
Let Υ_H be a collection of compact open subgroups of H satisfying (T1)-(T3) and which contains ι^-1 ( Υ_G ). Fix an integral domain 𝒪 whose field of fractions Φ is a ℚ-algebra. Let
M_H,𝒪 : 𝒫(H, Υ_H) →𝒪-Mod, M_G,𝒪 : 𝒫(G, Υ_G ) →𝒪-Mod
be CoMack functors and ι_* : M_H,𝒪→ M_G,𝒪 be a Mackey pushforward. Let x_U∈ M_H,𝒪( U ) be a class and
denote y_K := ι_U,K,*(x_U) its image in M_G,𝒪(K). Suppose we are also given for each v ∈ I' an element ℌ_v∈ℋ_𝒪( K_v\ G_v / K_v ). Given any ν∈𝒩, any K' ∈Υ_G of the form K_ν K” with K”⊂ G^ν, we obtain by Lemma <ref> a well-defined 𝒪-linear endomorphism
ℌ_ν, * : M_G,𝒪( K' ) → M_G,𝒪( K' )
induced by the tensor product ℌ_ν := ( K” ) ⊗⊗_v ∈νℌ_v.
For ν∈𝒩, denote
K[ν] := K^ν×∏_v ∈ν L_v. If ν = { v }, we denote this group simply by K[v]. Note that K[ν] = K if ν = ∅. For ν, μ∈𝒩 that satisfy ν⊂μ, denote the pushforward M_G,𝒪(K[μ]) → M_G,𝒪(K[ν]) by pr_μ, ν, *.
Construct classes y_ν∈ M_G,𝒪(K[ν] ) for ν∈𝒩 such that y_∅ = y_K and for all μ, ν∈𝒩 satisfying ν⊂μ, we have ℌ_μ∖ν, * ( y_ν ) = pr_μ, ν, * ( y_μ ).
Our “resolution” to this problem is by assuming the existence of abstract zeta elements at each v in suitable RIC functors whose restricted tensor product parametrizes classes in M_H,𝒪. Let Υ_G_v denote the collection of all compact open subgroups that are obtained as finite intersections of conjugates of K_v and L_v. By Lemma <ref>, Υ_G_v satisfies (T1)-(T3). Then any compact open subgroup in ∏_v ∈ IΥ_G_v whose component at v equals K_v for all but finitely many v belongs to Υ_G. Let Υ_H_v = ι^-1_v ( Υ_G_v ) and let Υ_H, I⊂∏_v ∈ IΥ_H_v denote the collection of all subgroups whose component at v is U_v for all but finitely many v. Then Υ_H, I satisfies (T1)-(T3) and Υ_H, I⊂ι^-1 ( Υ_G ) ⊂Υ_H.
Suppose that there exists a
morphism φ : N → M_H,𝒪 where N : 𝒫(H, Υ_H,I) →𝒪-Mod is a restricted tensor product ⊗'_v ∈ I N_v of 𝒪-torsion free functors N_v : 𝒫(H_v, Υ_H_v) →𝒪-Mod taken with respect to a collection {ϕ_U_v∈ N_v( U_v ) }_v ∈ I that satisfies φ ( ⊗_v ∈ Iϕ_U_v ) = x_U. If a zeta element ζ_v ∈𝒞( G_v/ K_v , N_v, Φ ) exists for (ϕ_U_v , ℌ_v , L_v) for every v ∈ I', then there exist classes y_ν∈ M_G,𝒪(K[ν]) for each ν∈𝒩 such that y_∅ = y_K and
ℌ_μ∖ν , * ( y_ν ) = _μ , ν, * ( y_μ )
for all ν , μ∈𝒩 satisfying
ν⊂μ.
For v ∈ I', denote A_v := H_v\ H_v·supp(ℌ_v ) / K_v and for each α_v∈ A_v, let g_α_v∈ G_v be a representative for the class α_v. Denote H_α_v := H_v∩ g_α_v K_v g_α_v^-1, V_α_v := H_v∩ g_α_v L_v g_α_v^-1 and let 𝔥_α_v∈𝒞_𝒪( H_α_v\ H_v / U_v ) denote the (H_v, g_α_v)-restriction of ℌ_v with respect to g_α_v.
By Corollary <ref>, the existence of ζ_v
is equivalent to the existence of ϕ_α_v∈ N_v(V_α_v) for all α_v∈ A _v
such that
𝔥_α_v, * ( ϕ_U_v ) = _V_α_v, H_α_v , * ( ϕ_α_v ) .
Denote by i_* : N → M_G,𝒪 the pushforward given by the composition ι_*∘φ. Then i_* is Mackey since ι_* is. Recall that 𝒩 denotes the set of finite subsets of I'. For ν∈𝒩, we denote A_ν := ∏_v ∈ν A_v. Given a ν∈𝒩 and α = α_ν∈ A_ν, we let
α_v denote the v-th component of α for v ∈ν and set
H_α := ∏_v ∈ν H_α_v , V_α = ∏_v ∈ν V_α_v , g_α : = ∏_v ∈ν g_α_v ϕ_α = ⊗_v ∈νϕ_α_v∈⊗ _ v ∈ν N(V_α_v ) .
For ν∈𝒩, we let ϕ_U^ν denote
the restricted tensor product ⊗'_v ∉νϕ_U_v
and define
y_ν : = ∑ _α∈ A _ ν [U^νV_αg_α L_ν K^ν ]_* ( ϕ _ U ^ ν⊗ϕ_α ) ∈ M_G, (K[ν])
i.e., y_ν is the sum of classes obtained by applying mixed Hecke correspondences [U^ν V_α g_α L_ν K^ν ]_* : N(U^ν V_α) → M_G, (L_νK^ν ) = M_G, (K[ν]) to ϕ_U^ν⊗ϕ_α∈ N( U^ν V_α ) over all α∈ A_ν.
We claim that y_ν for ν∈𝒩 are the desired classes. It is clear that y_∅ = y_K as φ( ϕ_U ) = x_U. By Lemma <ref>, it suffices to prove the norm relation ℌ_μ∖ν, * (y_ν ) = _μ, ν, * ( y_μ ) for ν⊂μ such that μ∖ν = { v }. To this end, fix an α∈ A _ν for the remainder of this proof and
consider the inclusion Υ_H_v↪Υ_H (of sets) given by W_v↦ W_v V_α U^μ and the inclusion Υ_G_v↪Υ_G given by K_v ' ↦ K_v' L_ν K^μ.
Let
N_H_v, α : 𝒫(H_v, Υ_H_v) →-Mod, M_G_v, ν : 𝒫(G_v, Υ_H_v) →-Mod be respectively the functors obtained by fixing levels away from ν as specified by these inclusions. We then have a Mackey pushforward
_ v , α , * : N_H_v , α→ M_G_v , ν
where for a compatible pair (W_v, K_v') ∈Υ_H_v×Υ_G_v,
the map N_H_v, α(W_v ) → M_G_v, ν( K'_v ) is equal to map [U^μW^vV_α g_α L_ν K'_v K^μ]_* : N(U^μW_vV_α) → M_G, (L_ν K_v' K^ν).
Given ϕ_ W_v∈ N_v(W_v), we denote by ϕ_W_v, α∈ N_H_v , α ( W_v) the element
ϕ_U^μ⊗ϕ_W_v⊗ϕ_α∈ N_H_v, α ( W_v ) = N(U^μ W_v V_α). Similarly for any β_v∈ A_v, we let ϕ_β_v, α∈ N_H_v,α(V_v) denote the element ϕ_U^μ⊗ϕ_β_v⊗ϕ_α. Then for any β_v∈ A_v, we have
𝔥_β_v,* ( ϕ_U_v, α ) = ϕ_U^μ ⊗ 𝔥_β_v ,* ( ϕ_U_v ) ⊗ ϕ_α
= ϕ_U^μ ⊗ _V_β_v, H_β_v , * ( ϕ_β_v ) ⊗ϕ_α
= _V_β_v, H_β_v, * ( ϕ_β_v, α ) ∈ N_H_v , α (H_β_v )
A zeta element for the triple ( ϕ_U_v, α , ℌ_v , L_v ) therefore exists in 𝒞( G_v/K_v , N_H_v, α, Φ ) since the ϕ_β_v, α∈ N_H_v, α (V_β) (for β_v∈ A_v) satisfy the criteria Theorem <ref>. By Theorem
<ref>, we see that
ℌ_v, * ∘_v, α, * ( ϕ_U_v, α ) = ∑_β_v∈ A_v [V_β_v g_β_v K_v ]_* ( ϕ_β_v , α )
Therefore
_ν, μ, * (y_μ ) = ∑_α∈ A_ν∑ _ β_v∈ A_v [U^μ V_β_v V_α g_β_v g_α L_μ K^ν]_* ( ϕ_U^μ⊗ϕ_β_v⊗ϕ_α )
= ∑ _ α∈ A_ν∑_β_v∈ A_v [V_β_v g_β_v K_v]_* ( ϕ_β_v , α )
= ∑ _ α∈ A_νℌ_v∘ i _v, α, * ( ϕ_U_v, α )
= ∑_α∈ A_νℌ_v, * ∘ [U^ν V_α g _ α L_ν K^ν ]_* ( ϕ_U^ν⊗ϕ_α ) = ℌ_v, * ( y_ν )
which completes the proof.
The intended application
to Shimura varieties we have in mind is where we take I to be the set of all places where all groups at hand are unramified and reserve one element v_bad∈ I for all the bad places lumped together i.e., if S is the set of all bad places, G_v_bad = ∏_v ∈ S G_v, ϕ_v_bad = ϕ_U_S etc., and we take I' = I ∖{ v_bad}.
§.§ Traces in Schwartz spaces
Since the machinery developed so far
only allows us to recast the norm relation problem from a larger group to a smaller one, it is useful to have some class of functors where identifying the image of the trace map is a more straightforward check. For instance when M_H,𝒪 is the trivial functor, the trace map is multiplication by the degree and Corollary <ref> uses this to give us a congruence criterion involving certain mixed degrees. This applies to pushforwards of fundamental cycles. For Eisenstein classes and cycles constructed from connected components of Shimura varieties, the parameter spaces are certain adelic Schwartz spaces.
In this subsection, we study the image of the trace map for such spaces and derive an analogous congruence criteria.
Let H be a locally profinite group with identity element e and X a locally compact Hausdorff totally disconnected space endowed with a continuous right H-action X × H → X. By definition, X carries a basis of compact open neighbourhoods.
For a ring R, we denote by 𝒮_R(X) the R-module of locally constant compactly supported functions on X valued in R. Under the right translation action on functions, 𝒮_R(X) becomes a smooth left representation of H. In what follows, we will frequently use the following fact: the set of all compact open subsets of X is closed under finite unions, finite intersection and relative complements. Moreover if U ⊂ H is a compact open subgroup, then the set of compact open subsets of X that are invariant under U is such a collection as well.
Let W , V ⊂ H be compact open subgroups with V ⊂ W. We say that x ∈ X is (W,V)-smooth if there exist a V-invariant compact open neighbourhood Z of x such that Z γ for γ∈ V \ W are pairwise disjoint.
A W-invariant compact open neighbourhood Y ⊂ X is said to be (W,V)-smooth if Y = ⊔_γ∈ V \ W Z_γ such that Z_e is a V-invariant compact open neighbourhood of X and Z_γ = Z_e γ for all γ∈ V \ W.
If Y =
_γ∈ V \ W Z_γ is (W,V)-smooth, the points of Z_γ are (W, γ V γ^-1 )-smooth but not necessarily (W,V)-smooth unless V ⊴ W. It is clear that any (W,V)-smooth neighbourhood is also ( W, γ V γ^-1 )-smooth for all γ∈ W. Smooth neighbourhoods behave well with respect to finite unions, finite intersections and relative complements.
Suppose that Y, S ⊂ X are compact opens such that Y is (W , V)-smooth and S is W-invariant. Then Y - S and Y ∩ S are (W,V)-smooth. If S is also (W,V)-smooth, then so is Y ∪ S.
Let Y = ⊔_γ∈ V \ W Z_γ where Z_e is a V-invariant compact open and Z_γ = Z_eγ. Then Y ∩ S is a W-invariant compact open neighbourhood, Z_e∩ S is a V-invariant compact open neighbourhood contained in Y ∩ S and Z_γ∩ S = (Z_e∩ S ) γ. Thus Y ∩ S = ⊔_γ∈ V \ W (Z_e∩ S ) γ which implies (W,V)-smoothness of Y ∩ S. Similarly Y - S = ⊔_γ∈ V \ W (Z_e - S )γ. If S is also ( W, V )-smooth, then since
Y ∪ S = ( Y -
S ) ⊔ ( S ∩ Y ) ⊔ ( S - Y )
is a disjoint union of (W,V)-smooth neighbourhoods, Y ∪ S is (W,V)-smooth as well.
Suppose that S ⊂ X is a W-invariant compact open subset that admits a covering by (W,V)-smooth neighbourhoods of X. Then S is (W,V)-smooth.
For all x ∈ S, let Y_x⊂ X denote a (W,V)-smooth neighbourhood around x. By Lemma <ref>, Y_x∩ S is (W,V )-smooth and we may therefore assume that Y_x⊂ S for all x ∈ S. Since S is compact and S = ⋃ _ x ∈ S Y_x, we have S = ⋃ _ i = 1 ^ n Y_i where Y_1 , …, Y_n form a finite subcollection of Y_x. Thus S is a finite union of (W, V)-smooth neighbourhoods and is therefore itself (W,V)-smooth by Lemma <ref>.
Next we have the following criterion for checking (W,V)-smoothness of a point. For x ∈ X, let Stab_W(x) denote the stabilizer of x in W.
A point x is (W,V)-smooth if and only if Stab_W ( x ) ⊂ V.
The only if direction is clear, so assume that Stab_W(x) ⊂ V. Let U ⊂ V be a compact open subgroup that is normal in W. For σ∈ W, let C_σ : = x σ U ⊂ X denote the U-orbit of x σ. Thus two such subsets are disjoint if they are distinct. By continuity of the action of H, C_σ are compact and therefore closed in X. Since U ⊴ W, we have C_σ = x U σ and C_στ = x σ U ·τ U. Thus U \ W acts transitively on the orbit space ( xW )/ U = { C_σ | σ∈ W } via the right action (C_σ , U τ ) ↦ C_στ. Let U ^∘ denote the inverse image in W under W ↠ U \ W of the stabilizer of C_ e under this action. Clearly Stab_W(x) ⊂ U ^ ∘. If γ∈ U^∘, then x γ = x u for some u ∈ U by definition. This implies that u γ^-1∈Stab_W(x) ⊂ V and since U ⊂ V, we have γ∈ V. So U^∘ is a compact open subgroup of W such that Stab_W(x) ⊂ U^∘⊂ V .
It therefore suffices to show that x is (W,U^∘ )-smooth.
Let γ_1 , …, γ_n∈ W be a set of representatives for U ^ ∘\ W, δ_1 , …, δ_m∈ U ^ ∘ be a set of representatives for U \ U^∘ and denote C_i : = C_γ_i. Then C_i for i = 1 ,…, n are pairwise disjoint and each C_i is stabilized (as a set) by δ_j,i : = γ_i^-1δ_jγ_i for all j = 1 , … , m. For any compact open neighbourhood T of x, X' := T W is a compact open neighbourhood of X that contains C_i for all i. Since X' is compact Hausdorff, it is normal and we may therefore choose compact open neighbourhoods S_i contained in X'
such that S_i contains C_i and S_1 , …, S_n are pairwise disjoint. For each fixed k = 1 , … n, ℓ = 1 , … , m, let
Z _ k , ℓ : = S_k U δ_ℓ,k - ⋃ _ i ≠ k ⋃_j = 1 ^m S _ i U δ_j,i
where i runs over all integers 1 to n except for k. Since S_i U δ_j, i = S_iδ_j, i U by normality of U in W and { S_iδ_j , i U | j = 1 , …, m , i = 1 , … , n } is a collection of U-invariant compact open neighbourhoods, Z_k , ℓ are U-invariant compact open neighbourhoods as well. By construction, Z_k,ℓ intersects Z_k' , ℓ' if and only if k = k '. We claim that x δ_ℓγ_k = x γ_kδ_ℓ, k ∈ C_γ_k⊆ S_k U δ_ℓ, k is a member of Z_k, ℓ. Suppose for the sake of deriving a
contradiction that x δ_ℓγ_k∈ S_i U δ_j , i for some i , j with i ≠ k, so that
x δ_ℓγ_kδ_j, i ^-1 U = C_eδ_ℓγ_kδ_j, i ^-1 = C_eγ_kδ_ j , i ^ - 1 = C_γ_kδ_j, i ^-1
intersects S_i. As C_γ_i is the only element in { C_σ | σ∈ W } contained in S_i, this can only happen if C_γ_kδ_j,i ^ - 1 = C_γ_i or equivalently if C_γ_k = C_γ_iδ_j,i. But since δ_j,i stabilizes C_γ_i, this means that C_γ_k = C_γ_i which in turn implies
i = k, a contradiction. Thus x δ_ℓγ_k∈ Z_k, ℓ or equivalently, x ∈ Z_k , ℓγ_k ^-1δ_ℓ ^ - 1. Now let
Z : = ⋂_ k = 1 ^n⋂_ℓ = 1 ^ n Z _ k , ℓγ_k^-1δ_ℓ^-1 .
Then Z is a U-invariant compact open neighbourhood of x as each Z_k, ℓ is and U ⊴ W. Since Z δ_ℓγ_k⊆ Z_k, ℓ, Z δ_ℓγ_k and Z δ_ℓ'γ_k' are disjoint for any 1 ≤ℓ ' , ℓ '
≤ m, 1 ≤ k , k ' ≤ n with k ≠ k'. If we now let Z^∘ : = ⋃_ℓ = 1 ^ m Z δ_ℓ, then Z^∘ is U^∘-invariant compact open subset of X such that Z^∘γ_1 , … , Z^∘γ_n are pairwise disjoint. Thus x is (W,U^∘ )-smooth.
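For instance (to illustrate the criterion just proved; this toy example plays no role in what follows), let H = W = ℤ_p^× act on X = ℚ_p by multiplication and let V ⊂ W be any open subgroup. A nonzero x ∈ℚ_p satisfies ux = x only for u = 1, so Stab_W(x) is trivial and every nonzero point is (W,V)-smooth, whereas the origin has Stab_W(0) = W and is therefore (W,V)-smooth only when V = W.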
For each x ∈ X, we let V_x denote the subgroup of W generated by V and Stab_W(x). By Lemma <ref> V_x is the unique smallest subgroup of W containing V such that x is (W,V_x )-smooth. Let 𝒰 be the lattice of subgroups of W that contain V. For 𝒯⊂𝒰 a sub-collection, we denote by max 𝒯 the set of maximal elements of 𝒯 i.e., U ∈max 𝒯 if no U' ∈𝒯 properly contains U. We have a filtration
𝒰 = 𝒰_0⊋𝒰_1⊋…⊋𝒰_N = { V }
defined inductively as 𝒰 _k+1 : = 𝒰 _k - max 𝒰 _k for k = 0, … , N - 1. We let dep : 𝒰→{ 0 , …, N } be the function U ↦ k where k is the largest integer such that U ∈𝒰_k i.e., k is the unique integer such that U ∈max 𝒰_k. It is clear that dep is constant on conjugacy classes of subgroups. We let
dep = dep_W,V : X →{ 0 , 1, 2, …, N }
x ↦dep(V_x )
and refer to dep(x) as the depth of x. We say that S ⊂ X has depth k if inf{dep(x) | x ∈ S } = k.
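For instance, if V ⊴ W and W/V is cyclic of prime power order p^N, the subgroups of W containing V form a chain V = U_N⊊⋯⊊ U_1⊊ U_0 = W; the filtration above is then 𝒰_k = { U_k, …, U_N}, so dep(U_k) = k and a point x ∈ X has depth k precisely when the subgroup generated by Stab_W(x) and V equals U_k.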
If S ⊂ X has depth k,
the set of depth k points in S is closed in S.
Let T ⊂ S be the set of depth k points. By assumption, the depth of any point in S - T is at least k + 1. For x ∈ S - T, choose Y_x a (W, V_x)-smooth neighbourhood of x in X. Then each y ∈ Y_x is (W, γ V _ x γ^-1 ) smooth for some γ∈ W. Thus V_y⊆γ V_xγ^-1 by Lemma <ref> and so
dep(y) = dep ( V_y ) ≥dep ( γ V_xγ^-1 ) = dep ( V_x
) = dep(x) > k
for all y ∈ Y_x. Therefore Y_x∩ T = ∅ which makes Y_x∩ S an open (relative to S) neighbourhood of x contained in S - T. As x was arbitrary, S - T is open in S which makes T closed in S.
If V ⊴ W, then V_x = Stab_W(x) · V and [V_x : V ] = [ Stab_W(x) : Stab_W(x) ∩ V ]. The next result provides a necessary and sufficient criteria for a given function in 𝒮_R(X) to be the trace of a V-invariant function in terms of these indices.
Suppose that V ⊴ W, R is an integral domain and ϕ∈𝒮_R(X) ^ W.
Then there exists ψ∈𝒮_R( X )^V such that ϕ = ∑ _γ∈ W / V γ·ψ if and only if for all x ∈supp( ϕ ), ϕ(x) ∈ [V_x : V ] R.
(⇒) Let ψ∈𝒮_R(X)^V be an element satisfying the trace condition. For x ∈ X, let V_x be as above, γ_1 , …, γ_n∈ W be a set of representatives for W / V_x and δ_1, …, δ_m∈ V_x be a set of representatives of V_x / V, so that γ_iδ_j run over a set of representatives for W / V. As V_x = Stab_W(x) V, we may assume that δ_i (and therefore δ_i^-1) belong to Stab_W(x). Since W / V is a group, δ_j^-1γ_i^-1 also run over a set of representatives for W / V. Therefore
ϕ(x) = ∑ _γ∈ W / V γ·ψ (x) = ∑ _ i = 1 ^ n∑_j =1^mψ ( x δ_j ^ - 1γ_i ^-1 )
= ∑ _ i = 1 ^ n m ·ψ ( x γ_i ^ - 1 ) ∈ [V_x : V ] R .
(⇐) Set S := supp ϕ and N := dep V. By definition of 𝒮_R(X), S is a W-invariant compact open subset of X. We inductively define a sequence S_0 , … , S_N of W-invariant compact open subsets of S such that
* S = S_0⊔ S_1⊔…⊔ S_N,
* all depth k points of S are contained in S_0⊔…⊔ S_k for each 0 ≤ k ≤ N,
* each S_k admits a sub-partition _ U ∈max 𝒰_k S_U where S_U is a (W,U)-smooth neighbourhood on which ϕ is constant and valued in [U :V ] R.
We provide the inductive step for going from k - 1 to k which covers base case as well by taking k = 0, S_-1 = ∅. So assume that for k ∈{0 , …, N-1 }, the subsets S_0, …, S_k-1 have been constructed.
Let T_ k be the (possibly empty) set of all depth k points in
R_k : = S - _i = 0 ^ k - 1 S_ i
where R_k = S if k = 0. By construction, R_k is a W-invariant compact open subset of S and depth of R_k is at least k. By Lemma <ref>, T_k⊂ R_k is closed and therefore compact. For each x ∈ T_k, let Y_x be a (W,V_x)-smooth neighbourhood of x. By Lemma <ref>, we may assume Y_x⊂ R_k.
As ϕ is W-invariant and locally constant, x is contained in a W-invariant compact open neighbourhood on which ϕ is constant. By intersecting such a neighbourhood with Y_x if necessary, we may also
assume that ϕ is constant on Y_x for each x. Since T_k is compact and covered by Y_x, there exist x_1, …, x_n∈ T_k such that T_k⊆ Y_x_1∪…∪ Y_x_n. Let
S_k : = Y_x_1∪…∪ Y_x_n .
Clearly S_k is a W-invariant compact open subset of R_k since the Y_x are and S_k is disjoint from S_0 , …, S_k-1. By construction, all the depth k points of R_k are in S_k and thus all the depth k points of S are in S_0⊔…⊔ S_k. Let Y_i := Y_x_i - (Y_x_i+1∪…∪ Y_x_n ). Then the Y_i are (W, V_x_i )-smooth by Lemma <ref> and S_k = Y_1⊔…⊔ Y_n. As x_i∈ T_k, we have V_x_i∈max 𝒰_k and by construction, ϕ takes the constant value ϕ(x_i) ∈ [V_x_i : V] R on Y_i. For each U ∈max 𝒰_k, we let S_U := ⊔_V_x_i = U Y_i. Then S_k = ⊔_U ∈max𝒰_k S_U is the desired sub-partition
and the inductive step is complete.
Now for each U ∈𝒰, let Z_U⊂ S_U be a U-invariant neighbourhood whose U \ W translates partition S_U. We define ψ : X → R by
ψ ( x ) = [U : V ] ^-1ϕ(x) if x ∈ Z_U
0 otherwise
Then ψ is well-defined since for all x ∈ S_U, ϕ(x) = [U:V] · r for a unique r ∈ R - { 0 }. As ψ takes a non-zero constant value on each Z_U and is zero elsewhere, supp ψ = ⊔_U ∈𝒰 Z_U. As each Z_U is V-invariant, ψ is V-invariant. Thus ψ∈𝒮_R(X)^V. Let ϕ' = ∑_γ∈ W / V γ·ψ. As S is W-invariant and supp ψ⊆ S, supp ϕ' ⊆ S as well. Thus ϕ and ϕ' agree on X - S and we show that they agree on S as well. For each x ∈ S, there exists a unique U ∈𝒰 and a unique γ∈ U \ W (both of which depend on x) such that x γ∈ Z_U. Let γ_1 , …, γ_n∈ W be a set of representatives of U \ W
and δ_1, …, δ_m∈ U a set of representatives of V \ U. Then the γδ_jγ_i run over a set of representatives for V \ W = W / V. Since x γ∈ Z_U and Z_U is U-invariant, x γδ_jγ_i∈ Z_Uγ_i for all i, j.
Thus x γδ_jγ_i∈ Z_U if and only if γ_i represents the identity class in U \ W. One then easily sees that
ϕ'( x ) = ∑_i, jψ ( x γδ_jγ_i ) = ∑_jψ ( x γδ_j ) = [U : V] ψ (xγ) = ϕ( x) .
Hence ϕ = ϕ' and so ψ is the desired element.
Let ϕ∈𝒮_R(X)^W and let x_α∈ X for α∈ I be a set of representatives for supp( ϕ ) / W.
Then ϕ is the trace of an element of 𝒮_R(X)^V for V ⊴ W if and only if ϕ(x_α) ∈ [ V_x_α : V ] R for all α∈ I.
The only if direction is clear by Proposition <ref>. The if direction also follows from it since any x ∈ x_α W is (W, γ V_x_αγ^-1 )-smooth for some γ∈ W, so that
ϕ(x) = ϕ(x_α) ∈ [V_x_α : V] R = [V_x : V] R .
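As a simple sanity check of this criterion (an illustration of ours), suppose X is a single point with the trivial H-action, so that 𝒮_R(X) = R with trivial W-action. A nonzero ϕ = c ∈ R has V_x = W at the unique point, and indeed c is a trace ∑_γ∈ W/Vγ·ψ = [W:V]ψ for some ψ∈ R exactly when c ∈ [W:V]R. This recovers the congruence condition for the trivial functor mentioned at the start of this subsection.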
We resume the setup of <ref> and retain Notation <ref>. Assume moreover that M_H,𝒪 is the functor associated with the smooth H-representation 𝒮_𝒪(X). In particular, x_U is a U-invariant Schwartz function ϕ_U : X →𝒪.
Suppose that p ∈supp ϕ_U is an H-fixed point. Then a zeta element exists for (ϕ_U , ℌ, L) only if ϕ_U(p) ·deg(𝔥_α,*) ∈ [H_α : V_α] 𝒪 for all α∈ H \ H·supp(ℌ) / K.
By Theorem <ref>, a zeta element exists if and only if 𝔥_α^t·ϕ_U∈ M_H,𝒪( H_α ) is the trace of an element in M_H,𝒪(V_α ) = 𝒮_𝒪(X)^V_α. By Theorem <ref>, this can happen only if
( 𝔥_α^t·ϕ_U ) (p) ∈ [H_α : V_α] 𝒪 .
Since p is H-fixed, ( 𝔥_α^t·ϕ_U ) (p) = ϕ_U(p) ·deg(𝔥_α,*).
For Eisenstein classes, the local Schwartz functions are characteristic functions on lattices in certain vector spaces and the group H at hand acts via
linear transformations. The origin is therefore a fixed point for its action and Corollary <ref> provides a quicker initial check[this proved particularly helpful in <cit.> where the `convolution step' was quite involved] for applying the criteria of Theorem <ref>.
Incidentally, this is the same check as in Corollary <ref> which applies to fundamental cycles.
§.§ Miscellaneous results
We will study zeta elements for groups G that are a product of two groups, one of which is abelian, and it would be useful to record some auxiliary results that are helpful in applying the criteria to such groups.
Suppose for this subsection only that G = G_1× T where T is abelian with a unique maximal compact subgroup C. Suppose also that K = K_1× C, L = K_1× D where K_1⊂ G_1, D ⊂ C (so that d = [C:D]) and that
ℌ
= ∑_ k ∈ I e_ k ( K γ_ k ϕ_ k K ) ∈𝒞_ ( K \ G / K )
where e _ k ∈, γ _ k ∈ G_1 and ϕ_ k ∈ T.
Let ι_1 : H → G_1, ν : H → T denote the compositions H ↪ G → G_1 and H ↪ G → T respectively. We suppose that ι_1 is injective, so we may consider H, U as subgroups of G_1 as well as of G. When we consider H, U as subgroups of G_1, we denote them by H_1, U_1 respectively.
Suppose that K _1γ_ k K_1 = _ j ∈ J_ k U_1σ _ j K _1 where J_k is an indexing set and σ _ j ∈ G_1. Denote σ_j,k = σ_jϕ_ k and H_1, σ_j = H_1∩σ_j K _ 1 σ_j^-1
Then
* ℌ = ∑_ k ∈ I ∑ _ j ∈ J_k e_k ( U σ_j, k K )
* [U σ _j , k K ]_* = [ U_1σ_j K _1 ]_*,
* [H ∩σ_j,k K σ_j,k^-1 : H ∩σ_j,k L σ_j,k^-1 ] = [ H_1, σ_j : H_1, σ_j∩ν^-1(D) ].
Since ν is continuous and C is the unique maximal compact subgroup of T, the image under ν of any compact subgroup of H is contained in C. For (a), it suffices to note that
K_1γ_k K_1 = ⊔_j ∈ J_k U_1σ_j K_1 and K γ_kϕ_k K = ⊔_j ∈ J_k U σ_jϕ_k K
since K γ_kϕ_k K = K_1γ_k K_1×ϕ_k C and ν(U) ⊂ C. For (b), note that H ∩σ_j,k K σ_j,k^-1 = H ∩σ_j K σ_j^-1 as T is abelian. Since H_1∩σ_j K_1σ_j^-1 is compact, ν ( H_1∩σ_j K_1σ_j^-1 ) ⊂ C and therefore
H ∩σ_j, k K σ_j, k ^-1 = ι_1^-1 ( σ _j K_1σ_j^-1 ) ∩ν^-1(C) = H_1∩σ _j K_1σ_j^-1 .
Similarly U ∩σ_jϕ_k K ( σ_jϕ_k ) ^-1 =U _1∩σ_j K_1σ_j^-1 and (b) follows. The argument for (c) is similar.
§.§ A prototypical example
In this subsection, we show how the machinery above may be applied to the case of CM points on modular curves to derive the Hecke-Frobenius valued norm relations at a split prime, which is essentially the n = 2 case of the example studied in <ref>. See also <cit.> for a more thorough treatment.
Let E be an imaginary quadratic field. Set 𝐇 = Res_E/ℚ𝔾_m and 𝐆 = GL_2,ℚ. Fix a ℚ-basis of E and let ι : 𝐇→𝐆 be the resulting embedding. Let 𝐓 be the torus of norm one elements of E and set 𝐆̃ = 𝐆×𝐓. Let ν : 𝐇→𝐓 denote h ↦ h γ(h)^-1 where γ∈Gal( E / ℚ ) is the non-trivial element and let
ι_ν : 𝐇→𝐆̃ , h ↦ ( ι(h), ν(h) ) .
Then both ι and ι_ν are morphisms of Shimura data. The embedding ι signifies the construction of CM points on the modular curve. Under the Shimura-Deligne reciprocity law for tori, the extensions corresponding to 𝐓 by class field theory are anticyclotomic.
Let G_f, G̃_f, H_f, T_f denote the 𝔸_f-points of 𝐆, 𝐆̃, 𝐇, 𝐓 respectively.
Let Υ_G̃_f denote the collection of all neat compact open subgroups of G̃_f of the form K × C where K ⊂ G_f, C ⊂ T_f and let Υ_H_f denote the collection of all neat compact open subgroups of H_f. These collections satisfy (T1)-(T3) and ι_ν^-1 ( Υ_G̃_f ) ⊂Υ_H_f. For any rational prime p, the mappings
N_ℤ_p : Υ_H_f →ℤ_p-Mod, U ↦H^0_ét( Sh_𝐇(U) , ℤ_p ) and M_ℤ_p : Υ_G̃_f →ℤ_p-Mod, K̃↦H^2_ét( Sh_𝐆̃( K̃ ) , ℤ_p( 1 ) )
that send each compact open subgroup to the corresponding arithmetic étale cohomology of the corresponding Shimura variety over E constitute CoMack functors. We note that if K̃ := K × C, the Shimura variety Sh_𝐆̃(K̃ ) is the base change of the modular curve over ℚ of level K to the extension of E determined by the compact open subgroup C.
The embedding ι_ν : 𝐇→𝐆̃ induces a Mackey pushforward ι_* : N_ℤ_p→ M_ℤ_p of RIC functors. For each U, N_ℤ_p(U) is the free ℤ_p-module on the class of 1_Sh_𝐇(U) and N_ℤ_p is the trivial functor on Υ_H_f.
Let ℓ≠ p be a rational prime that is split in E. Then
𝐇_ℚ_ℓ≃𝔾_m×𝔾_m , 𝐓_ℚ_ℓ≃𝔾_m
where the isomorphisms are chosen so that the map ν is identified with the map that sends (h_1, h_2) ∈𝐇_ℚ_ℓ to h_2 / h_1∈𝐓_ℚ_ℓ. The particular choice is made so that the action of the uniformizer ℓ∈ℚ_ℓ^×≃𝐓(ℚ_ℓ) (in the contravariant convention) is identified with the action of the geometric Frobenius Frob_λ^-1 on cohomology, where λ corresponds to the first component in the identification 𝐇_ℚ_ℓ≃𝔾_m×𝔾_m.
Fix for the rest of this discussion a split prime ℓ as above and a compact open subgroup K̃ = K × C ∈Υ_G̃_f such that K = K^ℓK_ℓ, C = C^ℓ C_ℓ where K_ℓ = GL_2(ℤ_ℓ), C_ℓ = ℤ_ℓ^× and K^ℓ, C^ℓ are groups away from ℓ. Let U := ι^-1(K) and similarly
write U = U^ℓ U_ℓ where U_ℓ = ℤ_ℓ^××ℤ_ℓ^×. Let
ℌ_ℓ (X) := ℓ·ch ( K ) - ch ( K σ_ℓ K ) X + ch ( K γ_ℓ K) X^2 ∈ℋ_ℤ_p( K \ G_f / K ) [ X ]
where σ_ℓ := diag(ℓ, 1 ) and γ_ℓ := diag ( ℓ, ℓ ).
Then
ℌ̃_ℓ := ℌ_ℓ ( Frob_λ ) = ℌ_ℓ ( ℓ^-1 C ) ∈𝒞_ℤ_p(K̃\G̃_f / K̃ )
induces a ℤ_p-linear map ℌ̃_ℓ, * : M_ℤ_p( K̃ ) → M_ℤ_p(K̃). Let D = C^ℓ D_ℓ where D_ℓ = 1 + ℓℤ_ℓ and let x_U = 1_Sh_𝐇(U). Set L̃ = K × D. We ask if there is a zeta element for ( x_U , ℌ̃_ℓ , L̃ ). Recall that such an element would solve the corresponding question posed in <ref>. It is also clear that this checking can be done locally at the prime ℓ and that via Theorem
<ref>, one can produce a compatible system of such relations for such ℓ.
The local embedding H_ℓ↪G̃_ℓ is not the diagonal one on the GL_2(ℚ_ℓ) copy. We may however conjugate this embedding by an appropriate element of K_ℓ to study the zeta element problem and conjugate everything back at the end by the inverse of the said element[See <cit.> for details.]. So say that H_ℓ↪G̃_ℓ is the diagonal embedding where the first component of H_ℓ corresponds to the top left matrix entry in GL_2(ℚ_ℓ). Define the following elements of G_ℓ:
σ_1 = [ ℓ 0 ; 0 1 ], σ_2 = [ ℓ 1 ; 0 1 ], σ_3 = [ 1 0 ; 0 ℓ ] , σ_4 = [ ℓ 0 ; 0 ℓ ] , τ = [ 1 ℓ^-1 ; 0 1 ]
and set σ̃_i := ( σ_i, det( σ_i^-1) ). Then
ℌ̃_ℓ = ℓ·(U_ℓK̃_ℓ) - ( (U_ℓσ̃_1K̃_ℓ) + (U_ℓσ̃_2K̃_ℓ) + (U_ℓσ̃_3K̃_ℓ) ) + ( U_ℓσ̃_4K̃_ℓ) .
It is then clear that
g_0 : = (1,1) , g_1 : = ( τ , 1 ) , g_2 : = ( 1, ℓ ^ -2 )
form a complete system of representatives for H_ℓ\ H_ℓ·supp( ℌ̃_ℓ ) / K̃_ℓ.
For i = 0, 1, 2, let H_ℓ,i = H_ℓ∩ g_iK̃_ℓ g_i^-1 and let 𝔥_ℓ,i∈𝒞_ℤ_p(U_ℓ\ H_ℓ / H_ℓ,i )
denote the (H_ℓ, g_i)-restriction of ℌ̃_ℓ. Then
𝔥_ℓ,0 = ℓ· ( U_ℓ ) - (U_ℓ (ℓ, 1) U_ℓ )
𝔥_ℓ,1 = ( U_ℓ (ℓ, 1) H_ℓ, 1 )
𝔥_ℓ, 2 = (U_ℓ (1,ℓ) U_ℓ ) - ( U_ℓ (ℓ , ℓ ) U_ℓ )
from which it is easily seen that
deg(𝔥_ℓ,0,*) = ℓ - 1 , deg(𝔥_ℓ,1,*) = 1, deg(𝔥_ℓ,2,*) = 0 .
Finally, let d_i : = [ H_ℓ, i : H_ℓ∩ g_iL̃ _ℓ g_i^-1 ]. Then d_0 = d_2 = ℓ - 1, d_1 = 1.
Since
deg( 𝔥_ℓ,i,*) ∈ d_iℤ_p
for i = 0, 1, 2, Corollary <ref> implies that a zeta element exists for (1_Sh_𝐇(U) , ℌ̃_ℓ, L̃).
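To make this concrete, one can trace through the proof of Corollary <ref> in the present trivial-functor situation (the auxiliary scalars f_i below are ours and correspond to the f_α appearing in that proof): with V_i := H_ℓ∩ g_iL̃_ℓ g_i^-1, the classes x_V_i∈ N_ℤ_p(V_i) = ℤ_p may be taken to be x_V_i = f_i where d_i f_i = deg(𝔥_ℓ,i,*), that is
f_0 = (ℓ - 1)/(ℓ - 1) = 1 , f_1 = 1/1 = 1 , f_2 = 0/(ℓ - 1) = 0 ,
so the summand of the resulting zeta element indexed by g_2 vanishes.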
Note that our zeta element is supported on g_0K̃∪ g_1K̃, even though H_ℓ\ H_ℓ·supp( ℌ̃_ℓ ) / K̃_ℓ has three elements. See also Remark <ref> for a similar observation.
§ HECKE POLYNOMIALS
In this section, we describe the Hecke algebra valued polynomials associated with representations of the Langlands dual of a reductive group and record some techniques that can be used to compute them. On the way, we fix notations and terminology that will be used in carrying out the computations in Part II of this article.
Throughout this section, we let F denote a local field of characteristic zero, 𝒪_F its ring of integers, ϖ a uniformizer, 𝕜 = 𝒪_F / ϖ its residue field, q=|𝕜| the cardinality of 𝕜 and ord : F →ℤ∪{∞} the additive valuation assigning 1 to ϖ. We pick once and for all [𝕜] ⊂𝒪_F a fixed choice of representatives for 𝕜. We let F̅ denote an algebraic closure of F and let F^unr⊂F̅ denote the maximal unramified subextension. For M a free abelian group of finite rank, we will often denote by M_ℚ the ℚ-vector space M ⊗_ℤℚ.
§.§ Root data
Let 𝐆 be an unramified
reductive group over F. This means that 𝐆 is quasi-split over F and split over a finite unramified extension of F. Let 𝐀 be a maximal F-split torus in 𝐆, 𝐏⊃𝐀 an F-Borel subgroup and 𝐍 the unipotent radical of 𝐏. Let 𝐌:=𝐙(𝐀) be the centralizer of 𝐀 which is a maximal F-torus in 𝐆. We will denote by G, A, P, M, N the corresponding groups of F-points of 𝐆, 𝐀, 𝐏, 𝐌, 𝐍 respectively. Let X^*(𝐌) (resp., X_*(𝐌)) denote the group of characters (resp., cocharacters) of 𝐌 and let
⟨-,-⟩: X_*(𝐌) × X^*(𝐌) →ℤ
denote the natural integral pairing. The natural extension of (<ref>) to X_*(𝐌)_× X^*(𝐌)_→ is also denoted as ⟨ - , - ⟩.
Let
Φ_F̅⊂ X^*(𝐌) denote the set of absolute roots of 𝐆 with respect to 𝐌, Φ^+_F̅⊂Φ _ F̅ the set of positive roots associated with 𝐏 and Δ_F̅ a base for Φ_F̅. For α∈Φ _ F̅, we denote by α^∨∈ X_* ( 𝐌 ) the corresponding coroot and denote the set of coroots by Φ_F̅ ^ ∨. Since Φ_F̅ is reduced, Δ_F̅^∨ = {α^∨ | α∈Δ_F̅} is a base for the positive coroots in Φ_F̅^∨. We let 𝐖_𝐌 = 𝐍 _(𝐌) / 𝐌 denote the absolute Weyl group scheme of 𝐆 and set W_M : = 𝐖_𝐌(F̅ ).
Then the left action of W_M on X^*(𝐌), X_*(𝐌) induced by the conjugation action on 𝐌_F̅ identifies it with the Weyl group of the (absolute) root datum ( X^*(𝐌), Φ_F̅, X_*(𝐌), Φ_F̅^∨ ). Thus for α∈Φ_F̅, there is a reflection element s_α = s_α^∨∈ W_M that acts on λ∈ X_*(𝐌) and χ∈ X^*(𝐌) via
λ↦λ - ⟨λ , α⟩α ^ ∨ χ↦χ - ⟨α^∨, χ⟩α
The pair ( W_M, {s_α}_α∈Δ_F̅ ) is a Coxeter system. We let ℓ_F̅ : W_M →ℤ_≥ 0 denote the corresponding length function.
We will also need to work with the relative root datum of . Let X^*(𝐀), X_*(𝐀) denote respectively the set of characters and
cocharacters of 𝐀. As 𝐀 is split, all characters and cocharacters are defined over F.
Let
res : X^* ( 𝐌 ) ↠ X ^ * ( 𝐀 ) , cores : X_* ( 𝐀 ) ↪ X_* ( 𝐌 )
denote respectively the natural surjection and injection induced by 𝐀↪𝐌. Let Γ = Gal(F^unr / F ) ≃ℤ̂ denote the unramified Galois group of F. Then Γ acts on X^*(𝐌) via (γ,χ) ↦γχ( γ^-1 x ) where γ∈Γ, χ∈ X^*(𝐌) and x ∈𝐌( F^unr ). Similarly Γ acts on X_*(𝐌)
and the pairing (<ref>) is Γ-invariant under these actions.
Since 𝐌 is defined over F, the action of Γ preserves Φ_F̅, Φ_F̅^∨ (as sets).
Since 𝐏 is defined over F, Γ also preserves Δ_F̅, Δ_F̅
^ ∨ and the action of Γ on these bases is via diagram automorphisms. We have
res : X^*(𝐌)_Γ, free≃ X^*(𝐀) , cores : X_*(𝐀) ≃ X_*(𝐌)^Γ
where X^*(𝐌)_Γ, free denotes the quotient of the group of Γ-coinvariants by its torsion subgroup.
The pairing
⟨-,-⟩: X_*(𝐀) × X^*(𝐀) →ℤ
is compatible with (<ref>) i.e., if λ : _m→𝐀, χ : 𝐌→_m are homormophisms defined over F̅, then ⟨cores λ , χ⟩ = ⟨λ, res χ⟩.
Let 𝐖 _𝐀 : = 𝐍 _(𝐀)/𝐌 denote the Weyl group scheme of with respect to 𝐀.
Then 𝐖 _𝐀 is a constant group scheme over F and
𝐖 _𝐀(F) = 𝐍 _(𝐀)(F)/ 𝐌(F) = N_G(A)/M
by <cit.>. Using quasi-splitness of , it can also be shown that 𝐖 _𝐀(F) = 𝐖 _𝐌(F ) (<cit.>) and that 𝐖 _𝐌(F) = 𝐍 _(𝐌)(F)/ 𝐌 (F) (<cit.>). In particular 𝐖_𝐀(F) is the subgroup of Γ-invariant elements in W_M. We call W := N_G(A)/M the relative Weyl group of . It is clear that res and cores are equivariant under the action of W.
Let Φ_F⊂ X^*(𝐀) denote the set of restrictions of elements of Φ_F̅ to 𝐀. The elements of Φ_F are called the relative roots of 𝐆 with respect to 𝐀. We denote by Q(Φ_F) the ℤ-span of Φ_F in X^*(𝐀). Then Φ_F forms a (possibly non-reduced) root system in Q(Φ_F)_ℚ. Since 𝐆 is quasi-split, Φ_F̅ does not intersect the kernel of the restriction map. The elements of Φ_F̅ that restrict to the same element of Φ_F form a single Γ-orbit. The restrictions obtained from the Γ-orbits of Δ_F̅ constitute a base Δ_F for Φ_F (<cit.>) and we denote by Φ_F^+ the corresponding positive root system. The natural action of W on X^*(𝐀) identifies it with the Weyl group of the root system of relative roots. To each root α∈Φ_F, there is by definition an element α^∨ in the vector space dual of Q(Φ_F)_ℚ. The totality Φ_F^∨ of these elements α^∨ naturally forms a root system (<cit.>). We refer to Φ_F^∨ as the set of relative coroots of 𝐆. The set {α^∨ | α∈Φ_F^+} is then a system of positive (co)roots for Φ_F^∨. The subset Δ_F^∨ = {φ(α) | α∈Δ_F}, where φ(α) = α^∨ if 2α∉Φ_F and φ(α) = 1/2α^∨ if 2α∈Φ_F, is a base for the positive relative coroots (<cit.>). By <cit.>, Φ_F^∨ embeds naturally into X_*(𝐀). The quadruplet (X^*(𝐀), Φ_F, X_*(𝐀), Φ_F^∨ ) thus constitutes a root datum and will be referred to as the relative root datum of 𝐆. See also <cit.>.
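For example, if 𝐆 is split over F then 𝐀 = 𝐌, Γ acts trivially and the absolute and relative data coincide: for 𝐆 = GL_n with 𝐀 the diagonal torus one has X^*(𝐀) = X_*(𝐀) = ℤ^n, Φ_F = Φ_F̅ = { e_i - e_j | i ≠ j }, Δ_F = { e_i - e_i+1}, and W = W_M = S_n acting by permutation of the coordinates.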
§.§ Orderings
In this subsection, we work with an abstract root datum first and then specialize the notations to the situation of the previous subsection. This is done to address the absolute and relative cases simultaneously. The notations for abstract datum will also be used in <ref>.
Let Ψ = (X, Φ , X ^∨ , Φ^∨ ) be a root datum. The perfect pairing X^∨× X → given as part of this datum will be denoted by ⟨ - , - ⟩. Given α∈Φ, β∈Φ^∨, we denote by α^∨∈Φ ^∨, β ^ ∨∈Φ the associated elements under the bijection ΦΦ^∨ given as part of Ψ. We let W_Ψ denote the Weyl group of Ψ. If α is in Φ or Φ^∨, we denote by s_α∈ W_Ψ the corresponding reflection.
Let Q be the span of Φ in X, Q^∨ the span of Φ^∨ in X^∨, X_0 the subgroup of X orthogonal to Φ^∨ and P ⊂ Q_ = Q ⊗_ the -dual of Q^∨. Then Q ⊂ P are lattices in Q_. We define X_0^∨, P^∨ in an analogous fashion. We refer to Q (resp. P, Q^∨, P^∨) as the root (resp. weight, coroot, coweight) lattice. The groups P / Q, P^∨/ Q^∨ are in duality and finite. It is clear that the action of W_Ψ preserves Q, P, Q ^∨, P^∨. If χ∈ X_0, χ - s_αχ = ⟨α^∨ , χ⟩α = 0 for all α∈Φ and thus W_Ψ acts trivially on X_0. Similarly it acts trivially on X_0^∨.
By <cit.>, the subgroup Q + X_0 of X has finite index in X and X_0∩ Q is trivial. Thus each χ∈ X can be written uniquely as χ_0 + χ_1 for χ_0∈ X_0,ℚ, χ_1∈ Q_ℚ. We refer to χ_0 as the central component of χ. As ⟨λ, χ_1⟩ = ⟨λ, χ⟩ for all λ∈ Q^∨ and ⟨λ, χ⟩∈ℤ as χ∈ X, we see that χ_1∈ P for every χ∈ X. There is thus a well-defined map X → P and its kernel is easily seen to be X_0. We call the map X → P the reduction modulo X_0 and χ_1 the reduction of χ modulo X_0. We similarly define these notions for X^∨.
It is however not true in general that X ⊂ X_0 + P e.g., consider the root datum of _2.
Let Δ⊂Φ be a base for Φ giving a positive system Φ^+ for Φ, Δ^∨ a base for the corresponding positive system for Φ^∨, S the set of reflections associated to Δ^∨ and ℓ : W _ Ψ→ the resulting length function. We say that λ∈ X^∨ is dominant (resp., antidominant) if for all α∈Δ, we have ⟨λ , α⟩≥ 0 (resp., ⟨λ , α⟩≤ 0) and we denote the set of such λ by (X^∨)^+ (resp., (X^∨)^-). It is clear that λ∈ (X^∨)^+ if and only if ⟨λ , β^∨⟩≥ 0 for all β∈Δ^∨ (since any element of Δ can be written as β^∨ or β^∨ / 2 for some β∈Δ^∨). We similarly define dominant elements in P^∨ and denote their collection by (P^∨)^+. Then λ∈ X^∨ is dominant if and only if its image λ̅∈ P^∨ under reduction modulo X_0^∨ is dominant.
There exists a partial ordering ≽ on X^∨ which also depends on the choice of basis Δ^∨. It is defined by declaring λ≽μ for λ , μ∈ X^∨ if
λ - μ = ∑ _ β∈Δ ^ ∨ n_ββ
for some non-negative integers n_β∈. In particular, λ and μ are required to have the same central component. We say that λ is positive with respect to ≽ if λ≽ 0 and negative if λ≼ 0. We similarly define the ordering ≽
for P^∨. It is easily seen that λ≽μ
for λ , μ∈ X^∨ iff λ, μ have the same central component and λ̅≽μ̅
where λ̅, μ̅∈ P^∨ denote respectively the reductions of λ , μ.
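To illustrate these notions in the root datum of GL_2 (so X = X^∨ = ℤ^2, Φ^∨ = {± (1,-1) }, Δ^∨ = { (1,-1) }): the dominant elements of X^∨ are the pairs (a,b) with a ≥ b, one has (2,0) ≽ (1,1) ≽ (0,2) since each consecutive difference equals (1,-1), while (1,0) and (0,2) are incomparable because their central components (determined by a+b) differ.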
Let w ∈ W _ Ψ, β∈Δ^∨ be such that ℓ(w) = ℓ( w s_β) + 1. Then w β is negative.
Let V = Q^∨⊗. Then Φ ^ ∨ embeds in V and (V, Φ^∨) is a root system. Let Φ ' ⊂Φ the set of all indivisible roots. Then (V, Φ') is a reduced root system with the same Weyl group W_Ψ and Δ^∨⊂Φ' is a base for Φ'. The result then follows by <cit.>.
In general, a dominant λ∈ X^∨ need not be positive (consider λ∈ X_0 ^ ∨) and a positive λ need not be dominant (cf. the `dangerous bend' in <cit.>). We however have the following result.
λ in X^∨ or P^∨ is dominant if and only if for all w ∈ W_Ψ, λ≽
w λ.
This is essentially <cit.> where it is proved in the setting of root systems and where the ordering ≻ is defined by taking positive real coefficients.
We provide the necessary modifications. Since both the dominance relation and ≽ on X ^∨ are compatible modulo X_0 ^ ∨
and since the action of W_Ψ on X^∨ preserves central components,
the claim for X^∨ follows from the corresponding claim for P^∨. So let λ∈ P^∨. Since λ - s_βλ = ⟨λ , β^∨⟩β for any β∈Δ^∨ (see eq. (<ref>)), we see that λ is dominant if and only if λ≽
s_βλ for all β∈Δ^∨. So it suffices to show that λ≽ w λ for all w ∈ S implies the same for all w ∈ W. This is easily proved by induction on ℓ(w). Write w = w' s_β where β∈Δ ^∨ and ℓ(w) = ℓ(w') + 1. Then
λ
- wλ = λ - w' λ + w'( λ - s_βλ) .
Now λ - w' λ is positive by induction hypothesis. On the other hand, w'(λ - s_βλ ) = w (s_βλ - λ) = - ⟨λ, β ^∨⟩ wβ. Since -wβ∈ Q^∨ is positive by
Lemma <ref> and ⟨λ , β ^∨⟩∈_≥ 0 since λ≽ s_βλ, we see from (<ref>) that λ≽
w λ. This completes the induction step.
We now specialize back to the notation of <ref>. If λ, μ∈ X_*( 𝐀), we write λ≽μ to denote the ordering with respect to the relative root datum. If λ, μ∈ X_*(𝐌), we write λ≽_Mμ to emphasize that the ordering is with respect to the absolute root datum.
The set of dominant relative (resp., absolute) cocharacters is denoted X_*(𝐀) ^+ (resp., X_*(𝐌)^+). Since res(Δ_F̅ ) = Δ_F, cores induces an inclusion X_*(𝐀)^+↪ X_*(𝐌)^+. We denote by X_*(𝐀)_0, X_*(𝐌)_0 the groups orthogonal to Δ_F, Δ_F̅ respectively. Then X_*(𝐀)_0 = X_*(𝐌)_0 ^ Γ.
Recall that we denote by W the relative Weyl group for . Let S := { s_α | α∈Δ_F} be the set of simple reflections and ℓ = ℓ_F : W → the resulting length function. The longest Weyl element w_∘∈ W is defined to be the unique element which attains the maximum length in W. Then w_∘ is also maximal under Bruhat ordering and is the unique element of W satisfying w_∘·Δ_F = - Δ_F (as a set). We have w_∘^2 = id_W. For each λ∈ X_*( 𝐀 ), we define λ ^ opp := w_∘λ. Then for λ∈ X_*(𝐀) ^ +, λ^opp is the unique element in the Weyl orbit of λ that lies in X_*(𝐀)^-. Moreover
λ≽μ⟺ - λ ^ opp≽ - μ ^ opp
for any λ , μ∈ X_*(𝐀 ) since - w_∘ ( λ - μ ) ≽ 0.
We will say that w_∘ = - 1 as an element of W if w_∘ (α) = - α for all α∈Δ_F. We can similarly define λ^opp for any λ∈ X_*(𝐌). This is compatible with cores by the following.
w_∘ is also the longest element in W_M.
Since w_∘∈ W = (W_M)^Γ, the action of w_∘ on Φ_F̅ is Γ-equivariant. In particular, w_∘ preserves Γ-orbits. Since restriction res: Φ_F̅→Φ_F is W-equivariant and sends positive (resp., negative) absolute roots to positive (resp., negative) relative roots, we see that w_∘·Δ_F̅ = - Δ_F̅.
If w_∘ = - 1 as an element of W, then λ + λ^opp∈ X_*(𝐀)_0 for any λ∈ X_*(𝐀). Moreover if λ≽μ for some μ∈ X_*(𝐀), then λ + λ^opp = μ + μ^opp.
The first claim follows since ⟨λ + λ^opp , α⟩ = ⟨λ , α + w_∘α⟩ for any α∈Δ_F. The second claim follows since λ - μ is a positive integral sum of positive coroots and applying - w_∘ acts as identity on this sum.
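For example, for 𝐆 = GL_2 one has W = {1, s_α} and w_∘ = s_α sends α to -α, so w_∘ = -1 in the above sense; for λ = (a,b) ∈ X_*(𝐀) = ℤ^2 we get λ^opp = (b,a) and λ + λ^opp = (a+b, a+b) ∈ X_*(𝐀)_0, as the lemma predicts. By contrast, for GL_n with n ≥ 3 the longest element sends e_i - e_i+1 to -(e_n-i - e_n-i+1), so w_∘≠ -1.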
§.§ Iwahori Weyl group
From now on, we denote X_*(𝐀) by Λ. We fix throughout a smooth reductive group scheme 𝒢 over 𝒪_F such that 𝐆 equals the generic fiber 𝒢_F of 𝒢.
Then K := 𝒢(𝒪_F) is a hyperspecial maximal compact subgroup of G = 𝒢(F). Let A^∘ := A ∩ K, M^∘ := M ∩ K. As 𝐆 is unramified, A^∘, M^∘ are the unique maximal compact open subgroups of A, M respectively.
In particular, these do not depend on 𝒢. Moreover W is identified with (K ∩ N_G(A) ) /M^∘. We have isomorphisms Λ∼→ A / A^∘∼→ M / M^∘ induced respectively by λ↦ϖ^λ A^∘, A ↪ M (see <cit.>). We denote by
v : A / A^∘→Λ
the inverse of the negative
isomorphism Λ→ A/ A^∘, λ↦ϖ^-λ A^∘.
The quotient W_I := N_G(A) / M^∘ is called the Iwahori Weyl group of G. It is naturally isomorphic to the semi-direct products M/M^∘⋊ W ≃ A/A^∘⋊ W (<cit.>) and we identify W_I with these groups. The mapping (<ref>) induces a further
isomorphism v : W_I = A/A^∘⋊ W ∼⟶Λ⋊ W where ϖ^λ A^∘∈ W_I for λ∈Λ is identified with (-λ, 1).
Let Q ^ ∨ _ F = Q(Φ_F^∨ ) denote the relative coroot lattice. The subgroup W_aff : = Q_ F ^ ∨⋊ W of Λ⋊ W is called the (relative) affine Weyl group.
The group W_aff acts on the vector
space Q^∨_F⊗ by translations and it is customary to denote the element (λ,1) ∈ W_aff by t_λ or t(λ). Similarly when the coroot lattice Q^∨_F is viewed as a subgroup of W_aff, it is written as t ( Q ^∨ _F ). More generally, we denote the element ( λ , 1 ) ∈Λ⋊ W by t(λ) and consider it as a translation of Λ⊗. If Φ _ F is irreducible, α _ 0 ∈Φ_F is
the highest root and s_α_0∈ W denotes the reflection associated with α_0, the group W_aff is a Coxeter group with generators S_aff : = S ⊔{ t_α_0 ^ ∨ s_α_0}. In general, W_aff is a Coxeter group whose set of generators S_aff is obtained by extending the set S by the reflections associated to the simple affine root of each irreducible component of Φ_F. In particular, its rank (as a Coxeter group) is the number of irreducible components of Φ_F added to the rank of W.
We denote by ℓ : W_aff→ the extension of ℓ : W → and by ≥ the strong Bruhat order on W _ aff induced by the set S_aff. Via the isomorphism W _IΛ⋊ W, we identify W_aff as a subgroup of W_I. The quotient Ω := W_I / W _aff acts on W_aff by automorphisms (of Coxeter groups) and one has an isomorphism W_I≃ W_aff⋊Ω. One extends the length function to a function
ℓ : W_I→
by declaring the length of elements of Ω to be 0. Similarly, the strong Bruhat ordering on W_aff is extended to W_I by declaring w ρ≥ w' ρ' for w, w' ∈ W_aff, ρ, ρ' ∈Ω if w ≥ w' and ρ = ρ'. Each double coset in W \ W_I / W
has a unique minimal length representative in W_I via which we can define a partial ordering on the double cosets. Under the identification Λ^+≃ W \ W_I / W, the ordering ≽ restricted to Λ^+ is identified with the ordering on representatives in W \ W _I / W. See <cit.>.
See <cit.> and <cit.> for the role of buildings in defining these groups.
Buildings will be briefly used in <ref>.
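For instance, for 𝐆 = GL_2 with 𝒢 = GL_2,𝒪_F one has Λ = ℤ^2, Q^∨_F = ℤ(1,-1) and W = S_2, so W_I≃ℤ^2⋊ S_2, the affine Weyl group W_aff = t(Q^∨_F) ⋊ S_2 is the infinite dihedral group generated by s_α and t(α^∨)s_α, and Ω≃Λ / Q^∨_F≃ℤ, the isomorphism being induced by (a,b) ↦ a + b, i.e. by the valuation of the determinant.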
§.§ The Satake transform
Fix a Haar measure μ_G on G such that μ_G(K)=1. For a ring R, let ℋ_R(K \ G / K) be the Hecke algebra of level K with coefficients in R (Definition <ref>) and R⟨ G / K⟩ be the set of finite R-linear combinations of cosets in G / K. For σ∈ G, we denote by ch(K σ K ) ∈ℋ_R( K \ G / K ) the characteristic function of K σ K, which we will occasionally also write simply as (K σ K). For λ∈Λ, denote by e^λ the element corresponding to λ in the group algebra ℤ[Λ] and e^Wλ the (formal) sum ∑_μ∈ W λ e^μ. This allows one to convert from additive to multiplicative notation for cocharacters. The half sum of positive roots δ := 1/2∑ _α∈Φ_F̅^+α is an element of P (Φ_F̅) by <cit.>. For λ∈Λ = X_*(𝐀), let ⟨λ, δ⟩ denote the quantity ⟨cores(λ) , δ⟩ = ⟨λ, res(δ ) ⟩.
Let ℛ=ℛ_q denote the ring ℤ[q^±1/2] ⊂ℝ where q^1/2∈ℝ_>0 denotes a root of x^2-q and q^-1/2 denotes its inverse. Denote by p: G / K → K \ G / K the natural map and p^*: ℋ_ℛ(K \ G / K) →ℛ⟨ G / K⟩ the induced map that sends the characteristic function of K σ K to the formal sum
of left cosets γ K contained in K σ K. Let ℐ: ℛ⟨ G / K⟩→ℛ[Λ] denote the ℛ-linear map defined by ( ϖ ^ λ n K ) ↦ q^-⟨λ, δ⟩ e^λ for λ∈Λ, n ∈ N.
This is well defined by <cit.> (since M K / K ≃ M / M^∘≃Λ). The composition
𝒮 = ℐ∘ p^* : ℋ_ℛ(K \ G / K) →ℛ⟨ G / K ⟩→ℛ [Λ]
is then a homomorphism of ℛ-algebras known as the Satake transform. Its image lies in the Weyl invariants ℛ[Λ]^W. By
<cit.> or <cit.>, the induced map 𝒮_ℂ over ℂ is an isomorphism onto ℂ[ Λ ]^W. We note that { ( K ϖ ^λ K ) | λ∈Λ^+} is a basis for ℋ_ℛ(K \ G / K ) by Cartan decomposition. We are therefore interested in the Satake transform of such functions. For λ∈Λ^+, write
𝒮(K ϖ^λ K )
= ∑_μ∈Λ q^- ⟨μ, δ⟩ a_λ(μ) e^μ
where a_λ(μ) ∈ℤ_≥ 0. By definition, a_λ ( μ ) is equal to the number of distinct left cosets ϖ^μ n K for n ∈ N such that ϖ ^ μ n K ⊂ K ϖ ^ λ K. The W-invariance of 𝒮 implies that q ^ - ⟨μ_1 , δ⟩ a_λ(μ_1) = q ^ - ⟨μ_2 , δ⟩ a_λ(μ_2 )
for all μ_1, μ_2∈Λ such that W μ_1 = W μ _ 2. Let ≽ denote the same partial ordering as in <ref>.
For λ , μ∈Λ^+, a_λ(μ ) ≠ 0 only if λ≽μ. Moreover, a_λ(λ^opp) = 1.
Set κ = λ ^ opp and ν := μ ^ opp. Then - κ , - ν∈Λ^+. Since the image of 𝒮 is W-invariant, a_λ ( μ ) ≠ 0 if and only if a_λ(ν) ≠ 0. By definition, this is equivalent to ϖ^ν N K ∩ K ϖ^λ K ≠∅. Now ϖ^ν N = N ϖ ^ ν as A normalizes N and Kϖ^λ K = K ϖ^κ K as K ∩ N_G(A) surjects onto W. Thus
a_λ(μ) ≠ 0 ⟺ K ϖ^κ K ∩ N ϖ^ν K ≠∅ .
By <cit.> and the identification of ≽ on Λ^+ with the Bruhat ordering on W \ W_I / W,
we get that K ϖ^κ K ∩ N ϖ ^ν K ≠∅ if and only if - κ≽ - ν [the negative sign arising from the normalization (<ref>)].
But the last condition is the same as λ≽μ by (<ref>). This establishes the first part. By <cit.>, K ϖ^κ K ∩ N ϖ ^ κ K = ϖ^κ K i.e., the only coset of the form ϖ^κ n K where n ∈ N such that ϖ^κ n K ⊂ K ϖ^λ K is ϖ^κ K. The second claim follows.
A weaker version of the above appears in <cit.>.
See also <cit.>.
For λ∈Λ^+, 𝒮 ( K ϖ ^ λ K ) - q ^ ⟨λ, δ⟩ e^W λ lies in the ℛ-span of { e ^ W μ | μ∈Λ^+ , μ≼λ , μ≠λ}.
Since w_∘δ = - δ by Lemma <ref>, we see that
⟨λ^opp , δ⟩ = ⟨λ , w_∘δ⟩ = - ⟨λ , δ⟩.
The second part of
Proposition <ref> therefore implies that
q^- ⟨λ^opp , δ⟩ a_λ(λ^opp) = q ^ ⟨λ, δ⟩ .
Thus the coefficient of e^Wλ in 𝒮(K ϖ^λ K ) is q^⟨λ , δ⟩. The claim now follows by the first part of <ref>.
The Satake transform induces an isomorphism ℋ_ℛ (K \ G / K ) ≃ℛ [Λ]^W of ℛ-algebras.
Fix λ∈Λ^+. We wish to show that e^W λ lies in the image of 𝒮. Let 𝒰_0 = {μ∈Λ^+ | μ≼λ} and inductively define 𝒰_k as the set 𝒰_{k-1}∖max 𝒰_{k-1} for k ≥ 1. It is clear that 𝒰_0 and hence each 𝒰_k is finite.
By Corollary <ref>, f_1 : = 𝒮 ( q ^- ⟨λ, δ⟩ ( K ϖ ^λ K ) ) - e^Wλ∈ℛ [ Λ ] ^ W equals a sum ∑ c_λ(μ ) e^W μ where μ runs over the set 𝒰_1 = {μ∈Λ^+ | μ≼λ , μ≠λ} and c_λ(μ) ∈ℛ.
By Corollary <ref> again,
f_2 := 𝒮 ( q ^ - ⟨λ , δ⟩ ( K ϖ^λ K ) - ∑ _μ∈max𝒰_1 q ^ - ⟨μ , δ⟩ c_λ ( μ ) ( K ϖ ^ μ K ) ) - e^W λ
is a linear combination of e ^ W μ∈ℛ[Λ]^W for μ∈𝒰_2. Continuing this process, we obtain a sequence of elements f_k∈ℛ[Λ]^W for k ≥ 1 that are supported on 𝒰_k and such that e^W λ + f_k lies in the image of 𝒮. Since 𝒰_k are eventually empty, f_k are eventually zero and we obtain the desired claim.
Suppose w_∘ = -1 as an element of W. Then the transposition operation on ℋ_ℛ(K \ G / K ) corresponds under the Satake transform to the negation of cocharacters on ℛ[Λ] ^ W.
For λ∈Λ^+, (K ϖ^λ K ) ^t = ( K ϖ^κ K ) where κ := -λ^opp∈Λ^+. By Lemma <ref>, κ = λ + λ_0 for some λ_0∈ X_*(𝐀)_0. Since ϖ^λ_0 is central in G, (Kϖ^κK )= (K ϖ^λK ) * (Kϖ^λ_0 K) and as W λ_0 = λ_0,
𝒮( K ϖ^κ K ) = 𝒮( K ϖ^λ K ) e^λ_0 .
Now for any μ∈Λ such that a_λ(μ) ≠ 0, we have λ≽μ by Proposition <ref> and Lemma <ref>. Thus - μ^opp = μ + λ_0 by Lemma <ref>. The result now follows since e^Wμ· e^λ_0 = e^W (μ + λ_0 ) = e^ W ( - μ ).
For λ∈Λ ^ +, we call the element q^⟨λ, δ⟩ e^W λ∈ℛ[Λ]^W the leading term of the Satake transform of ( K ϖ^λ K ) and the number q^⟨λ, δ⟩ its leading coefficient. If gK ⊂ K ϖ^λ K is a coset, we call the unique cocharacter μ∈Λ such that gK = ϖ^μ n K for some n ∈ N the shape of the coset gK. The shape μ of any g K ⊂ K ϖ^λ K for λ∈Λ^ + satisfies λ≽μ by the results above.
Proposition <ref> and most of its corollaries may be found in several places in literature,
though the exact versions we needed are harder to locate. We have chosen to include proofs primarily to illustrate our conventions, which will also be useful in computations in Part II.
Cf. <cit.>.
One can strengthen Proposition <ref> to a_λ(μ) ≠ 0 ⟺λ≽μ. See <cit.>.
§.§ Examples
In this subsection, we provide a few examples of Satake transform computations for _2 to illustrate our conventions in a simple setting.
Let 𝐆 = GL_{2, F}, let 𝐀 = 𝔾_m×𝔾_m ↪𝐆 be the standard diagonal torus and K = GL_2 ( 𝒪_F ). For i = 1, 2, let e_i : 𝐀→𝔾_m be the characters given by diag(u_1, u_2) ↦ u_i and f_i : 𝔾_m→𝐀 be the cocharacters that insert u into the i-th component. Then Φ = {± ( e_1 - e_2 ) } and Λ = ℤ f_1⊕ℤ f_2. We will denote λ = a_1 f_1 + a_2 f_2∈Λ by (a_1 ,a_2 ). We take χ : = e_1 - e_2∈ X^*(𝐀) as the positive root, so that δ = χ/2 and Λ^+ is the set of (a_1, a_2 ) such that a_1≥ a_2. Let α : = e^f_1, β : = e^f_2 considered as elements of the group algebra ℤ[ Λ ]. Then ℛ [Λ] ^W = ℛ [α^±, β^± ]^{S_2} where the non-trivial element of S_2 acts via α↔β.
Let λ = f_1∈Λ^+. Then
λ^opp = f_2. As is well-known,
K ϖ^λ K = [ 1 0; 0 ϖ ] K ⊔_{κ∈ [𝕜]}[ ϖ κ; 0 1 ] K ,
where [𝕜] ⊂𝒪_F denotes a fixed set of representatives of the residue field 𝕜 of F.
In this decomposition, there is 1 coset of shape f_2 and q cosets of shape f_1. Therefore, we obtain
𝒮 ( K ϖ^λ K )
= q^1/2β + q · q^ -1/2α = q^1/2 ( α + β ) ∈ℛ [ Λ]^W .
Let λ = 2f_1∈Λ^+. Then
λ^opp = 2f_2. It is easy to see that
K ϖ^λ K = [ 1 0; 0 ϖ^2 ] K ⊔_{κ∈ [𝕜] ∖{ 0 }}[ ϖ κ; 0 ϖ ] K ⊔_{κ_1, κ_2∈ [𝕜]}[ ϖ^2 κ_1 + ϖκ_2; 0 1 ] K .
In this decomposition, there is one coset of shape 2f_2, q-1 cosets of shape f_1 + f_2 and q^2 of shape 2f_1. So,
𝒮 ( K ϖ^λ K ) = q β ^2 + (q-1) ·αβ + q^2· q^-1α ^2
= q (α ^2+ β ^2 ) + ( q - 1 ) αβ∈ℛ [ Λ ] ^ W .
One can in fact write an explicit formula for 𝒮 ( K ϖ^λ K) for any λ∈Λ. See <cit.> for a formula
in terms of ℛ-basis α ^mβ ^n of ℛ[Λ].
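As a quick consistency check (not needed in the sequel), the two computations above recover the classical quadratic relation in ℋ_ℛ(K \ G / K): since 𝒮(K ϖ^{f_1} K)^2 = q(α+β)^2 = [ q(α^2 + β^2) + (q-1)αβ ] + (q+1)αβ and 𝒮(K ϖ^{f_1+f_2} K) = αβ (the double coset K ϖ^{f_1+f_2} K being the single coset ϖ^{f_1+f_2} K with ⟨ f_1+f_2 , δ⟩ = 0), the injectivity of 𝒮 gives
(K ϖ^{f_1} K) * (K ϖ^{f_1} K) = (K ϖ^{2f_1} K) + (q+1) (K ϖ^{f_1+f_2} K) .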
§.§ Macdonald's formula
The Satake transform is not explicit in the sense that the coefficients of the non-leading terms are not explicit. In general, the coefficients can be quite cumbersome expressions in q. There is however the following formula due to I.G. Macdonald <cit.> (see also <cit.>).
Suppose 𝐆 is split and Φ_F̅ = Φ_F is irreducible. Then for any λ∈Λ ^+,
𝒮 ( K ϖ^λ K ) = ( q^{⟨λ , δ⟩} / W_λ (q^{-1}) ) ∑_{w ∈ W} e^{w λ}∏_{α∈Φ^+} (1 - q^{-1} e^{-w α^∨})/(1 - e^{-w α^∨})
where W_λ(x) : = ∑_w ∈ W^λ x^ℓ(w) denotes the Poincaré polynomial of the stabilizer W^λ⊂ W of λ.
For arbitrary reductive groups, there is a similar but slightly more complicated expression as it takes into account divisible/multipliable roots and different contributions of root group filtrations. We refer the reader to <cit.> and <cit.> for details. These formulas however will not be needed.
Retain the notations of <ref>. We have e^-χ^∨ = α^-1β and
(1 - q^{-1} e^{-χ^∨})/(1 - e^{-χ^∨}) = (α - q^{-1}β)/(α - β) ,  (1 - q^{-1} e^{χ^∨})/(1 - e^{χ^∨}) = (β - q^{-1}α)/(β - α) .
For λ = 2 f_1, we compute
𝒮 ( K ϖ^λ K ) = q ( α^2·(α - q^{-1}β)/(α - β) + β^2·(β - q^{-1}α)/(β - α) )
= q ( α^2 + αβ + β ^2 ) - αβ
= q ( α^2 + β^2 ) + (q-1) αβ
which agrees with Example <ref>.
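Similarly, for λ = f_1 the stabilizer W^λ is trivial, so W_λ(q^{-1}) = 1 and the formula reads
𝒮 ( K ϖ^{f_1} K ) = q^{1/2}( α·(α - q^{-1}β)/(α - β) + β·(β - q^{-1}α)/(β - α) ) = q^{1/2}(α + β),
in agreement with Example <ref>.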
§.§ Representations of Langlands dual
Let 𝐆̂ denote the dual group of 𝐆 considered as a split reductive group over . Let 𝐌̂⊂𝐆̂ denote the maximal torus such that X_*(𝐌̂ ) = X^*(𝐌). We let 𝐏̂ be the Borel subgroup of 𝐆̂ corresponding to Φ̂_F̅^+ := (Φ_F̅ ^ ∨)^+⊂ X^*(𝐌̂) = X_*(𝐌). The action of Γ on based root datum of 𝐆 together with a choice of pinning determines an action of Γ on 𝐆̂ which is unique up to an inner automorphism by 𝐌̂. We define the Langlands dual to be ^L𝐆 = ^ L 𝐆_F : = 𝐆̂⋊Γ considered as a disconnected locally algebraic group over . We refer the reader to <cit.> for a detailed treatment of this group. See also <cit.>.
The subscript F in the notation ^L𝐆_F is not meant to suggest base change of algebraic groups but rather the fixed field for the Galois group Γ. If E / F is an unramified field extension, then ^L𝐆_E denotes the subgroup 𝐆̂⋊Γ_E of ^L𝐆_F.
Since the weights of algebraic representations of 𝐆̂ are elements of X^*(𝐌̂ ) = X_* ( 𝐌 ), we also refer to elements of X_*(𝐌) as coweights.
For each dominant coweight λ∈ X_*(𝐌)^+, there exists a simple representation (π, V_λ ) of 𝐆̂ unique up to isomorphism such that λ≽_Mμ for any coweight μ appearing in V_λ (<cit.>). Since 𝐆̂ is defined over , so is the representation V_λ (<cit.>). For μ is a coweight of V_λ, we denote by V_λ^μ the corresponding coweight space.
Let φ : 𝐆̂→𝐆̂ be an endomorphism that sends 𝐏̂, 𝐌̂ to themselves and preserves λ i.e., λ∘φ = λ as maps 𝐌̂→𝔾_m.
Then the representation of 𝐆̂ obtained via the composition π∘φ also has dominant coweight λ and is therefore isomorphic to V_λ. Since End(V_λ ) ≃ (<cit.>), there is a unique isomorphism
T_φ : (π, V_λ ) ( π∘φ , V_λ )
of 𝐆̂-representations such that T _ φ is identity on the highest weight space V _λ ^λ. In other words,
T_φ : V_λ→ V_λ is determined by the conditions that T_φ( gv) = φ(g) T_φ (v) for all g ∈𝐆̂(), v ∈ V_λ and that T_φ : V_λ^λ→ V_λ^λ is the identity map. Let us define (g, φ ) : V_λ→ V_λ to be the mapping v ↦ g · T_φ ( v ) for any g ∈( ), v ∈ V_λ. If ψ : → is another such automorphism, it is easily seen by the characterizing property of these maps that
T_ψ∘ T_φ = T_ψ∘φ, so that
(h, ψ) ( ( g , φ ) (v) ) = ( h ψ(g) , ψ∘φ ) (v)
for all h, g ∈𝐆̂, v ∈ V_λ.
Thus if Ξ⊂Aut( 𝐆̂ ) is a subgroup of automorphisms preserving 𝐏̂, 𝐌̂ and λ, then the construction just described determines an action of 𝐆̂⋊Ξ on V_λ extending that of 𝐆̂.
Now suppose that the coweight λ lies in Λ^+ = X_*(𝐀) ^+↪ X_*(𝐌)^+ i.e., λ is Γ-invariant. Since the action of Γ on 𝐆̂ preserves 𝐌̂, 𝐏̂ by definition, one can extend the action of 𝐆̂ on V_λ to an action of ^L𝐆_F on V_λ by taking Ξ = Γ in the discussion above. Thus for λ∈Λ^+, (π , V_λ ) is naturally a representation of ^L𝐆.
Note that the action of Γ on V_λ may not be trivial, even though it is required to be so on the highest weight space. See <cit.> or <cit.> for an example.
Let γ denote the Frobenius element in Γ. Recall that the trace of a finite dimensional algebraic ℂ-representation (ρ, V) of ^L𝐆_F is defined to be the map
tr_ρ : 𝐌̂(ℂ) →ℂ , m̂↦ tr ( ρ(m̂, γ) ) .
By <cit.> and its proof, tr_ρ is naturally an element of ℂ[Λ]^W. Since the weight spaces V^μ of V are defined over ℚ and (1, γ) acts on these spaces by finite order rational matrices, the trace of ρ(1,γ) on V^μ is necessarily integral. Hence the trace of ρ(m̂,γ) = ρ(m̂,1) ρ(1,γ) restricted to V^μ is an integral multiple of μ(m̂) for any m̂∈𝐌̂(ℂ). It follows that tr_ρ belongs to the sub-algebra ℤ[Λ]^W of ℂ[Λ]^W. In particular, the trace of ⋀^i V_λ for any λ∈Λ^+ lies in ℤ[Λ]^W for all i. Cf. <cit.>.
Let λ∈Λ^+. The Satake polynomial 𝔖_λ(X) ∈ℤ[ Λ ]^W [ X ] is defined to be the reverse characteristic polynomial of 𝐌̂⋊γ acting on V_λ. For s ∈ (1/2)ℤ, the Hecke polynomial ℌ_λ, s(X) ∈ℋ_ℛ(K \ G / K ) [X] centered at s is defined to be the unique polynomial that satisfies 𝒮 ( ℌ_λ, s ( X) ) = 𝔖_λ( q^{-s} X) ∈ℛ [ Λ ]^W [X].
In other words, 𝔖_λ(X) ∈ [ Λ ] ^ W [ X ] is the polynomial of degree d = _ V _ λ such that the coefficient of X^k in 𝔖_λ(X) is (-1)^k times the trace of 𝐌̂⋊γ on ⋀^ k V (λ ) and ℌ_λ, s is the polynomial such that the Satake transform of the coefficient of X^k in ℌ_λ, s (X) is q^- k s times the coefficient of X^k in 𝔖_λ ( X ).
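For instance, take 𝐆 = GL_2 and λ = f_1 in the notation of <ref>. Here γ acts trivially, V_λ is the standard two-dimensional representation with coweights f_1 and f_2, so 𝔖_λ(X) = (1 - α X)(1 - β X) = 1 - (α + β)X + αβ X^2. Since 𝒮(K ϖ^{f_1} K) = q^{1/2}(α+β) and 𝒮(K ϖ^{f_1+f_2} K) = αβ, the Hecke polynomial centered at s is
ℌ_λ, s(X) = 1 - q^{-s-1/2}(K ϖ^{f_1} K) X + q^{-2s}(K ϖ^{f_1+f_2} K) X^2 .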
The coweights we are interested in for a given Shimura variety for a reductive group 𝐆 over ℚ arise out of the natural cocharacter μ_h : 𝔾_{m,ℂ}→𝐆_ℂ associated with the Shimura datum for 𝐆. The 𝐆(ℂ)-conjugacy class of this cocharacter is defined over a number field E, known as the reflex field of the datum. At a rational prime ℓ where the group is unramified, choose a prime v of E above it. Then E_v / ℚ_ℓ is unramified and the orbit of μ_h under the (absolute) Weyl group of 𝐆_{E_v} is stable under the action of the unramified Galois group Γ_{E_v} of E_v. By <cit.>, we can
pick a unique dominant cocharacter λ (with respect to a Borel defined over E_v) of the maximal split torus in 𝐆_{E_v} whose (relative) Weyl group orbit is identified with the Γ_{E_v}-stable absolute Weyl group orbit of cocharacters μ_h. This λ is the coweight whose associated representation we are interested in.
In the situation above, F is intended to be E_v.
If E_v≠ℚ_ℓ, the Satake polynomial corresponds to a polynomial over the Hecke algebra of 𝐆(E_v) whereas the Hecke operators that act on the cohomology of the Shimura variety need to be in the Hecke algebra of 𝐆(ℚ_ℓ). This is remedied by considering traces of (𝐌̂⋊γ )^{[E_v: ℚ_ℓ]} instead. This makes sure that the traces on ⋀^k V_λ belong to ℤ[ Λ_{ℚ_ℓ} ]^{W_{ℚ_ℓ}} where Λ_{ℚ_ℓ}, W_{ℚ_ℓ} are defined relatively for 𝐆 over ℚ_ℓ. The exponentiation by [E_v : ℚ_ℓ] here can then
be interpreted as a base change morphism from the Hecke algebra of 𝐆_{E_v} to the Hecke algebra of 𝐆_{ℚ_ℓ}.
The Hecke polynomial of <ref> is obtained in this manner.
§.§ Minuscule coweights
The representations of ^L𝐆_F that will be interested in will be associated to certain dominant cocharacters that arise out of a Shimura data. Such cocharacters satisfy the special condition of being `minuscule'. In this subsection, we recall this notion and record some results scattered over several exercises of <cit.>.
The reader may consult
<cit.> and
<cit.>
for general reference of the material provided here. Cf. <cit.>.
It will also be convenient to record our results in terms of abstract root data. Fix Ψ an abstract root datum (X, Φ, X^∨, Φ^∨ ) and retain the notations introduced in <ref> before Lemma <ref>. We assume throughout that Φ is reduced.
Let λ be an element in X^∨ or P^∨. We say that λ is minuscule if ⟨λ , α⟩∈{ 1 , 0 , - 1 } for all α∈Φ.
A subset S of X^∨ or P^∨ is said to be saturated or Φ-saturated if for all x ∈ S, α∈Φ and integers i lying between 0 and ⟨ x , α⟩, we have x - i α ^ ∨∈ S. For λ in X^∨ (resp., P ^ ∨), we define S(λ) to be the smallest saturated subset of X^∨ (resp., P ^ ∨ ) containing λ i.e., S ( λ ) is the intersection of all saturated subsets
in X^∨ (resp., P ^∨) that contain λ.
Given λ∈ X^∨, we will denote its reduction modulo X_0 in P^∨ by λ̅. Similarly given a set S ⊂ X^∨, we denote the set of reductions of its elements by S̅. It is then easy to see that λ∈ X^∨ is minuscule iff λ̅ and S ⊂ X is saturated only if S̅ is. Moreover if λ∈ X^∨, the reduction of S(λ) equals S( λ̅).
If a subset S of X^∨ or P ^∨ is saturated, then s_α (x ) = x - ⟨ x , α⟩α ^∨ belongs to S for all x ∈ S, α∈Φ. Thus any saturated set is W_Ψ-stable. In particular, the orbit W _ Ψλ is contained in S(λ) for any λ in X^∨ or P^∨.
A dominant λ in P^∨ or X^∨ is minuscule if and only if S(λ ) = W_Ψλ.
Let λ∈ X^∨. Then λ is minuscule if and only if λ̅ is, and S(λ) equals W_Ψλ if and only if S(λ) = S(λ̅) equals W_Ψλ̅. It therefore suffices to establish the claim for λ∈ (P^∨)^+. Denote V^∨ = P^∨⊗ℝ, V = Q ⊗ℝ. Then P^∨⊂ V^∨, Q ⊂ V are dual lattices under ⟨ - , - ⟩.
Let ( - , - ) : V^∨× V^∨→ℝ
be a W _ Ψ-invariant pairing. Then V is identified with V^∨, ⟨ - , - ⟩ with ( - , -), Q with
P^∨ and α∈Φ_F̅ with 2 α ^ ∨ / ( α ^ ∨ , α ^ ∨ ). In particular,
( λ , α^∨ ) = ⟨λ, α⟩/ 2 · ( α^∨ , α^∨ ) .
Note that ⟨λ , α⟩ and therefore (λ , α^∨ ) are non-negative for α∈Φ as λ is dominant.
() Suppose S(λ) = W _ Ψλ and suppose moreover for the sake of contradiction that λ is not minuscule. Then there exists α∈Φ ^ + such that k : = ⟨λ , α⟩ > 1. Then ( λ , α^∨ ) = k/2 (α^∨, α^∨ ). Set μ : = λ - α^∨∈ P ^ ∨. Then μ∈ S(λ) by definition. Now
(μ , μ ) = (λ , λ ) - k ( α ^ ∨ , α^∨ ) + ( α^∨ , α^∨) < (λ, λ ) .
Since elements of W_Ψλ must have the same length with respect to ( - , - ), μ∉ W_Ψλ = S(λ), a contradiction. Therefore k ∈{0, 1 } and we deduce that λ is minuscule.
() Suppose that λ is minuscule. For all w ∈ W _ Ψ, ⟨ w λ , α⟩ = ⟨λ , w ^-1α⟩∈{ 1, 0 , - 1 } which implies that w λ - i
α ^ ∨∈{ w λ , s_α ( w λ ) } for integers i lying between 0 and ⟨ w λ , α⟩. Thus W _ Ψλ is saturated and therefore W _ Ψλ = S ( λ ).
Every non-empty saturated subset of the coweight lattice contains a minuscule element.
Retain the notations in the proof of Proposition <ref>. Let S ⊂ P^∨ be a saturated subset. Let λ∈ S be the shortest element i.e., λ := (λ, λ)^1/2 is minimal possible for λ∈ S. We claim that λ is minuscule. Suppose on the contrary that there exist α∈Φ such that ⟨λ, α⟩∉{ 1,0,-1 }. Replacing α with - α if necessary, we may assume that ⟨λ , α⟩ > 1. Then λ - α ^ ∨∈ S by definition and the length calculation in the proof of <ref> shows that λ - α is a shorter element.
Under additional assumptions, one can describe the minuscule elements of X^ ∨ more explicitly. Let Δ = {α_1 , …, α_n} and let ω̅ _1, …, ω̅_ n∈ P ^ ∨ denote the basis dual to the basis Δ of Q. The elements ω̅_i are referred to as the fundamental coweights of Φ. If Φ is irreducible, there exists a highest root (<cit.>)
α̃ = ∑_j=1^n m_α_jα_j∈Φ ^ +
where m_α_j≥ 1 are integers. Let J ⊂{1, …, n } be the subset of indices j such that m_α_j = 1.
For irreducible Φ, {ω̅_j}_j∈ J is the set of all non-zero minuscule elements in (P^∨)^+. These elements form a system of representatives for non-zero classes in P^∨ / Q^∨.
Let λ∈ ( P ^ ∨ ) ^ + be non-zero. Since ω̅ _1, …, ω̅_n is a basis of P^∨, we can write λ = a_1ω̅_1 + … + a_nω̅_n uniquely. Since λ is dominant and non-zero, we have a_1, …, a_n≥ 0 and at least one of these is positive, say a_k. Now λ is minuscule only if a_1 m_α_1 + … + a_n m _ α_n = ⟨λ , α̃⟩ = 1 as both a_k , m_α_k≥ 1. But this can only occur if a_k = 1, k ∈ J and a_i = 0 for i ≠ k. Thus minuscule elements of P ^∨ - { 0 } are contained in the set {ω̅_j}_j ∈ J. Since α̃ is highest, any root ∑_j=1^n p_α_jα_j∈Φ satisfies m_α_j≥ p_α_j and one easily sees that all ω̅_j for j ∈ J are minuscule. The second claim follows by Corollary of Proposition 6 in <cit.>.
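For example, if Φ is irreducible of type A_{n-1}, the highest root is α̃ = α_1 + ⋯ + α_{n-1}, so m_{α_j} = 1 for every j and all of the fundamental coweights ω̅_1, …, ω̅_{n-1} are minuscule. In the notation of <ref> (𝐆 = GL_2), this means that λ = (a_1, a_2) ∈Λ^+ is minuscule precisely when a_1 - a_2∈{0, 1}.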
For λ∈ X^∨, set Σ ( λ ) : = {μ∈ X^∨ | λ≽ w μ for all w ∈ W _ Ψ}. Similarly define Σ(λ) ⊂ P^∨ for λ∈ (P^∨)^+. Then λ∈Σ(λ) by Lemma <ref> and Σ(λ) is easily seen to be saturated. Therefore W_Ψλ⊂ S(λ ) ⊂Σ(λ).
A dominant λ in X^∨ or P^∨ is minuscule if Σ(λ ) = W_Ψλ. The converse holds if Φ is irreducible.
It is clear that Σ(λ) = W_Ψλ is equivalent to Σ( λ̅ ) = W_Ψλ̅ so it suffices to prove these claims for λ∈ ( P ^ ∨ ) ^ +.
() Suppose Σ (λ) = W_Ψλ. As, W _ Ψλ⊂ S(λ) ⊂Σ ( λ ), the equality Σ(λ) = W_Ψλ implies that S(λ) = W_Ψλ which by Proposition <ref> implies that λ is minuscule.
() Suppose Φ is irreducible and λ∈ ( P^∨ ) ^ + is minuscule. Then λ∈ Q^∨ implies λ is zero, since the only non-zero dominant minuscule elements in P^∨ are those fundamental coweights which by Lemma <ref> form representatives of non-zero elements in P^∨/Q^∨. So it suffices to prove the claim for λ∉ Q^∨.
Suppose now on the contrary that there exists a μ∈Σ(λ) - W_Ψλ.
We may assume μ is dominant since Σ(λ) - W_Ψλ is stable under W_Ψ and W_Ψμ contains a dominant element. Since Σ ( λ ) is saturated and contains μ, Σ ( λ ) ⊃ S(μ ). By Corollary <ref> S(μ) contains a minuscule element λ_1. Since all elements of W_Ψλ_1 are minuscule and S(μ) is W_Ψ-stable, we may take λ_1 to be dominant. Since S(μ) ⊂Σ(λ), λ≽_♭λ _1. In particular, λ - λ_1∈ Q ^ ∨. Since λ∉ Q ^ ∨, λ and λ_1 are distinct non-zero dominant coweights that represent the same non-zero class in P ^ ∨ / Q ^ ∨. But this
contradicts the second part of Lemma <ref>. Hence Σ(λ) must equal W_Ψλ. The final claim is immediate.
Now resume the notations of <ref>. Fix λ∈Λ^+ and let V_λ be the irreducible representation of ^L𝐆 of highest weight λ. For each μ∈ X_*(𝐌)^+ with μ≼λ, the dimension of the coweight space V_λ^μ is called the multiplicity of μ in V_λ. Corollary <ref> implies that when λ is minuscule and Φ_F̅ is irreducible, the set of coweights in V_λ is just the Weyl orbit W_Mλ. Since W_M permutes the coweight spaces, the multiplicities of all coweights are 1. If 𝐆 is split, then the action of Γ on 𝐆̂ is trivial and so is its action on the coweight spaces of V_λ. We therefore get the following result.
Suppose 𝐆 is split and Φ_F = Φ_F̅ is irreducible. Then for all minuscule λ∈Λ^+, 𝔖_λ(X) = ∏_{μ∈ W λ} ( 1 - e^μ X ) ∈ℛ[Λ]^W[X].
The content of this subsection is developed in Exercises 23-24 of §1 and Exercise 5 of §2 in <cit.>. While the results are well-known, the version we need and their
written proofs seem harder to find. We have included proofs here for future reference.
§.§ Kazhdan-Lusztig theory
We finish this section by recording an important property of the coefficients of Satake transform when taken modulo q - 1. We assume for all of this section that 𝐆 is split and Φ_F̅ = Φ_F is irreducible. We refer the reader to <cit.>, <cit.> and <cit.> for the material presented here.
See also <cit.> for a generalization to non-split case.
The Hecke algebra ℋ_ℛ(W_I) of W_I is the unital associative ℛ-algebra with ℛ-basis { T_w }_ w ∈ W _I subject to the relations
T_s^2 = (q-1) T_s + q T_e for s ∈ S_aff
T_w T_w' = T_ww' if ℓ(w) + ℓ(w') = ℓ(ww')
Each element T_w possesses an inverse in ℋ_ℛ ( W_I ). Explicitly, T_s^-1 = q^-1 T_s - ( 1 - q^-1 ) T_e. The ℤ-linear map ι : ℋ_ℛ(W_I) →ℋ_ℛ(W_I) induced by T_w↦ (T_{w^{-1}})^{-1} and q^{1/2}↦ q^{-1/2} is a ring automorphism of order two known as the Kazhdan-Lusztig involution.
For each x , w ∈ W_I such that x ≤ w in the (strong) Bruhat ordering, the Kazhdan-Lusztig polynomials P_x,w (q) ∈ℤ[q] (considering q as an indeterminate) are uniquely characterized by the following three properties:
* ι ( q ^ - ℓ ( w ) / 2 ∑ _ x ≤ w P_x,w(q) T_x ) = q ^ - ℓ ( w ) / 2 ∑ _ x ≤ w P_x,w ( q ) T_x,
* P_x,w(q) is a polynomial of degree at most ( ℓ(w) - ℓ(x) - 1 ) / 2 if x ⪇ w,
* P _ w , w ( q ) = 1.
If x ≰ w, we extend the definition of these polynomials by setting P_x,w(q) = 0. We will refer to P_x,w for any x,w ∈ W_I as KL-polynomials.
For any λ∈Λ, there is a unique element denoted w_λ which has the longest possible length in the double coset W t(λ) W ⊂ W_I. When λ∈Λ^+, this element is t(λ) w_∘ and ℓ(t(λ) w_∘ ) = ℓ ( t(λ) ) + ℓ ( w_∘ ) = 2 ⟨λ , δ⟩ + ℓ ( w_∘ ). For any λ, μ∈Λ^+, we have λ≽μ (<ref>) iff w_λ≥ w _ μ.
Let λ∈Λ^+ and χ_λ∈ℤ[ Λ ]^W denote the trace of 𝐌̂ on V_λ. Then
χ_λ = ∑ _ μ≼λ q ^ - ⟨λ , δ⟩ P _ w_μ , w_λ (q) 𝒮 ( K ϖ^μ K )
where the sum runs over μ∈Λ^+ with μ≼λ.
See <cit.>. We also note that the proof provided in <cit.> carries over with minor changes.
χ_λ = ∑ _μ≼λ P_w_μ , w_λ (1) e ^ W μ.
Using Macdonald's formula (Theorem <ref>) for the expression 𝒮(K ϖ^μ K) in the Kato-Lusztig formula <ref>, we obtain an expression for χ_λ as a linear combination of the e^W μ with coefficients in ℛ_q (see <cit.>). Since χ_λ is independent of q, we can formally replace q with 1, which yields the expression above.
Let ℐ = ℐ_q⊂ℛ _q denote the ideal generated by q ^1/2 - 1 and let 𝒮 = 𝒮_q : = ℛ/ℐ. For f ∈ℛ[Λ] ^W, we let [f] ∈𝒮[Λ]^W denote the image of f. Similarly, for ξ∈ℋ_ℛ(K \ G / K ), we let [ξ] ∈ℋ_𝒮 ( K \ G / K ) denote the class of ξ. For f = ∑ _ μ∈Λ^+ c_μ e^W μ∈ℛ[Λ]^W, let
ξ_f : = ∑ _μ∈Λ ^ + c_μ (K ϖ^μ K ) ∈ℋ_ℛ(K \ G / K ) .
Let f ∈ℛ[Λ]^W and ξ = 𝒮^-1(f). Then [ξ] = [ξ_f ].
Since the χ_λ form a ℤ-basis for ℤ[ Λ]^W, it suffices to establish the claim for f = χ_λ. But this follows by the Kato-Lusztig formula and Corollary <ref>.
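As an illustration, take 𝐆 = GL_2 and λ = 2f_1 in the notation of <ref>. Then V_λ is the symmetric square of the standard representation, so χ_λ = α^2 + αβ + β^2 = e^{W(2f_1)} + e^{W(f_1+f_2)}. Comparing with Example <ref> in the Kato-Lusztig formula gives P_{w_{f_1+f_2}, w_{2f_1}}(q) = 1, whence 𝒮^{-1}(χ_λ) = q^{-1}(K ϖ^{2f_1} K) + q^{-1}(K ϖ^{f_1+f_2} K). Modulo ℐ this is congruent to ξ_{χ_λ} = (K ϖ^{2f_1} K) + (K ϖ^{f_1+f_2} K), as predicted by Corollary <ref>.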
§ DECOMPOSITIONS OF
DOUBLE COSETS
In this section, we derive using the elementary theory of Tits systems a recipe for decomposing certain double cosets into their constituent left cosets. Invoking the existence of such a system on the universal covering of the derived group of a
reductive group over a local field, we obtain a recipe for decomposing Hecke operators arising out of double cosets of what are known
as parahoric subgroups
of unramified reductive groups.
The method used here for decomposing such double cosets is based on the one introduced in <cit.> in the setting of split Chevalley groups.
Theorem <ref>, the main result of this section, will be our primary tool for executing the machinery of <ref> in concrete situations.
§.§ Motivation
To motivate what kind of decomposition we are looking for, let us take a look at the case of decomposing K σ K where K = GL_n(ℤ_v) for v a rational prime and σ = diag(v,…, v, 1, …, 1 ) where the number of 1's is k. Let G denote GL_n(ℚ_v). There is a natural G-equivariant bijection between G / K and the set of ℤ_v-lattices in ℚ_v^n where K is mapped to the standard lattice. Then σ K corresponds to the lattice generated by the basis where the first n - k standard vectors are replaced by multiples of the uniformizer v. Thus K σ K / K corresponds to the K-orbit of this lattice. It is clear that any such lattice lies between the standard lattice ℤ_v^n and v ℤ_v^n. Reducing modulo v therefore gives a bijection between K σ K / K and the 𝔽_v-points of the Grassmannian Gr(k,n) of k-dimensional subspaces in an n-dimensional vector space. Since Gr(k,n)(𝔽_v) admits a stratification by Schubert cells, one obtains an explicit description of K σ K / K by taking ℤ_v lifts of their 𝔽_v points. See Example <ref> that illustrates this for n = 4, k = 2. We would like a similar recipe for more general reductive groups and arbitrary cocharacters.
§.§ Coxeter systems
Throughout this subsection, (W,S) denotes a Coxeter system. Given X ⊂ S, we let W_X⊂ W be the group generated by X. Then (W_X, X) is a Coxeter system itself and W_X∩ S = X. We refer to groups obtained in this manner as standard parabolic subgroups of (W , S ). Let ℓ : W → denote the length function. Then ℓ_| W_X is the length function on W_X. Given X , Y ⊂ W and a ∈ W, consider an element w ∈ W_X a W_Y of minimal possible length. The deletion condition for Coxeter groups implies that any w ' ∈ W _ X a W_Y can be written as w' = x w y for some x ∈ W_X, y ∈ W_Y such that
ℓ ( w' ) = ℓ ( x ) + ℓ ( w ) + ℓ ( y ) .
It follows that w ∈ W_X a W_Y is the unique element of minimal possible length. We refer to w as the (X,Y)-reduced element of W_X a W_Y and denote the set of (X,Y)-reduced elements in W by [ W_X\ W / W_Y ]. If w ∈ W is (X, ∅ )-reduced, then we have the stronger property that ℓ ( x w ) = ℓ ( x ) + ℓ(w) for all elements x ∈ W_X. An arbitrary σ∈ W can be written uniquely as σ = xw for some x ∈ W _X and w ∈ W a (X,
∅)-reduced element. Similarly for (∅ , Y )-reduced elements. An element in W is (X,Y)-reduced iff it is (X, ∅ )-reduced and ( ∅, Y)-reduced.
The stronger properties of minimal length representatives for one-sided cosets of parabolic subgroups can be generalized to double cosets as follows. Let σ∈ W be (X, Y )-reduced. Then W_X∩σ W_Yσ ^-1 is a standard parabolic subgroup of (W_X, X) generated by Z:= X ∩ (W_X∩σ W_Yσ^-1) and
ℓ ( τσυ ) = ℓ ( τσ) + ℓ ( υ ) = ℓ ( τ ) + ℓ ( σ ) + ℓ ( υ )
for any τ∈ [ W_X / W_ X∩σ W_Y σ ^ - 1 ], υ∈ W_Y. In other words, the equality above holds for any (X,Y)-reduced element σ∈ W, any (∅, Z)-reduced element τ∈ W_X and arbitrary υ∈ W_Y.
There is a generalization of these facts to a slightly larger class of groups. Let Ω be a group and Ω× W → W be a left action that restricts to an action on Ω× S → S. We refer to elements of Ω as automorphisms of the system (W,S). Since such automorphisms are length preserving, we may form the extension W̃ : = W ⋊Ω and extend the length function ℓ : W̃→ by declaring ℓ( σρ ) = ℓ ( σ ) for σ∈ W, ρ∈Ω. We refer to elements of Ω⊂W̃ as length zero elements. Given A ⊂ W, we denote by A ^ ρ the set ρ A ρ ^-1⊂ W. Then ρ W_Xρ ^-1 = W _ X ^ρ⊂ W for any X ⊂ S. Given X , Y ⊂ S, b = a ρ∈W̃ where a ∈ W, ρ∈Ω, there is again a unique element w ∈ W_X b W_Y of minimal possible length given by w = σρ where σ is the (X, Y ^ ρ )-reduced element in W _X a W_ Y ^ ρ. Moreover W_X∩ w W_Y w^-1 = W_X∩σ (W_ Y ^ ρ ) σ^-1 is still a standard parabolic subgroup of W_X with respect to X and the length formula (<ref>) continues to hold when σ is replaced with w = σρ.
We continue to call the unique element σρ as the (X,Y)-reduced element of W_X b W_Y and denote the collection obtained over all double cosets by [ W_X\W̃ / W_Y ]. If w ∈W̃ is (X, ∅)-reduced, we again have ℓ(xw) = ℓ(x) + ℓ(w) for all x ∈ W_X.
The result on (X,Y)-reduced elements in the first paragraph above appear in <cit.> from which we have also borrowed its terminology. See also <cit.>.
Detailed proofs of all claims in the second paragraph can be found in <cit.> or <cit.>.
Groups W̃ as above are sometimes called quasi-Coxeter groups.
§.§ Tits Systems
A Tits system 𝒯 is a quadruple (G, B , N,S) where G is a group, B , N are two subgroups of G and S is a subset of N / ( B ∩ N ) such that the following conditions are satisfied:
(T1) B ∪ N generates G and T = B ∩ N is a normal subgroup of N
(T2) S generates the group W = N / T and consists of elements of order 2
(T3) s B w ⊂ B w B ∪ B s w B for all s ∈ S, w ∈ W.
(T4) s B s ≠ B for all s ∈ S
We call W the Weyl group of the system and let ν : N → W denote the natural map.
For any v, w ∈ W, the products w B, Bvw , vBw etc are well-defined since if, say, n_w∈ N is a representative of w, then any other is given by n_w t for t ∈ T ⊂ B and one has n_w t = t ' n_w for some t' ∈ T by normality of T in B ∩ N.
For any such system, the pair (W,S) forms a Coxeter system. We denote by ℓ : W → the corresponding length function. The set S equals the set of non-trivial elements w ∈ W such that B ∪ B w B is a group. Hence S is uniquely determined by the groups G, B , N and the axioms (T1)-(T4). We therefore also say that (G,B,N) is a Tits system or that (B,N) constitutes a Tits system for G. The axiom (T3) is equivalent to
BsBwB ⊂ BwB ∪ BswB.
Since BsBwB is a union of double cosets, it must equal either BswB or BwB ∪ Bs wB and the two cases correspond to whether ℓ(sw) equals ℓ(w) + 1 or ℓ(w)-1.
In particular, Bs Bs B equals B ∪ Bs B by (T4).
The subsets BwB ⊂ G for w ∈ W are called Bruhat cells which provide a decomposition
G = ⊔_{w ∈ W} B w B
called the Bruhat-Tits decomposition. If w = s_ 1⋯ s_ℓ(w) is a reduced decomposition of W, then BwB = Bs_1 B · B s_2 B ⋯ B s_ℓ(w) B. A subgroup of G that contains B is called a standard parabolic. There is a bijection between such subgroups of G and subsets X of S given in one direction as follows: given X ⊂ S, we let K_X : = B W_X B ⊃ B where W_X⊂ W is the group generated by X. Then K_X is the standard parabolic subgroup associated with X. In particular, K_∅ = B, K_S = G. Any standard parabolic subgroup of G equals its own normalizer in G.
If N ' is a subgroup of N such that ν(N') = W_X, then (K_X, B, N', X) is a Tits system itself.
If X, Y ⊂ S, the bijection B \ G / B ≅ W induces a bijection
K_X\ G / K_Y≅ W_X\ W / W_Y
given by sending K_X w K_Y↦ W_X w W_Y. For Z a normal subgroup of G contained in B, denote G ' = G/Z and let B' = B/Z, N' = N / ( N ∩ Z ) denote the images of B , N in G '. Set W ' = N' / (B ' ∩ N ') and S ' the image of S under W → W '. Then (W,S) → (W ', S') is an isomorphism of Coxeter groups and (G', B', N', S' ) is a Tits system which is said to be induced by (G,B,N,S).
Let (G,B,N,S) be a Tits system. We say that the system is commensurable if B s B / B is finite for all s ∈ S. Then Bw B/ B is finite for all w ∈ W. We let q_w denote the quantity | B w B / B | = [ B : B ∩ w B w^-1 ].
Let (G,B,N,S) be a commensurable Tits system. For any σ , τ∈ W such that ℓ( σ ) + ℓ ( τ ) = ℓ ( στ ), we have q_{στ} = q_σ q_τ.
Since B w B / B is finite for all w ∈ W, one may form the convolution algebra ℋ_ℤ ( B \ G / B ) with product ( B w B ) * (B v B) given as in <ref>. The linear map ind : ℋ_ℤ ( B \ G /B ) →ℤ given by ( B w B ) ↦ q_w is then a homomorphism of rings. If s ∈ S, w ∈ W, we have
( Bw B) * ( B s B ) = ∑_ u ∈ W c_w, s ^u ( B u B )
where c_w,s^u = | ( B w B ∩ u BsB )/ B | (see eq. (<ref>)). Note that c^u_w,s≠ 0 if and only if BuB ⊂ B w B s B. Suppose that ℓ ( w ) + ℓ ( s) = ℓ (w s ), so that BwBsB = Bws B. This implies that c_w,s^ u = 0 for u ≠ ws and that wBsB ⊂ B ws B. Since BsBsB = B ∪ BsB, we have
ws BsB ⊂ w ( B s B s B ) = w ( B ∪ BsB) ⊂ wB ∪ BwsB .
Using the above inclusion, we see that
w B ⊂ BwB ∩ ( ws BsB ) ⊂ BwB ∩ ( wB ∪ BwsB ) = wB
where the last equality follows by disjointness of BwB, BwsB. It follows that BwB ∩ ws BsB = w B and therefore c^ w s _ w, s = 1. Combining everything together, we see that ( B w B ) * ( B s B ) = ( B ws B ). Repeating this argument by writing σ = w s = w' s' s, we see that ( B σ B ) = ( B s_1 B ) * ⋯ * ( B s_ℓ(w) B ) where σ = s_1⋯ s_ℓ(w) is a reduced decomposition. Since ind is a homomorphism, we see that q_σ = q_s_1⋯ q_s_ℓ(σ ) and similarly for q_τ, q_τσ. The claim follows since the product of two reduced expressions for σ, τ in that order is a reduced
word expression for στ.
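For instance, for G = GL_2(F) with B its standard Iwahori subgroup and S_aff = { w_0 , w_1 } as in <ref> below, one has q_{w_0} = q_{w_1} = q, so the lemma gives q_w = q^{ℓ(w)} for every w ∈ W_aff.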
Let (G,B,N,S) be a Tits system and φ : G →G̃ be a homomorphism of groups. Then φ is said to be (B,N)-adapted if
(i) ker(φ) ⊂ B,
(ii) for all g ∈G̃, there is h ∈ G such that g φ(B) g ^-1 = φ ( h B h ^-1 ) and g φ(N) g^-1 = φ ( h N h^-1 ).
For any such map, φ(G) ◃G̃ and the induced map G / ker(φ) ↪G̃ is adapted with respect to the induced Tits system on G / ker(φ).
Let φ : G →G̃ be a (B,N)-adapted injection and consider G as a (necessarily normal) subgroup of G̃. Denote by T = B ∩ N, W = N / T as above and set Ω = G̃ / G. Let B̂, N̂ denote respectively the normalizers of B, N in G̃ and set Γ = B̂∩N̂. Since every g ∈G̃ has a h ∈ G such that g^-1 h ∈Γ, we see that G̃ = Γ G. If g is taken to be in B̂, h is forced to lie in B as B equals its own normalizer in G. Therefore B̂ = Γ B. That N_G (B) = B also implies that Γ∩ G = Γ∩ B from which it follows that Γ / Γ∩ B and B̂/ B are both canonically isomorphic to Ω.
Define Ñ = N Γ and T̃ = Ñ∩ B. As Γ normalizes N, Ñ = N Γ = Γ N is a group and therefore so is T̃. Invoking N_G(B) = B again, we see that T̃ = N Γ∩ B = T (Γ∩ B ). Since Γ normalizes both B and N, it normalizes the intersections T = B ∩ N and Γ∩ B. Thus Γ normalizes the product T̃ = T ( Γ∩ B ). If n ∈ N, b ∈Γ∩ B, there exist n' ∈ N such that bn = n ' b. The decomposition (<ref>) implies that n', n represent the same class in W, and so n 'n^-1 = b n b ^-1 n ^-1∈ T. This implies that n b ^-1 n ^-1 lies in T̃. It follows from this that N also
normalizes T̃. Consequently, T̃ is a normal subgroup of Ñ.
We let
W̃ = Ñ / T̃ .
Since N contains T, N ∩T̃ = N ∩ T ( Γ∩ B ) = T ( N ∩Γ∩ B ) = T .
Similarly Γ∩T̃ = Γ∩ B.
Thus
the inclusion of N (resp. Γ) in Ñ allows us
to identify W (resp. Ω) as a subgroup of W̃. Since Γ normalizes N, Ω normalizes W. Since Γ∩ N = Γ∩ B ⊂T̃, W ∩Ω is trivial in W̃.
It follows that
W̃ = W ⋊Ω .
Since γ (B ∩ B w B ) γ^-1 = B ∩ B γ w γ^-1 B for any w ∈ W, γ∈Γ and since B ∩ B u B is a group for u non-trivial if and only if u ∈ S, we see that Ω normalizes S. Consequently Ω acts on (W,S) by automorphisms and we may extend the length function from W to W̃. From the decomposition (<ref>) and the normalizing properties of Γ, we obtain a generalized Bruhat-Tits decomposition
G̃ = ⊔_{w ∈W̃} B w B .
Similarly, if X, Y ⊂ S, K_X, K_Y⊂ G denote the corresponding groups, we obtain from <ref> a decomposition K_X\G̃ / K_Y≅ W_X\W̃ / W_Y.
For the general theory of Tits systems, we refer the reader to <cit.>.
The material on (B,N)-adapted morphisms and commensurable Tits systems is developed in Exercises 2, 8, 22, 23, 24 of op. cit. and we have included their proofs here.
The terminology of Definition <ref> is taken from <cit.>. This notion is referred to as generalized Tits systems in <cit.>.
§.§ Decompositions
Assume for all of this subsection that (G,B,N,S) is a commensurable Tits system and φ : G ↪G̃ is a (B,N)-adapted inclusion. Retain also the notations W, W̃, Ω and q_w for w ∈ W introduced above. For each s ∈ S, let _s⊂ G denote a set of representatives of B / ( B ∩ s B s^-1 ) (so |_s| = q_s) and let s̃ denote a lift of s to N under ν (so that ν( s̃ ) = s). Define
g_s : _s→ G , _s∋κ↦κs̃
considered as a map of sets.
Fix a w = σρ∈W̃ where σ∈ W, ρ∈Ω and let ρ̃∈Γ denote a lift of ρ. Then ρ̃ B = B ρ̃ is independent of the choice of the lift and we may therefore denote ρ̃ B
simply as ρ B. Let m = m_w : = ℓ ( σ ) denote the length of σ and let r( σ ) = (s_1, … , s_m) denote a fixed reduced word decomposition of σ. Denote by _r(σ) the product _ s_1×_s_2×⋯×_s_m .
B w B = _ κ⃗∈_r(σ ) g_s_1 ( κ_1 ) ⋯ g_s_m ( κ_m ) ρ B
where κ_i denotes the i-th component of κ⃗.
We have B w B = B σ B ρ = B s_1 B ⋯ B s_m B ρ. Now
B σ B = ⋃ _κ_1∈_1 g_s_1( κ_1 ) B s_2 B ⋯ B s_m B
= ⋃ _ ( κ_1, κ_2 )
∈_1×_2 g_s_1 ( κ_1 ) g_s_2( κ_2 ) B s_3 B ⋯ B s_m B = ⋯ = ⋃ _ κ⃗∈_r(w) g_s_1( κ_1 ) ⋯ g_s_m ( κ _m ) B
As | B σ B / B | = q_σ = q_s_1⋯ q_s_ m by Lemma <ref>, the union above is necessarily disjoint. Multiplying each coset in the decomposition above on the right by ρ and moving it inside next to σ on the left hand side, we get the desired decomposition of B w B.
Retain the notations w, σ, ρ, m. We
define 𝒳_r(σ) , ρ : _ r ( σ ) →G̃ / B to be the map κ⃗↦ g_s_1 ( κ_1 ) ⋯ g_s_m( κ_m ) ρ̃ B.
(where we have suppressed the dependency on the choices of lifts).
Then in this notation,
BwB = _κ⃗∈_r(σ)𝒳_r(σ) , ρ (κ⃗) .
In particular, the image of 𝒳_r( σ ) , ρ in G̃ / B is independent of all the choices involved. Since we will only be interested in the image of 𝒳_r(σ), ρ modulo subgroups of G containing B, we will abuse our notation to denote this map simply as 𝒳_w. Moreover we will consider 𝒳 _w as taking values in G̃ as opposed to G̃/B, if it is understood that these are representatives of left cosets for some fixed subgroup that contains B.
Similarly we denote _r(σ) by _ w.
Let X , Y ⊂ S, W_X, W_Y be the subgroups of W generated by X, Y respectively and let K_X = B W_X B, K_Y = B W_Y B. For any w ∈ [ W_X\W̃ / W_Y ], we have
K_X w K_Y = _ τ _ κ⃗∈_τ w 𝒳_τ w ( κ⃗ ) K_Y
where τ runs over [ W_X / ( W_X ∩ w W_Y w ^ - 1 ) ]. In particular, | K_X w K_Y / K_Y | =
∑ _τ | _τ w |.
First note that K_X w K_Y = ⋃_ x ∈ W _X B x B w B K_Y = ⋃_x ∈ W_X B x w K_Y where the second equality follows
since ℓ ( x w ) = ℓ ( x ) + ℓ ( w ) for all x ∈ W_X (see <ref>). Since B \G̃ / K _ Y is in bijection with W̃ / W_Y, we infer that Bx w K_Y = B x' w K_Y for x , x ' ∈ W_X if and only if xw W_Y = x'w W_Y. It follows that
K_X w K_Y = _ τ B τ w K_Y
where τ runs
over a set of representatives of W_X / ( W_X∩ w W_Y w^-1 ) and which we are free to take from the set A := [W_X / ( W_X∩ w W_Y w ^ -1 ) ] ⊂ W_X. Fix a τ∈ A. We have
B τ w K_Y = B τ w B K_Y = ⋃ _ κ∈ _ τ w 𝒳_τ w ( κ⃗ ) K_Y
by Lemma <ref>. Say κ⃗_1 , κ⃗_2∈_τ w are such that g_1 K _Y = g_2 K _Y where g_ i : = 𝒳_τ w ( κ⃗_i ) ∈G̃ for i = 1 , 2. As K_Y = B W_Y B, we have
g_1 K_Y = _ y ∈ W_Y _ κ⃗∈ _ y g_1𝒳 _ y ( κ⃗ ) B
by Lemma <ref> again. As g_2 B ⊂ g_2 K_Y = g_1 K_Y, there exists y ∈ W _Y and κ⃗_y∈_y such that g_1𝒳_y ( κ⃗_y ) B = g_2 B . Now observe that
B g_1𝒳_y ( κ⃗_y ) B ⊂ B g_1 B 𝒳_y ( κ⃗ _y ) B = B τ w B y B
and B τ w B y B = B τ w y B since ℓ ( τ w y ) = ℓ ( τ w ) + ℓ ( y ) by
(<ref>). Therefore, g_2 B = g_1𝒳 _ y ( κ _ y ) B ⊂ B τ w y B. Since g_2 B is also contained in B τ w B, we see that B τ w B = B τ w y B . This can only happen if y = 1_W_Y which in particular means that _y is a singleton and 𝒳_y ( κ⃗ _y ) B = B. We therefore have g_1 B = g_1𝒳_y(κ⃗ ) B = g_2 B which in turn implies that κ_1 = κ_2. The upshot is that the right hand side of (<ref>) is a disjoint union for each fixed τ∈ A. Thus
K _ X w K _ Y = _ τ∈ A B τ w K_Y = _ τ∈ A _ κ⃗∈ _ τ w 𝒳 _ τ w , K_Y ( κ⃗ )
which completes the proof.
The proof of Theorem <ref> is inspired by <cit.>.
§.§ Reductive Groups
In this subsection, we recall the relevant results from the theory of Bruhat-Tits buildings. We primarily follow <cit.> in our exposition and refer the reader to book <cit.> for additional details and background.
Retain the notations introduced in <ref> and <ref>.
In particular, denotes an unramified reductive group over F and G its group of F points. Additionally, we let 𝐆̃ be the simply connected covering of the derived group 𝐆 ^ der of 𝐆 and let ψ : 𝐆̃→𝐆 denote the resulting map. For a group 𝐇⊂𝐆, we denote by 𝐇̃⊂𝐆̃ the pre-image of 𝐇 under ψ.
Let ℬ be the Bruhat-Tits building of G̃ : = 𝐆̃(F) and let 𝒜⊂ℬ be the apartment stabilized (as a subset) by à : = 𝐀̃(F). By definition 𝒜 is an affine space under the real vector space Ṽ := X_*( 𝐀̃ ) ⊗ℝ. Let M̃ : = 𝐌̃ ( F). There is a unique homomorphism ν : M̃→Ṽ determined by the condition
χ ( ν ( m ) ) = - ord ( χ ( m) )
for all m ∈M̃ and every F-rational character χ of 𝐌̃. The kernel of ν is a maximal compact open subgroup M̃ ^∘ of M̃. Set Ã^∘ : = Ã∩M̃^∘. Then à / Ã^∘ = M̃ / M̃^∘ via the inclusion Ã↪M̃ and the image ν(M̃) ⊂Ṽ is identified with X_*(𝐀̃). Let 𝒩̃ denote the stabilizer of 𝒜 (as a subset of ℬ). The map ν admits a unique extension 𝒩̃→Aut(𝒜 ) where Aut(𝒜) denotes the group of affine automorphisms of 𝒜. The action of G̃ on ℬ is then uniquely determined by this extension.
Fix x_0∈𝒜 a hyperspecial point via which we identify Ṽ with 𝒜. Then ν identifies 𝒩̃ / M̃^∘ with W_aff = Λ̃⋊ W. Let C ⊂𝒜 be an alcove (affine Weyl chamber) containing x_0 such that the set S_aff chosen in
<ref> is identified with the set of reflections in the walls of C. Let B̃ be the (pointwise) stabilizer of C in G̃. Then (G̃, B̃, 𝒩̃ )
is a Tits system with Weyl group W_aff and the morphism ψ : G̃→ G is (B̃ , 𝒩̃ )-adapted. The action of G on G̃ induced by the natural map 𝐆→Aut(𝐆̃) determines an action of G on ℬ. The stabilizer 𝒩⊂ G of the action of G on 𝒜 equals the normalizer N_G(A) of A in G. If we denote by ν : 𝒩→Aut(𝒜) the canonical morphism, the inverse image of translations coincides with M = 𝐌(F) and 𝒩̃/ M̃ = 𝒩 / M = W. By the discussion in <ref>, the quotient G / ψ(G̃) acts naturally on (W_aff , S_aff). There is thus an induced map ξ : G →Aut(𝒜 ) such that each ξ(g) for g ∈ G sends C to itself. Let
G ^1 : = { g ∈ G | |χ(g)| = 1 for all F-rational characters χ : 𝐆→𝔾_m} .
Then M^∘ = M ∩ G^1 and ψ(G̃) ⊂ G ^1. Let B ⊂ G^1 be the set of elements that stabilizes C (as a subset of ℬ) and K ⊂ G^1 the sub-group of elements stabilizing x_0.
Then B is an Iwahori subgroup of G and K a hyperspecial subgroup. In particular, K = ⋃_{w ∈ W} B w B. We will assume that the group scheme 𝒢 in <ref> is chosen so that 𝒢 (𝒪_F ) = K.
Finally, let G^0 = G^1∩ker(ξ) and let 𝒩^0 = G^0∩𝒩. Since G
= ψ(G̃) M
and ψ(G̃) ⊴ G, we infer that G^0 = ψ(G̃) M^∘, B = ψ(B̃) M^∘, 𝒩^0 = ( ψ(G̃) ∩𝒩 ) M^∘ = ψ(𝒩̃) M^∘.
It is then elementary to see that (G^0 , B , 𝒩^0 ) is a Tits system with Weyl group W_aff (see <cit.>)
and that G^0↪ G is a (B, 𝒩^0)-adapted inclusion whose extended Weyl group is the Iwahori Weyl group W_I.
One may therefore apply the result of Proposition <ref> to the inclusion G^0→ G to obtain decompositions of double cosets in K_1\ G / K_2 where K_1, K_2⊂ G^0 are subgroups containing B.
If s ∈ W_aff denotes the reflection in a wall of the alcove, B / ( B ∩ s B s ) has cardinality q^{d(s)} for some positive integer d(s), and a set of representatives can be taken in the F-points of the root group U_α where α∈Φ_F is the vector part of the affine root associated with s. The precise description of d(s) is given in terms of the root group filtrations and is recorded on the corresponding local index, which is the Coxeter diagram of W_aff with additional data. When 𝐆 is split, d(s) = 1. We refer to <cit.>
for more details.
In the notations of <cit.>, we have 𝐌(F)^1 = 𝐌(F)^0 = M^∘ as 𝐌 is split over an unramified extension.
The group G^0
therefore coincides with <cit.>. That (G^0, B, 𝒩^0 , S_aff ) forms a Tits system is established in Theorem 7.5.3 of op.cit.
In the sequel, we will denote the Iwahori subgroup B ⊂ G by the letter I.
§.§ Decompositions for _2
Retain the notations introduced in <ref>. Let χ^∨ = f_1 - f_2 denote the coroot associated with χ and s = s_χ denote the unique non-trivial element in W. Let
w_0 = [ 0 1/ϖ; ϖ 0 ] , w_1 = [ 0 1; 1 0 ] , ρ = [ 0 1; ϖ 0 ]
Then w_0, w_1, ρ normalize A and ρ w_0ρ^{-1} = w_1, ρ w_1ρ^{-1} = w_0. Under the conventions introduced, the matrices w_0, w_1 represent the two simple reflections S_aff = { t(χ^∨ ) s , s } of the affine Weyl group ⟨ f_1 -f_2⟩⋊ W. The element ρ represents t(-f_2) s_χ∈Λ⋊ W = W_I and is a generator of Ω = W_I / W_aff. The action of ρ on ℬ preserves the alcove C and permutes the two walls corresponding to w_0, w_1. We say that ρ induces an automorphism of the extended Coxeter-Dynkin diagram of type A_1 (two nodes, labelled 0 and 1) given by switching the two nodes.
Let I denote the Iwahori subgroup corresponding to the set of affine roots χ and -χ+1 (considered as functions on the space Λ⊗ℝ). Then I is the usual Iwahori subgroup of GL_2(𝒪_F) given by matrices that reduce to upper triangular matrices modulo ϖ. Let x_0, x_1 : 𝔾_a→ GL_2 denote the following `root group' maps
x_0 : u ↦[ 1 0; ϖ u 1 ], x_1 : u ↦[ 1 u; 0 1 ] .
and let [𝕜] ⊂𝒪_F denote a set of representatives of the residue field 𝕜 of F. Then the elements x_i ( κ ) for κ∈ [𝕜] constitute a set of representatives for I / ( I ∩ w_i I w_i ) for i = 0 , 1.
Let g_ w_i : [𝕜] → G be the maps κ↦ x_i(κ) w_i. For w = s_w, 1⋯ s_w, ℓ(w)ρ_w∈ W_I a reduced word decomposition (where s_w,i∈ S_aff ,
ρ_w∈Ω = ρ^ℤ) such that w is the shorter of the two elements in w W, define
𝒳_w : [𝕜]^ℓ(w) → G/ K
(κ_1 ,…, κ_ℓ(w) ) ↦ g_s_w,1 ( κ_1 ) ⋯ g_s_w, ℓ(w) (κ_ℓ(w)) ρ _ w K .
The maps 𝒳_w may be thought of as parameterizing _F lifts of certain Schubert cells[See <ref> that makes the connection with classical Schubert cells of Grassmannians more precise.]
and will be referred to as such.
Proposition <ref> provides a decomposition of double cosets K ϖ ^ λ K for λ∈Λ in terms of these maps. Let us illustrate this decomposition with a few simple examples.
Let λ = f_1. Then K ϖ ^λ K = K ϖ^λ^opp K = K ρ K. Clearly ρ∈ [W \ W_I / W ] and [ W / (W ∩ρ W ρ^-1 ) ] = W. The decomposition therefore reads
K ϖ^λ K / K = im ( 𝒳_ρ ) ⊔im ( 𝒳_w_1ρ ) .
Explicitly, we have
im( 𝒳_ρ ) = { [ 1 0; 0 ϖ ] K } and im(𝒳_{w_1ρ} ) = { [ ϖ κ; 0 1 ] K | κ∈ [𝕜] } .
There are a total of q+1 left cosets contained in K ϖ^λ K.
Let λ = 2f_2. Then K ϖ ^ λ K = K w_0ρ ^2 K and w : = w_0ρ^2∈ [W \ W_I / W ] and [ W / W ∩ w W w^-1] = W.
The decomposition therefore reads
K ϖ^λ K / K = im ( 𝒳_ w ) ⊔im ( 𝒳 _ w_1 w ) .
Explicitly, we have
im(𝒳_w ) = { [ 1 0; κϖ ϖ^2 ] K | κ∈[𝕜] } and im(𝒳_{w_1 w} ) = { [ ϖ^2 κ_1ϖ + κ_2; 0 1 ] K | κ_1 , κ_2∈ [𝕜] } .
There are q
( q + 1 ) cosets contained in K ϖ^λ K.
Cf. Example <ref>.
As seen from the examples, the Schubert cell maps 𝒳_w are recursive in nature and going from one Schubert cell to the `next' amounts to applying a reflection operation on rows and adding a multiple of one row to another. We also note that the actual product of matrices in 𝒳_w in the example above may not necessarily be upper or lower triangular as displayed e.g., with the choices above, 𝒳_w_0ρ^2 ( κ ) = g_w_0(κ) ρ^2 equals
[ 0 1; ϖ^2 κϖ ] .
However, since we are only interested left K-coset representatives, we can replace 𝒳_w_0ρ^2 (κ ) with 𝒳_w_0ρ^2 (κ ) γ for any γ∈ K.
In general, multiplying by a reflection matrix on the left has the effect of `jumbling up' the diagonal entries of the matrix. While performing these computations, it is desirable to keep the `cocharacter' entries on the diagonal and one may do so by applying a corresponding reflection operation on columns using elements of K. In the computations done in Part II,
this will be done without any comment.
In computing 𝒳_w, one can often establish certain `rules' specific to the group at hand that dictate where the entries of the a particular cell are supposed to be written depending on the permutation of λ described by the word. For instance, the rule of filling a Schubert cell
[ ϖ^a □; ◯ ϖ^b ]
as displayed above is as follows:
* if a ≥ b, the ◯-entry is zero and the □-entry runs over a set of representatives of ϖ^a𝒪_F / ϖ^b𝒪_F,
* if a < b, then the □-entry is zero, and the ◯-entry runs over representatives of ϖ^b𝒪_F / ϖ^{a+1}𝒪_F.
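For instance, the cells in Example <ref> follow this rule: im(𝒳_w) corresponds to (a, b) = (0, 2), so its □-entry vanishes and its ◯-entry κϖ takes q values, while im(𝒳_{w_1 w}) corresponds to (a, b) = (2, 0), so its ◯-entry vanishes and its □-entry κ_1ϖ + κ_2 takes q^2 values, accounting for all q(q+1) cosets in K ϖ^{2f_2} K / K.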
§.§ Reduced words
Retain the notations introduced <ref> and <ref>. Fix a λ∈Λ ^ +. The recipe of Proposition <ref> requires writing the reduced decomposition of the word w ∈ W_I of minimal possible length such that K ϖ^λ K = K w K. This is of course the same for K ϖ^λ^oppK. We may equivalently think of W_I as t(Λ) ⋊ W via the morphism (<ref>) and the length we seek is the minimal possible length of elements in W t ( - λ^opp ) W ⊂ t(Λ) ⋊ W. For any μ∈Λ, we denote the minimal possible length in W t(μ) W
by
ℓ_min(t(μ)).
Let Ψ = Φ_F^red⊂Φ_F denote the subset of indivisible roots and let Ψ^+ = Ψ∩Φ_F^+.
For any λ∈Λ, the minimal possible length of elements in t(λ) W ⊂ W_I is achieved by a unique element. If Φ _ F is irreducible, the length of this element is given by
∑ _ α∈Ψ_λ | ⟨λ , α⟩ | + ∑ _ α∈Ψ^λ ( ⟨λ , α⟩ - 1 )
where Ψ _λ = {α∈Ψ ^ + | ⟨λ , α⟩≤ 0 }, Ψ^λ = {α∈Ψ^+ | ⟨λ, α⟩ > 0 }.
If λ∈Λ^+, the minimal length in t(λ) W also equals ℓ_min ( t( λ ) ) = ℓ_min(t (-λ^opp)).
The first claim holds generally for any Coxeter group (<ref>). Assume Φ_F is irreducible. It is clear that P_F^∨ = P(Φ_F^∨ ) is the weight lattice associated with the irreducible reduced root system Ψ. By <cit.>, P^∨_F⋊ W is an extension of the Coxeter group W_aff = Q^∨_F⋊ W by Ω' = P^∨_F / Q^∨_F which acts on W_aff by automorphisms. Thus the length function on W_aff can be extended to P^∨_F⋊ W. Let φ : Λ⋊ W → P^∨_F⋊ W be the map given by (λ, w) ↦ ( λ̅ , w) where λ̅ denotes the image of λ in Λ / X_0. The map φ factorizes as Λ⋊ W → ( Λ / X_0 ) ⋊ W → P^∨_F⋊ W. As both maps in this composition are length preserving, we see that φ is length preserving. The second claim then follows by <cit.>. Since the sum is maximized for dominant λ and is the same for both λ and - λ^opp, we obtain the last claim.
Retain the notation of <ref>. Let λ = 5f_1∈Λ^+. Then
ℓ_min(t(λ)) = ⟨ 5f_1 , e_1 - e_2⟩ - 1 = 5 - 1 = 4 .
Say w ∈ W_I is of length 4 and K ϖ^λ K = K w K. Since ord(det(ϖ^λ)) = 5, we may assume that w = v ρ^5 where v is a word on S_aff = { w_0 , w_1}. Now the final letter of v cannot be w_0, since ρ w_0ρ^{-1} = w_1∈ K. Thus we may assume that v = v' w_1. Since we can only place w_0 next to w_1 for a reduced word, we see that the only possible choice is w = w_0 w_1 w_0 w_1ρ^5.
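Consequently, Proposition <ref> decomposes K ϖ^{5f_1} K / K into the two Schubert cells im(𝒳_w) and im(𝒳_{w_1 w}), of sizes q^4 and q^5 respectively, so that |K ϖ^{5f_1} K / K| = q^4(q + 1).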
§.§ Weyl orbit diagrams
Retain the notations introduced in <ref> and <ref>. Besides the usual Bruhat order ≥ on the Weyl group W, there is another partial order that will be useful to us. We say that w ≽ x for w,x∈ W if there exists a reduced word decomposition for x which appears as a consecutive string on the left of some reduced word for w. The pair (W, ≽ ) is then a graded lattice <cit.> and is known as the weak (left) Bruhat order.
For λ∈Λ, let W^λ denote the stabilizer of λ in W. The Weyl orbit diagram of λ is the Hasse diagram on the set of representatives of W / W^λ of minimal possible length with respect to ≽. As W / W^λ = W λ, the nodes of such a diagram can be labelled by elements of W λ.
Assume that Φ_F is irreducible. Let λ∈Λ^+ and let w_λ∈ W_I be the unique element of minimal possible length such that K ϖ^λ K = K w_λ K. By Proposition <ref>, we see that w _ λ = ϖ^λ^oppσ_λ for a unique σ_λ∈ W and W ∩ w_λ W w_λ^-1 is just the stabilizer of - λ^opp (equivalently λ^opp) in W. So we can make the identification
[ W / ( W ∩ w_λ W w_λ^-1 ) ] ≃ [ W / W ^λ^opp ] .
Thus the decomposition of K ϖ^λ K / K as described by Proposition <ref> can be viewed as a collection of Schubert cells 𝒳_μ, one for each node μ∈ W λ^opp = W λ of the Weyl orbit diagram of λ (though note that 𝒳_μ is an abuse of notation). See the proof of Proposition <ref> which illustrates this point.
In the following, we adapt the convention of drawing the Weyl orbit diagrams of λ∈Λ^+ from left to right, starting from the anti-dominant cocharacter λ^opp and ending in λ. The permutation of λ corresponding to the node then `appears' in the matrices of the corresponding Schubert cell. For example, in the notations of <ref>, the Weyl orbit diagram of f_1 is
f_2 ⟶ f_1
and the matrices in im ( 𝒳_ρ ), im ( 𝒳_w_1ρ ) in Example <ref> have `diagonal entries' given by ϖ^f_2, ϖ^f_1 respectively. We will often omit the explicit cocharacters on the nodes in these diagrams and only display the labels of the arrows. See also <cit.>.
Observe that the shape (in the sense of Definition <ref>) of the matrices in these cells may not match the corresponding cocharacter. In Example <ref>, the shape of the matrices that appear in the decomposition of K ϖ^2f_1 K can be 2f_1, 2f_2 or f_1+f_2 when converted to upper triangular matrices.
§.§ Miscellaneous results
In this subsection, we record assortment of results that are useful in determining the structure of mixed double cosets in practice.
Suppose G is a group and X , Y ⊂ G are subgroups. Then for σ , τ∈ G, X σ Y = X τ Y only if X ∩σ Y σ^{-1} and X ∩τ Y τ^{-1} are X-conjugate.
X σ Y = X τ Y ⟺σ = x τ y for some x ∈ X, y ∈ Y ⟹ X ∩σ Y σ^{-1} = x ( X ∩τ Y τ^{-1} ) x^{-1}.
Let ι : H ↪ G be an inclusion of groups, K ⊂ G a subgroup and U = K ∩ H. Then for any h_1 ,h_2∈ H, g ∈ G, Uh_1 g K = U h_2 g K if and only if U h_1 H_g = U h_2 H_g where H_g denotes H ∩ g K g^{-1}. Moreover for any h ∈ H, the index [ H_{hg} : U ∩ hg K ( hg )^{-1} ] is equal to [H_g : H_g∩ hU h^{-1} ].
The map (of sets) H ↠ H g K / K, h ↦ hgK induces an H-equivariant bijection H / H_g≅ H g K/ K where H acts by left multiplication. Thus the orbits of U on the two coset spaces are identified i.e., U \ H / H_g≅ U \ H g K / K which proves the first claim. For any h ∈ H, H_{hg} = h H_g h^{-1} and H_g∩ hUh^{-1} = h ( U ∩ g K g^{-1} )h^{-1} which proves the second claim.
The next result is helpful in describing the structure of double cosets associated with certain non-parahoric subgroups. It is needed in <cit.>.
Let H be a group, σ∈ H an element and U, U_1 ,X be subgroups of H such that U_1σ U / U, X U_1 / U_1 are finite sets and U_2 = XU_1 is a group. Then U_2σ U / U is finite and
e · ( U_2σ U ) = ∑ _δ ( δ U_1σ U )
where (Y) : H →ℤ denotes the characteristic function of Y ⊂ H, δ∈ X runs over representatives of X / ( X ∩ U_1 ) and e = [U_2∩σ U σ^{-1} : U_1∩σ U σ^{-1} ]. If U_2∩σ U σ^{-1} is equal to the product of X ∩σ U σ^{-1} and U_1∩σ U σ^{-1}, then e = [ X ∩σ U σ^{-1} : X ∩ U_1∩σ U σ^{-1} ].
Let W_i : = U_i∩σ U σ ^-1 for i = 1, 2, Z : = X ∩ U_1 and let γ_1, …, γ_m∈ U_1 be representatives of U_1 / W_1, δ_1 , ⋯, δ_n∈ X be representatives of X / Z. We first show that δ_jγ_i form a complete set of distinct representatives of the coset space U_2 / W_1. Let x ∈ X, u ∈ U_1. Then there exists a z ∈ Z, w ∈ W _1 and (necessarily unique) integers i, j such that xz = δ_j, z^-1 u w = γ_i. In other words, x u W_1 = ( x z) (z^-1 u w ) W_1 = δ_jγ_i W_1. Therefore, every element of U_2 / W _ 1 is of the form δ_jγ_i W _ 1 and so
U _2 = ⋃_j = 1 ^ n ⋃ _ i = 1 ^ m δ_jγ_i W_1
We claim that this union is disjoint. Suppose x, y ∈ X, u , v ∈ U_1 are such that xu W_1 = y v W_1. Then v^-1 y^-1 x u ∈ W _1. Since U_2 is a group containing both v^-1∈ U_1 and y^-1 x ∈ X, v^-1 y ^-1 x ∈ U_2. Since U_2 is equal to X · U_1, there exists x_1∈ X, u_1∈ U_1 such that v^-1 y^-1 x = x_1 u _ 1 or equivalently, y^-1 x = v x_1 u _1. Now
v^{-1} y^{-1} x u ∈ W_1 ⟹ x_1 u_1 u ∈ W_1 ⊂ U_1 ⟹ x_1∈ U_1 ⟹ y^{-1} x = v x_1 u_1∈ U_1 ⟹ x Z = y Z
Thus if x , y are distinct modulo Z, xu W _ 1, y v W _1 are distinct left W_1-cosets for any u, v ∈ U_1. Thus, in the union above, different j correspond to necessarily distinct
W_1-cosets. It is clear that δ_jγ_i_1 W _ 1 = δ_jγ_i_2 W_1 iff i_1 = i_2. Thus the union above is disjoint as both δ_j and γ_i vary.
Now we prove the first claim. Let p : U_2 / W_1→ U_2 / W_2 be the natural projection map. Since U_2 / W_1 is finite, so is U_2 / W_2 and therefore U_2σ U / U. Moreover, as W_2 / W_1↪ U_2 / W_1, e = [ W_2 : W_1 ] is finite. Let y = a W_2∈ U_2 / W_2 be a W_2-coset of U_2. Then p^-1( y) = { a w W_1 | w ∈ W_2} and we have
| p^{-1}(y) | = | p^{-1}(W_2) | = [ W_2 : W_1 ] = e .
Thus in the list of mn left W_2-cosets given by δ_1γ_1 W_2, δ_1γ_2 W_2 , … , δ_nγ_m W_2, each element of U_2 / W_2 appears exactly e times. Equivalently, among the mn left U-cosets δ_1γ_1σ U, δ_1γ_2σ U , … , δ _ m γ _ n σ U,
each element of U_2σ U/U appears exactly e times. Since U_1σ U = _ i = 1 ^ m γ_iσ U, we see that
e ·(U_2σ U ) = ∑ _ i ,j ( δ_jγ_iσ U ) = ∑ _j ( δ_j U_1σ U )
and the first claim is proved. The second claim follows since W_2 = (X ∩σ U σ^-1) W_1 implies that W_2/ W_1 = ( X ∩σ U σ^-1 ) W_1 / W_1 = ( X ∩σ U σ ^-1 ) / ( X ∩ W_1 ).
Part 2. Examples
§ ARITHMETIC CONSIDERATIONS
In this section, we record two embeddings of Shimura-Deligne varieties that are of arithmetic interest from the perspective of Euler systems. Our goal here is only to motivate the local zeta element problems arising from these scenarios, cast them in the axiomatic framework of <ref> and justify various choices of data in order to align these problems with the actual arithmetic situation. In particular, we will make no attempt to study the arithmetic implications of these problems.
In the sections that follow, we solve the resulting combinatorial problems using techniques developed in Part I. These examples are meant to test our machinery in situations where the computations are relatively straightforward in comparison to, for instance, <cit.>. For a concrete arithmetic application of such combinatorial results to Euler system constructions,
we refer the reader to <cit.>.
§.§ Unitary Shimura varieties
Let E ⊂ ℂ be an imaginary quadratic number field and γ∈ Gal(E/ℚ) denote the non-trivial automorphism. Let J = diag(1,…, 1, -1,…,-1) be the diagonal matrix where the number of 1's is p and the number of -1's is q. Clearly γ(J)^t = J, i.e., J is E/ℚ-hermitian. Let GU_p,q denote the algebraic group over ℚ whose R-points for a ℚ-algebra R are given by
GU_p,q(R) : = { g ∈ GL_p+q(E ⊗ R) | γ(g)^t J g = sim(g) J for some sim(g) ∈ R^×} .
The resulting map
sim : GU_p,q→_m is a character called the similitude.
Let
h : ℂ^×→𝐆_ℝ, z ↦ diag(z,…,z, z̅, …, z̅)
(with p copies of z followed by q copies of z̅) and let 𝒳 be the 𝐆(ℝ)-conjugacy class of h. Then (𝐆, 𝒳) constitutes a Shimura-Deligne data that satisfies (SD3) if p, q ≠ 0 (see <cit.> for terminology). The dimension of the associated Shimura varieties is p q. There is an identification 𝐆_E≃𝔾_m,E×GL_p+q, E induced by the isomorphism of E-algebras E ⊗ R ≃ R × R, (e,r) ↦ (er , γ(e)r) for any E-algebra R. The cocharacter μ_h : 𝔾_m,ℂ→𝔾_m,ℂ×GL_p+q,ℂ associated with h is given by z ↦ ( z , diag(z,…,z, 1,…,1) ) (with p copies of z). The reflex field is then easily seen to be E if p ≠ q and ℚ otherwise.
For m ≥ 1 an integer, let 𝐆 : = GU_1,2m-1. Then the so-called arithmetic middle degree[one plus the dimension of the variety] of the Shimura varieties of 𝐆 is 2m. Thus one can construct classes in this degree by taking pushforwards of special cycles of codimension m. One such choice is given by the fundamental cycles of Shimura varieties of
𝐇 : = GU_1,m-1×_𝔾_mGU_0,m,
where the fiber product is over the similitude map. There is
a natural embedding 𝐇↪𝐆 which constitutes a morphism of SD data and gives an embedding of varieties over E.
We note that μ_h for 𝐆 corresponds to the representation of ^L𝐆_E which is trivial on the factor 𝒲_E and which is the standard representation on 𝐆̂ = 𝔾_m×GL_2m. Thus at a choice of a split prime λ of E above ℓ, we are interested in the Hecke polynomial of the standard representation of 𝔾_m×GL_2m. This case is studied in <ref>. When ℓ is inert, we are interested in the base change of the standard L-factor (Remark <ref>).
This setup is studied in <ref> for the case m = 2. As we are pushing forward fundamental cycles of the Shimura varieties of 𝐇, we are led to consider the trivial functor that models the distribution relations of these cycles. See <cit.> for a description of the relevant Galois representations towards which the resulting norm relations are geared.
To construct classes that go up a tower of number fields, we need to specify a choice of torus 𝐓 and a map ν : 𝐇→𝐓, so that the Shimura set associated with 𝐓 corresponds to non-trivial abelian extensions of the base field E. We can then construct classes in towers by considering the diagonal embedding ↪×.
One such choice is 𝐓 := 𝐔_1, the torus of norm one elements in E. It is considered as a quotient of 𝐇 via
ν : 𝐇→𝐓 (h_1, h_2) ↦ h_2 / h_1 .
The extensions determined by the associated reciprocity law are anticyclotomic, i.e., the natural action of Gal(E/ℚ) on them is by inversion. The behaviour of the arithmetic Frobenius Frob_λ at a prime λ of E in unramified extensions contained in such towers is rather special. Let ℓ be the rational prime of ℚ below λ. When ℓ is split, we denote by λ̅ the other prime above ℓ. Then Frob_λ is trivial if ℓ is inert and Frob_λ = Frob_λ̅^-1 if ℓ is split. If ℓ is split, the choice of λ above ℓ allows us to pick identifications 𝐇_ℚ_ℓ≃𝔾_m×GL_m×GL_m and 𝐓_ℚ_ℓ≃𝔾_m, so that ν is identified with the map (c, h_1, h_2) ↦ h_2 / h_1. With these conventions, the induced map ν∘μ_h sends the uniformizer at λ in E_λ^× to 1 ∈𝐓(ℚ_ℓ) if ℓ is inert and to ℓ^-1∈ℚ_ℓ^×≃𝐓(ℚ_ℓ) if ℓ is split. The group 𝐓(ℚ_ℓ) has a compact open subgroup of index ℓ +1 (resp., ℓ-1) if ℓ is inert (resp., split). These groups provide the `layer extensions' for our zeta element problem.
The choice of ν is made to match that in <cit.>. One can equivalently work with ν' that sends (h_1, h_2) ↦ h_1 / h_2, in which case λ is sent to ℓ∈𝐓(ℚ_ℓ) for ℓ split. The Shimura varieties we have written also admit certain CM versions, and the local zeta element problem studied in <ref> applies to these more general versions too.
That the resulting Euler system is
non-trivial is the subject of a forthcoming work. This particular embedding of Shimura varieties is motivated by a unitary analogue of the period integral of Friedberg-Jacquet <cit.>, <cit.>. A first step towards interpolating these periods and the construction of a suitable p-adic L-function is taken in <cit.>, <cit.>.
The inert case of the situation above studied in <ref> also serves as a precursor for a slightly
more involved calculation performed in <cit.> for the twisted exterior square representation.
§.§ Symplectic threefolds
Let 𝐆 : = GSp_4
and 𝐇 = GL_2×_𝔾_mGL_2, where the fiber product is over the determinant map. We have an embedding ι : 𝐇↪𝐆
obtained by considering the automorphisms of the two orthogonal
sub-spaces of the standard symplectic vector space V spanned by e_1 , e_3 and e_2, e_4 where e_i are the standard bases vectors. Let
h : ℂ^×→𝐆_ℝ
(a+b √(-1)) ↦[ a b; a b; -b a; -b a ].
Note that h factors through ι. Let 𝒳_𝐇 (resp., 𝒳_𝐆) denote
the 𝐇(ℝ) (resp., 𝐆(ℝ)) conjugacy class of h. Then (𝐇, 𝒳_𝐇), (𝐆, 𝒳_𝐆) satisfy axioms SV1-SV6 of <cit.> and in particular constitute Shimura data. These Shimura varieties are respectively the fibered product of two modular curves and the Siegel modular threefold that parametrizes abelian surfaces with polarization and certain level structures. The reflex fields of both of these varieties are ℚ. The cocharacter μ_h associated to h corresponds to the four dimensional spin representation of ^LGSp_4 = GSpin_5×𝒲_ℚ and we are thus interested in establishing norm relations involving the Hecke polynomial associated to the spinor representation. See <cit.> for a description of the relevant four dimensional Galois representations towards which such norm relations are geared.
As the codimension of the two families of Shimura varieties is 1, one needs to push classes from H^2 of the source variety to be able to construct classes in the arithmetic middle degree of the target Shimura variety. As first proposed by Lemma in <cit.>, one can take (integral) linear combinations of the cup products of two Eisenstein classes in the H^1 of each modular curve for this purpose. The distribution relations of such cup products can then be modelled via the tensor product of two CoMack functors associated to Schwartz spaces of functions on 2 × 1 adelic column vectors minus the origin. This tensor product is itself a Schwartz space over a four dimensional adelic vector space (minus two planes that avoid the origin) which then becomes our (global) source functor.
The local source bottom class (<ref>) is then such a characteristic function.
A priori, one can only define Eisenstein classes integrally by taking integral linear combinations of torsion sections determined by the level structure of the modular curves. The main result of <cit.> upgrades this association to all integral Schwartz functions, which justifies our use of these function spaces as source functors for the zeta element problem.
To construct classes in a tower, we can consider the torus 𝐓 = 𝔾_m which admits a map ν : 𝐇→𝐓 given by sending a pair of matrices to their common determinant.
As above, we consider the embedding 𝐇↪𝐆×𝐓, which in this case also factors through 𝐆. With this choice, the map induced by μ_h : 𝔾_m→𝐓 is the identity, i.e., locally at a prime ℓ, the pullback action of ℓ∈𝐓(ℚ_ℓ) corresponds to the action of geometric Frobenius. Then 1 + ℓℤ_ℓ⊂𝐓(ℤ_ℓ) provides us with a `layer extension' of degree ℓ - 1. Under the reciprocity law for 𝐓, these layer extensions correspond to the ray class extensions of ℚ of degree ℓ - 1.
Although the zeta element problem is only of interest over _ℓ, we have chosen to work with an arbitrary local field for consistency of notation.
The question of whether this construction leads to a non-trivial Euler system is addressed
in <cit.>.
As the arithmetic middle degree is even, one may ask if interesting classes can be constructed in this degree via special cycles. Such a setup was proposed in <cit.> which allows one to construct classes over an imaginary quadratic field. It would be interesting to see if this construction indeed sees the behaviour of an L-function.
§ STANDARD L-FACTOR OF GL_n
In this section, we study the zeta element problem for the split case of the embedding discussed in <ref>.
The symbols F, 𝒪_F, ϖ, 𝕜, q and [𝕜] have the same meaning as in Notation <ref>.
The letter 𝐆 will denote the group scheme 𝔾_m×GL_n over 𝒪_F where n is a positive integer and is assumed to be even from <ref> onwards. We will denote G : = 𝐆 (F) and K : = 𝐆(𝒪_F). For a ring R, we let ℋ_R = ℋ_R(K \ G / K) denote the Hecke algebra of G of level K with coefficients in R with respect to a Haar measure μ_G such that μ_G(K)=1. For simplicity, we will often denote the characteristic function ch(K σ K ) ∈ℋ_R simply as (K σ K ).
§.§ Desiderata
Let 𝐀 = _m^n+1 and dis : 𝐀→𝐆 be the embedding given by
(u_0, u_1, …, u_n ) ↦ ( u_0, diag( u_1 , … , u_n ) ) .
Then dis
identifies 𝐀 with a maximal torus in 𝐆. We denote A : = 𝐀(F) the F-points of A and A ^∘ : = A ∩ K the unique maximal compact subgroup. For i = 0, … n, let e_i : 𝐀→_m be the projection on the i-th component and f_i : _m→𝐀 be the cocharacter inserting u in the i-th component of 𝐀. We will denote by Λ the cocharacter lattice f_0⊕⋯⊕ f_n. The element a_0 f_0 + … + a_n f_n∈Λ will also be denoted as (a_0, …, a_n ). The set Φ⊂ X^*(𝐀 ) of roots of 𝐆 are ± ( e_i - e_j ) for 1 ≤ i < j ≤ n
which constitutes an irreducible root system of type A_n-1. We let Δ = {α_1, …, α_n-1}⊂Φ where
α_1 = e_1 - e_2, α_2 = e_2 - e_3, …, α_n-1 = e_n-1 - e_n .
Then Δ constitutes a base for Φ. We let Φ ^ + ⊂Φ denote the set of resulting positive roots. The half sum of positive roots is then
δ : = 1/2∑_k=1^n (n-2k+1) e_k
With respect to the ordering induced by Δ, the highest root is α_0 = e_1 - e_n. We let I = I_G be the standard Iwahori subgroup of G, which corresponds to the alcove determined by the simple affine roots α_1 + 0, α_2 + 0, …, α_n-1 + 0, - α_0 + 1. The coroots corresponding to α_i are
α_0 ^ ∨ = f_1 - f_n , α_1^∨ = f_1 - f_2, α_2^∨ = f_2 - f_3, …, α_n-1^∨ = f_n-1 - f_n
and their span in Λ is denoted by Q ^∨. An element λ = (a_0 , …, a_n ) ∈Λ is dominant iff a_1≥ a_2≥…≥ a_n and anti-dominant if all these inequalities hold in reverse. We denote the set of dominant cocharacters by Λ^+. The translation action of λ∈Λ on Λ⊗ via x ↦ x + λ is denoted by t(λ). We denote ϖ^λ∈ A the element λ(ϖ ) for λ∈Λ and v : A / A ^∘→Λ be the inverse of the map Λ→ A / A ^∘, λ↦ϖ^-λ A^∘. Let s_i be the reflection associated with α_i for i = 0, …, n-1. The action of s_i on Λ is given explicitly as follows:
* s_i acts by the transposition f_i↔ f_i+1 for i = 1,2, …, n - 1
* s_0 acts by transposition f_1↔ f_n.
For λ∈Λ, we let e ^ λ∈ [ Λ ] denote the element corresponding to λ and e ^ W λ∈ [ Λ ] denote the element obtained by taking the formal sum of elements in the orbit W λ.
Let S_aff = { s_1 , s_2 , … , s_n-1 , t ( α _ 0 ^ ∨ ) s_0} and W, W_aff, W_I be the Weyl, affine Weyl and Iwahori Weyl groups respectively determined by A. We consider W_aff as a subgroup of affine transformations of Λ⊗. We have
* W = ⟨ s_1 , …, s_n-1⟩≅ S_n,
* W_aff = t(Q^∨ ) ⋊ W
* W_I = N_G(A) / A^∘ = A/A^∘⋊ W ≃Λ⋊ W (via v)
where v is the map (<ref>).
The pair (W_aff, S_aff ) forms a Coxeter system of type Ã_n-1. We consider W_aff a subgroup of W_I via W_aff≃ Q^∨⋊ W ↪Λ⋊ W v≃ W_I.
The natural action of W_aff on Λ⊗ then extends to W_I with λ∈Λ acting as a translation t(λ). We set Ω : = W_I / W_aff, which is a free abelian group on two generators and we have W_I≅ W_aff⋊Ω.
We let ℓ : W_I→ℤ denote the induced length function with respect to S_aff. Given λ∈Λ, the minimal length ℓ_min(t(λ)) of elements in the coset t ( λ ) W is achieved by a unique element. This length can be computed using Lemma <ref>. We let
w_1 : =
[ 0 1 ; 1 0 ; 1 ; ⋱ ; 1 ; 1 ],
w_2 : = [ 1 ; 0 1 ; 1 0 ; 1 ; ⋱ ; 1 ], … , w_n-1 : = [ 1 ; 1 ; ⋱ ; 1 ; 0 1; 1 0 ],
w_0 : = [ 0 1/ϖ; 1 ; 1; ⋱; 1; ϖ 0 ], ρ =
[ 0 1 ; 0 1 ; 0 1 ; ⋱ ⋱ ; 0 1; ϖ 0 ]
which we consider as elements of N_G(A) (the normalizer of A in G) whose component in 𝔾_m is 1. The classes of w_0 , w_1 , …, w_n-1 in W_I represent t(α_0 ^∨ ) s_0, s_1 , … , s_n-1 respectively and the class of ρ is a generator of Ω / ⟨ t( f_0 ) ⟩. The reflection s_0 in α_0 is then represented by w_α_0 : = ϖ^f_1 w_0. We will henceforth use the letters w_i, ρ to denote both the matrices and their classes in W_I if no confusion can arise. We note that conjugation by ρ on W_I acts by cycling the (classes of) generators via w_n-1→ w_n-2→…→ w_1→ w_0→ w_n-1, thereby inducing an automorphism of the extended Coxeter-Dynkin diagram
of type Ã_n-1 (a cycle on n vertices labeled 0, 1, 2, …, n-2, n-1), where the label of a vertex corresponds to the index of the corresponding w_i. Note also that ρ^n = ϖ^(1,1,…,1)∈ A is central.
For i = 0,1, …, n - 1, let
x_i : _a→𝐆 be the root group maps defined by
x_1 : u ↦[ 1 u ; 1 ; ⋱ ; 1 ; 1 ], x_2 : u ↦[ 1 ; 1 u ; 1 ; ⋱ ; 1 ],
…,
x_n-1 : u ↦[ 1 ; 1 ; ⋱ ; 1 u; 1 ]
,
x_0 : u ↦[ 1 ; 1 ; ⋱ ; 1 ; ϖ u 1 ]
where again the matrices are considered as elements of 𝐆 with 1 in the 𝔾_m component. Let g_w_i : [ 𝕜 ] → G be the maps κ↦ x_i(κ ) w_i. Then I w_i I = ⊔_κ∈ [𝕜] g_w_i(κ) I. For w ∈ W_I such that w is the unique minimal length element in the coset w W, choose a reduced word decomposition w = s_w,1 s_w,2⋯ s_w, ℓ(w) ρ_w where s_w,i∈ S_aff, ρ_w∈Ω. Define
𝒳_w : [ 𝕜 ] ^ ℓ(w) → G / K
(κ_1 , …, κ_ℓ(w) ) ↦ g_s_w,1 ( κ_1 ) ⋯ g_s_w, ℓ(w) ( κ_ℓ (w) ) ρ_w K
where we have suppressed the dependence on the decomposition chosen in the notation. By Theorem
<ref>, the image of 𝒳_w is independent of the choice of decomposition and # im ( 𝒳_w ) = q ^ ℓ(w).
We note that ℓ(w) = ℓ_min( t(-λ_w ) ) where λ_w∈Λ is the unique cocharacter such that w K = ϖ^λ_w K.
Cf. the matrices in <cit.>.
§.§ Standard Hecke polynomial
Let ℛ=ℛ_q denote the ring ℤ [ q^±1/2 ] and let y_i:=e^f_i∈ℛ[Λ] denote the element corresponding to f_i. Then ℛ[Λ]=ℛ[y_0^±, ⋯, y_n^±]. We are interested in the characteristic polynomial of the standard representation of the dual group 𝐆̂ = 𝔾_m×GL_n, whose highest coweight is μ_std =f_0+f_1. Note that μ_std is the cocharacter obtained from the Shimura data in <ref>.
Since μ_std is minuscule,
the (co)weights of the associated representation are the elements in the Weyl orbit of μ_std. These are f_0+f_1, f_0+f_2 , … , f_0 + f_n.
The Satake polynomial (see Definition <ref>) for μ_std is therefore
𝔖_std(X)=(1-y_0 y_1 X)(1-y_0 y_2 X) ⋯ ( 1 - y_0 y_n X ) ∈ [ Λ ] ^ W [X]
As in <ref>, we let 𝒮: ℋ_ℛ→ℛ[Λ]^W denote the Satake isomorphism.
The polynomial ℌ_std, c(X) ∈ℋ_ℛ[ X] is defined so that 𝒮(ℌ_std, c(X))=𝔖_std(q^-c/2 X)
for any c ∈ℤ.
Let ϱ = ϖ^f_0ρ∈ N_G(A). Then
ℌ_std, c(X)= ∑_k=0^n(-1)^k q^-k(n-k+c)/2(K ϱ^k K) X^k .
In particular if n is even and c is odd, ℌ_std,c(X) ∈ℋ_[q^-1][X].
Let p_k = p_k(y_1, …, y_n) ∈[Λ]^W denote the k-th elementary symmetric polynomial in y_1 , …, y _n.
Then 𝔖_std(X) = ∑_k=0^n (-1)^k y_0^k p_k X^k. So it suffices to establish that
𝒮 ( K ϱ^k K ) = q^k(n-k)/2 y_0^k p_k .
For k ≥ 1, set μ _k : = f_0 + f_1 + … + f_k∈Λ ^ +. Then, K ϱ ^k K = K ϖ^μ_ k K as double cosets. But the μ _k are themselves minuscule. Therefore, Corollary <ref> and the second part of Corollary <ref> together imply that 𝒮 ( K ϖ^μ _ k K ) is supported on y_0^k p_k and that the coefficient of y_0^k p_k is q^⟨μ _ k , δ⟩ where δ is as in (<ref>). One easily calculates that ⟨μ _k, δ⟩ = k(n-k)/2.
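As a sanity check (not part of the proof), the elementary-symmetric expansion used above can be verified symbolically; the short Python (SymPy) snippet below does this for n = 4 with ad hoc variable names. Combined with 𝒮(K ϱ^k K) = q^{k(n-k)/2} y_0^k p_k, the stated formula for ℌ_std,c(X) then follows by substituting X ↦ q^{-c/2} X.

# Minimal symbolic check, assuming nothing beyond the definitions above:
# expand prod_i (1 - y0*y_i*X) and compare with sum_k (-1)^k y0^k p_k X^k.
from math import prod
from itertools import combinations
from sympy import symbols, expand

n = 4
X, y0 = symbols('X y0')
ys = symbols('y1:5')                                  # y1, ..., y4

S_std = expand(prod([1 - y0*yi*X for yi in ys]))      # the Satake polynomial

def p(k):                                             # k-th elementary symmetric poly
    return sum(prod(c) for c in combinations(ys, k))

rhs = expand(sum((-1)**k * y0**k * p(k) * X**k for k in range(n + 1)))
assert expand(S_std - rhs) == 0
print("S_std(X) = sum_k (-1)^k y0^k p_k X^k verified for n = 4")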
The formula for ℌ_std,c was first obtained by Tamagawa <cit.> and the case n = 2 is due to Hecke <cit.>, hence the terminology `Hecke polynomial' – see the note at the bottom of <cit.> and the historical commentary in 4, 8 of <cit.>. Cf. <cit.>.
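For instance, when n = 2 the proposition specializes to ℌ_std,c(X) = (K) - q^{-(1+c)/2}(K ϱ K) X + q^{-c}(K ϱ^2 K) X^2, which, up to the 𝔾_m-factor and the normalization by c, has the shape of Hecke's classical quadratic polynomial.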
An alternate proof of Proposition <ref> that does not use Corollary <ref> may be obtained using the decomposition of K ρ ^ k K described in Proposition <ref> which is closer in spirit to the proof by Tamagawa.
§.§ Decomposition of minuscule operators
In this section, we study the decomposition of Hecke operators K ϱ ^ k
K for k ∈{ 1, …, n } into individual left cosets. Here ϱ = ϖ^f_0ρ as above. Since (ϖ^k,1) ∈ G is central, it suffices to describe the decomposition K ρ ^k K, so that the left coset representatives γ will have 1 in the _m-component.
Let k be an integer satisfying 1 ≤ k ≤ n. A Schubert symbol of length k is a k-element subset 𝐣 of [n] : = { 1 , …, n }. We write the elements of 𝐣 = { j_1, …, j_k} such that j_1 < ⋯ < j_k.
The dimension of 𝐣 is defined to be dim 𝐣 = j_1 + … + j_k - \binom{k+1}{2}. The set of Schubert symbols of length k is denoted by J _ k. We have | J _k | = \binom{n}{k}.
We define a partial order ≼ on J_k by declaring 𝐣≼𝐣' for symbols 𝐣 = { j_1, …, j_k}, 𝐣' = { j_1', …, j_k' } if j_i≤ j_i' for all i = 1 , …, k. Then (J_k, ≼) is a lattice (in the sense of order theory). The smallest
and the largest elements of J_k are {1, …, k} and {n-k+1, …, n } respectively. We assign a grading to J_k so that the smallest element has degree 0.
For 𝐣∈ J_k, the Schubert cell 𝒞_𝐣 is the finite subset of Mat_n × k ( F ) consisting of all n × k matrices C such that
* C has 1 in the ( j_i , i )-entry for i = 1, …, k; these entries are referred to as pivots,
* the entries of C that are below or to the right of a pivot are zero,
* C has entries in [𝕜] ⊂𝒪_F elsewhere.
Then | 𝒞_𝐣 | = q ^ dim 𝐣. Given C ∈𝒞_𝐣, we let φ_𝐣(C) ∈GL_n(F) be the n × n matrix obtained by inserting the i-th column of C in the j_i-th column of φ_𝐣(C), making the rest of the diagonal entries ϖ and inserting zeros elsewhere.
We let 𝒳_𝐣⊂GL_n(F) denote the image φ_𝐣(𝒞_𝐣)
and consider 𝒳_𝐣⊂ G by taking 1 in the 𝔾_m-component.
Let n = 4, k = 2. Then the Schubert cells are
𝒞_{ 1 , 2 } = [ 1 ; 1; ; ], 𝒞_ { 1,3 } = [ 1 ; *; 1; ] , 𝒞_{2,3 } = [ * *; 1 ; 1; ]
𝒞 _ { 1 , 4 } = [ 1 ; *; *; 1 ], 𝒞_ { 2 , 4 } = [ * *; 1 ; *; 1 ] , 𝒞 _ { 3, 4 } = [ * *; *; 1 ; 1 ]
where the star entries are elements of [] and zeros are omitted. The corresponding collections 𝒳_ 𝐣 are
𝒳_{1,2}
=0.9[ 1 ; 1 ; ϖ ; ϖ ],
𝒳_{1,3}
=
0.9[ 1 ; ϖ * ; 1 ; ϖ ],
𝒳_{2,3 }
=
0.9[ ϖ * *; 1 ; 1 ; ϖ ],
𝒳_{1,4}
=
0.9[ 1 ; ϖ *; ϖ *; 1 ],
𝒳_{2, 4}
=
0.9[ ϖ * *; 1 ; ϖ *; 1 ],
𝒳_{3,4}
=
0.9[ ϖ * *; ϖ * *; 1 ; 1 ]
We have a total of 1 + q + q ^ 2 + q ^ 2 + q ^ 3 + q ^ 4 matrices in these six sets.
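The count quoted above can be double-checked: the dimensions of the six Schubert symbols sum, as powers of q, to the Gaussian binomial coefficient [4 choose 2]_q, i.e. the number of 𝔽_q-points of the Grassmannian Gr(2,4). The following short Python (SymPy) snippet — an illustrative check, not part of the argument — verifies this.

# Check: sum over j in J_2 of q^{dim j} equals the Gaussian binomial [4 2]_q.
from itertools import combinations
from sympy import symbols, expand, cancel

q = symbols('q')
n, k = 4, 2

def dim(j):                                   # dim j = j_1 + ... + j_k - k(k+1)/2
    return sum(j) - k * (k + 1) // 2

total = sum(q**dim(j) for j in combinations(range(1, n + 1), k))
gauss = cancel((q**4 - 1) * (q**3 - 1) / ((q**2 - 1) * (q - 1)))   # [4 2]_q
assert expand(total - gauss) == 0
print(expand(total))                          # q**4 + q**3 + 2*q**2 + q + 1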
For 1 ≤ k ≤ n, K ρ^k K = ⊔_𝐣∈ J_k ⊔_γ∈𝒳_𝐣 γ K .
Let λ _ k = ∑_i=1^k f_ n - k+i∈Λ ^-. We have ρ^k W ρ^-k = ⟨ S_aff∖ w_n - k⟩ and therefore
W ∩ρ ^k W ρ^-k = Stab_W ( λ _ k ).
By Theorem <ref>,
Kρ^k K = ⊔_w ∈ [ W / W ^λ_k ] im ( 𝒳_w ρ ^ k ) .
where W^λ_k := Stab_W(λ_k) and [ W / W^λ_k ] denotes the set of representatives in W of W / W^λ_k of minimal possible length.
For λ∈ W λ_k, let 𝐣(λ ) ∈ J_n-k be the Schubert symbol consisting of integers 1 ≤ j_1 < … < j_n-k≤ n such that the coefficient f_j_i in λ is 0. If w ∈ [W / W ^λ_k ] and λ = w λ_k ∈ W λ_k, we let 𝐣(w) : = 𝐣( w λ_k ). We let ≼ denote the left (weak) Bruhat order on W with respect to S. Then (W, ≼) is a graded lattice with grading given by length.
Claim 1.
The map w ↦𝐣(w) sets up an order preserving bijection [W / W^λ_k ] J_n-k.
The set W / W ^λ_k is in one-to-one correspondence with the orbit W λ_k⊂Λ. The orbit consists of the \binom{n}{k} permutations of the cocharacter λ_k = f_n-k+1 + ⋯ + f_n. Picking a permutation of λ_k in turn is the same thing as choosing n-k integers 1 ≤ j_1 < … < j_n-k≤ n such that f_j_1 , …, f_j_n-k have coefficient zero in the permutation of λ_k. This establishes the bijectivity of w ↦𝐣(w). The identity element is mapped to {1 , …, n-k } and one establishes by induction on the length that the mapping preserves the order.
Claim 2. For all w ∈ [W / W ^ λ _ k ],
im ( 𝒳_w ρ^k ) = {γ K | γ∈𝒳_𝐣(w) }.
We proceed by induction on the length of w. If w is of length 0, then w is the identity element and 𝐣 = 𝐣_λ_k = { 1, …, n-k }. Now im(𝒳_ρ^k ) = {ρ ^ k K } is a singleton and 𝒳_𝐣 = {ϖ^λ_k}. As ρ ^ k K = ϖ ^ λ_k K, the base case holds. Now suppose that the claim holds for all w ∈ [W / W ^ λ _k ] of length m. Let v= s w where s ∈{ s_1 , … , s_n-1}, w ∈ [W / W ^λ_k] such that ℓ(v) = ℓ(w) + 1 and ℓ(w) = m. Let 𝐣_v, 𝐣_w be the Schubert symbols corresponding to v, w respectively. By Claim 1,
there exists a unique j ∈{ 1, …, n-1 } such that j ∈𝐣_w, j +1 ∈𝐣_v and 𝐣_w∖{ j } = 𝐣_v∖{ j + 1 }. If σ K ∈𝒳_w ρ^k, then σ K = φ_𝐣(w)(C) K for some C ∈𝒞_𝐣_w by the induction hypothesis. Denote τ := φ_𝐣(w)(C). By definition, τ ( j , j ) = 1, τ ( j+1,j+1 ) = ϖ and τ ( j,j_1 ) = τ ( j_2, j ) = τ ( j+1, j_3 ) = 0 for j_1 , j_2 > j, j_3≠ j + 1.
(Schematically, τ has 1 in the (j,j) entry, ϖ in the (j+1,j+1) entry, and zeros in the remaining entries of the j-th row, of the j-th column below the diagonal, and of the (j+1)-st row away from the diagonal.)
Then g_w_j(κ) τ K = x_j(κ) w_jτ w_j K, i.e., the effect of multiplying τ K by g_w_j(κ) is to switch the rows and columns in indices j and j + 1 and then add κ times the (j+1)-st row to the j-th row. Clearly, x_j(κ) w_jτ w_j∈𝒳_𝐣(v). Since σ K was arbitrary, we see that im(𝒳_vρ^k ) = {γ K | γ∈𝒳 _ 𝐣(v) } for v ∈ [W / W ^ λ_k ] with ℓ(v) = m + 1. By induction, we get the claim.
This can also be proved directly by appealing to the stratification of the Grassmannian that parametrizes (n-k)-dimensional subspaces in an n-dimensional vector space over a finite field.
§.§ Mixed decompositions
From now on, let n = 2m be even. If g ∈_2m(F), we will denote by A_g , B_g, C_g, D_g∈Mat_m × m (F) so that
g = [ A_g B_g; C_g D_g ] .
If g ∈ G, then A_g, B_g, C_g, D_g denote the matrices associated with the GL_2m(F) component of g. Moreover, we adopt the following convention:
an element of GL_2m(F) is considered as an element of G via the embedding GL_2m(F) ↪ G in the second component.
Let ι : 𝐇↪𝐆 be the inclusion of the subgroup generated by 𝐀 and the root groups of Δ∖{α_m}. Then 𝐇≃𝔾_m×GL_m×GL_m, embedded block diagonally in 𝐆. We denote H = 𝐇(F), U = H ∩ K and by H_1 = H_2≃GL_m(F) the two components, so that H = F^×× H_1× H_2. If h ∈ H, we denote by h_1, h_2 the components of h in H_1, H_2 respectively. We let W _ H ≃ S_m× S_m denote the Weyl group of H, which we consider as the subgroup of W generated by s_1 , …, s_m-1, s_m+1 , …, s_2m-1. The roots of H are denoted by Φ_H. These are ± (e_i - e_j ) for 1 ≤ i < j ≤ m and for m+1 ≤ i < j ≤ 2m, and we have a partition Φ_H = Φ_H_1⊔Φ_H_2 into a union of two root systems isomorphic to A_m-1. For α = e_i - e_j∈Φ_H and k ∈ℤ, we let U_α, k denote the unipotent subgroup of H with 1's on the diagonal and zeros elsewhere except for the (i,j) entry, which is required to have ϖ-adic valuation greater than or equal to k.
For k = 0, … , 2m, let P_k denote the set of pairs (k_1, k_2 ) of non-negative integers such that k_1 + k_2 = k and k_1,k_2≤ m. For κ = ( k_1, k_2 ) ∈ P_k, denote l(κ) : = min (k_1 , m - k _2 ) and let
λ _κ : = ∑_ i = 1 ^k_1 f_i + ∑_ j = m-k_2 + 1 ^ m f_m+j∈Λ
For i = 0, …, m, let
t_i := diag( ϖ^-1…, ϖ^-1_ i , 0 , …, 0 _ m - i ) ∈Mat_m × m(F) and
τ_i : = [ 1_m t_i; 1_m ]∈_2m (F).
Set H_τ_i : = H ∩τ_i K τ_i^-1. For g ∈ G, let U ϖ^Λ g K denote the set of all double cosets U ϖ ^ λ g K for λ∈Λ.
For i = 0, … , m, the collections U ϖ ^ Λτ_i K are disjoint.
It suffices to show that the H τ_i K are distinct double cosets. Suppose for the sake of contradiction that τ_i∈ H τ_j K for some i ≠ j. Then τ_i ^ -1 h τ_j∈ K for some h ∈ H. Say h = (u, h_1 , h_2 ). Now
τ_i ^-1 (h_1, h_2 ) τ_j = [ h_1 h_1 t_j - t_i h_2; h_2 ]
and therefore τ_i ^-1 h τ_j∈ K implies that h_1, h_2∈_m ( _F ) and h_1 t_j - t_i h_2∈ Mat _m × m ( _F ). But the second condition implies that the reduction modulo ϖ of one of h_1, h_2 is singular (the determinant vanishes modulo ϖ), which contradicts the first condition.
For each k = 0, 1, …, 2m,
(K ρ ^k K) = ∑_κ∈ P_k∑ _ i = 0 ^ l (κ) ( U ϖ ^ λ_κτ_i K) .
We first claim that for each k = 0 , 1, …, 2m, the double cosets U ϖ^λ_κτ_i K for distinct choices of κ∈ P_k and i = 0 , 1, …, l (κ) are pairwise distinct. By Lemma <ref>, two such cosets are disjoint for distinct i, so it suffices to distinguish the cosets for different κ but fixed i. By Lemma <ref>, it suffices to show that the U ϖ^λ_κ H_τ_i are pairwise disjoint for κ∈ P_k. Since H_τ_i⊂ U, it in turn suffices to show that the U ϖ^λ_κ U are pairwise disjoint for κ∈ P _ k. But this follows by the Cartan decomposition for H.
Fix a k. For κ = (k_1 , k_2 ) ∈ P _ k, let 𝐣 = {1,…, k_1}∪{ 2m - k_2 + 1, …, 2m }. From the description of the Schubert cell 𝒳_𝐣 and Proposition <ref>, it is easy to see that ϖ^λ_κτ_i K ⊂ K ρ ^ k K (and therefore U ϖ^λ_κτ_i K ⊂ K ρ^k K) for all κ∈ P_k, 0≤ i ≤ l(κ). So to prove the claim at hand, it suffices to show that for any γ∈ G such that γ K ⊂ K ρ^k K, there exist κ and i such that U γ K = U ϖ^λ_κτ_i K. By Proposition <ref>, it suffices to restrict attention to γ∈𝒳_𝐣 for some Schubert symbol 𝐣∈ J_k. Furthermore, since any γ∈𝒳_𝐣 has non-zero non-diagonal entries only above a pivot and these entries are in 𝒪_F, we can replace γ by an element γ ' such that A_γ', D_γ' are diagonal matrices and U γ K = U γ' K. Let us define a set 𝒴_𝐣⊂GL_2m(𝒪_F ) that contains all such γ ' as follows. An element g ∈ G lies in 𝒴_𝐣 if
* the diagonal of g has 1 (referred to as pivots) in positions (j,j) for j ∈𝐣 and ϖ if j ∉𝐣,
* A_g, D_g are diagonal matrices and C_g = 0,
* B_g has non-zero entries only in columns of g that contain a pivot and rows that do not.
For any 𝐣∈ J_k, let 𝐣_1 (resp., 𝐣_2 ) denote the subset of elements not greater than m (resp., strictly greater than m) and let κ( 𝐣 ) : = ( | 𝐣_1 | , | 𝐣_2 | ) ∈ P_k. It suffices to establish the following.
Claim. For any Schubert symbol 𝐣∈ J_k and any γ∈𝒴_𝐣 there exists an integer i ∈{0, 1, …, l(κ ( 𝐣 ) ) } such that U γ K = U ϖ^λ_κτ_i K.
We prove this by induction on m. The case m = 1 is straightforward. Assume the truth of the claim for some positive integer m - 1 ≥ 1. If 𝐣_1 = ∅, then A_γ = I_m, B_γ = C_γ = 0 and D_γ is diagonal. Since w_m+1, …, w_2m lie in both U and K, one can put all the k ≤ m pivots in the top diagonal entries of D_γ and we are done. We can similarly rule out the case 𝐣_2 = { m+1, …, 2m }. Finally, if B_γ = 0, we can again use reflections in H to rearrange the A_γ and D_γ diagonal entries to match ϖ^λ_l,k.
So suppose that k _ 1 : = | 𝐣_1| > 0,
k_2 : = | 𝐣 _2 | < m and B_γ≠ 0. Pick j_1∈𝐣_1 such that the j_1-th row of B_γ is non-zero and let j_2∉𝐣_2, m+1 ≤ j_2≤ 2m be such that the (j_1, j_2 ) entry of γ in B_γ is not 0. If j_1≠ 1, then using row and columns operations, one can switch the first and j _ 1-th row and columns to obtain a new matrix γ '. Clearly, γ ' is an element of 𝒴_𝐣 ' for some new 𝐣', U γ K = U γ ' K and the (1,j_2) entry of γ ' is non-zero. Similarly if j_2≠ m+1, we can produce a matrix using row and columns operations so that (m+1, m+1) diagonal entry of the new matrix is 1 and the class of this matrix in U \ G / K is the same as γ. The upshot is that we may safely assume that j_1 = 1, j_2 = m+1 (so in particular, 1 ∈𝐣, m+1 ∉𝐣).
(Schematically: γ has ϖ in the (j_1, j_1) entry, 1 in the (j_2, j_2) entry and a non-zero entry, marked □, in position (j_1, j_2); after switching the relevant rows and columns one obtains γ' with ϖ in the (1,1) entry, 1 in the (m+1, m+1) entry and the non-zero entry in position (1, m+1).)
Since the (1, m+1) entry of γ (the top left entry of the block B_γ) is non-zero, we can use elementary operations for rows and columns with labels in 𝐣_2[the non-zero columns of B_γ are above a pivot of γ] to make all the other entries of the first row of B_γ zero and keep D_γ a diagonal matrix. The column operations may change the other rows of B_γ but the new matrix still belongs to 𝒴_𝐣 and
has same class in U \ G / K. Similarly, we can use elementary operations for rows and columns with labels in {1, …, m }∖𝐣_1 to make all the entries below (1,m+1) in B_γ equal to zero, while keeping A_γ a diagonal matrix.
Finally, conjugating by an appropriate element of the compact diagonal A^∘⊂ U, we can also assume that the top left entry of B_γ is 1.
In summary, we have arrived at a matrix that has the same class in U \ G / K as the original γ and has zeros in rows and columns labeled 1, m + 1 except for the diagonal entries in positions (1,1), (m+1, m+1), (1,m+1) which are ϖ, 1, 1 respectively. The submatrix obtained by deleting the first and (m+1)-th rows and columns is a (2m-2) × (2m-2) matrix in 𝒴_𝐣' for some 𝐣' of cardinality k-1. By induction, this matrix can be put into the desired form using the groups U and K associated with _m×_2m - 2. The possible value of i that can appear from this submatrix have to be at most max(k_1 - 1, m-1+k_2) by induction hypothesis and therefore the bound for possible i holds for m as well. This completes the proof.
Suppose
m = 2 and k = 2, so that P_k = { (2,0), (1,1) , (0,2) }. Proposition <ref> says that
K [ ϖ ; ϖ; 1; 1 ] K = U [ ϖ; ϖ; 1; 1 ] K
+ U
[ ϖ 1; ϖ; 1; 1 ] K + U [ ϖ 1; ϖ 1; 1; 1 ] K
+ U [ ϖ; 1; 1; ϖ ] K
+ U [ ϖ 1; 1; 1 ; ϖ ] K + U [ 1 ; 1; ϖ; ϖ ] K
§.§ Mixed degrees
For 1 ≤ r ≤ m, let 𝒳_r : = GL_r(F). We have inclusions 𝒳_1↪𝒳_2↪…↪𝒳_m obtained by considering a matrix σ∈𝒳_r as an (r+1) × (r+1) matrix whose top left r × r submatrix is σ, which has 1 in the last diagonal entry and zeros elsewhere. For each r, let
j_r : 𝒳_ r → G, σ↦ι( σ , σ ) = [ σ ; σ ]∈ G
where σ is considered as an element of H_1, H_2 as above, so that j_ r factorizes as 𝒳_ r ↪𝒳_m→ G. We henceforth consider all 𝒳_ r as subgroups of G and omit j_r unless necessary. We denote 𝒳_r ^∘ = 𝒳_r∩ K ≃GL_r (𝒪_F ).
For α = e_i - e_j∈Φ_H, k ∈ℤ, let U_α, k be the unipotent subgroup of matrices h ∈ H such that the diagonal entries of h are 1, the (i,j) entry of h has valuation at least k and all other entries are 0. For each r ≥ 1, let ψ_r : Φ_H→ℤ be the function
ψ_r (α ) = 1 if α∈{ e_i - e_j∈Φ_H | either 1 ≤ j ≤ r or m + 1 ≤ i ≤ m + r },
ψ_r (α ) = 0 otherwise,
and let H _ ψ_ r be the subgroup generated by the U_α, ψ_ r (α) and A ∩τ_r K τ_ r ^-1.
More explicitly, H_ψ_ r
is the subgroup of elements (v,h_1, h_2) ∈ U satisfying the three
conditions below:
* all the non-diagonal entries in the first r columns of h_1 are divisible by ϖ,
* all non-diagonal entries in the first r rows of h_2 are divisible by ϖ,
* the difference of the j and j+ m -th diagonal entries of h = ( h_1 , h_2 ) ∈ G is divisible by ϖ for all j = 1, …, r.
H_τ_ r = 𝒳_ r ^ ∘ H_ψ_ r = H_ψ_ r 𝒳_ r ^ ∘ for r = 1, … , m.
The 𝔾_m component on both sides is 𝒪_F^× and we may therefore ignore it. Let h = ( h_1 , h_2 ) ∈ H. Then h ∈ H_τ_ r if and only if h ∈ U and
h_1 t _ r - t _ r h_2∈Mat_ m × m ( ϖ_F )
(see the calculation in Lemma <ref>). It is then clear that H_τ_r ⊃𝒳_r^∘· H_ψ_ r. Let h = (h_1 , h_2 ) ∈ H_τ_ r. From the description of H_τ_r, we see that the r × r submatrix σ formed by the first r rows and columns of h_1 must be invertible (and similarly for h_2). Then j_r (σ^-1) · h has its top r × r block equal to the identity matrix. Since this matrix lies in H_τ_ r, we see again from the
description of elements of H_τ_ r that j_r ( σ ^-1 ) h ∈ H_ψ_r. This implies the reverse inclusion H_τ_r ⊂𝒳^∘ _ r H_ψ_ r. Since the product of 𝒳_r ^∘ and H_ψ_ r is a group, 𝒳_r ^∘ H_ψ_r = H_ψ_r 𝒳_r^∘.
Recall that Φ_H = Φ_H_1⊔Φ_H_2.
Declare α_1 , … , α_m-1∈Φ_H_1 and - α_m+1 , …, - α_2m-1∈Φ_H_2 to be the simple roots of Φ_H, thereby fixing a set of positive roots. Then α_1,0 : = e_1 - e_m∈Φ_H_1, α_2, 0 := e_2m - e_m+1∈Φ_H_2 are the highest roots. Let s_1, 0, s_2, 0∈ W_H denote the reflections associated with α_1 , 0, α_2, 0 respectively. Then the affine Weyl group W_H, aff (as a subgroup of W_aff) is generated by
S_H, aff = { t( α_1, 0 ^ ∨ ) s_1,0 , s_1, …, s_m-1}⊔{ t( α_2, 0 ^ ∨ ) s_2, 0 , s_m+1, …, s_2m-1}
and (W_H, aff , S_H, aff ) is a Coxeter system of type Ã_m-1×Ã_m-1. We denote by ℓ_H : W_H→ℤ the resulting length function. The extended Coxeter-Dynkin diagram has two components
(two copies of the extended Dynkin diagram of type Ã_m-1: a cycle with vertices labeled 0_1, 1, 2, …, m-2, m-1 and a cycle with vertices labeled 0_2, m+1, m+2, …, 2m-2, 2m-1)
where the labels 0_1, 0_2 correspond to the two affine reflections corresponding to α_0,1, α_0,2.
Now let I _ H_1 (resp., I_H_2) be the Iwahori subgroup of H _ 1 (resp., H_2 ) consisting of integral matrices that reduce modulo ϖ to upper triangular (resp., lower triangular) matrices and set I_H : = _F^× × I_H_1× I_H_2. Then I_H is the Iwahori subgroup associated with alcove determined by S_H, aff. We let
ρ_1 : = [ 0 1 ; 0 1 ; 0 1 ; ⋱ ⋱ ; 0 1; ϖ 0 ]∈ H_1, ρ_2 : = [ 0 ϖ; 1 0 ; 1 0; 1 0 ; ⋱ ⋱ ; 1 0 ]∈ H_2
(so we have ρ_1 = ρ_2^t). Both ρ_1, ρ_2 normalize I_H and the effect of conjugation w ↦ρ_1 w ρ_1^-1 (resp., w ↦ρ_2 w ρ_2^-1) is by cycling in clockwise (resp., counterclockwise) direction the left (resp., right) component of the diagram displayed in (<ref>). We set ρ_H : = (ρ_1, ρ_2 ) ∈ H and for κ = (k_1 , k_2 ) ∈ ^ 2, we denote by ρ_H^κ the element ( ρ_1 ^k_1 , ρ^k_2 _ 2 ) ∈ H. We will denote by - κ the pair (-k_1, - k _2 ).
For r = 0 , … , m, let I_H, r denote the subgroup of H which contains I_H and whose Weyl group W_ H , r ⊂ W_H is generated by S_H, r : = { s_r+1, … , s_m - 1 , s_m+r+1 , … , s_2m - 1 }. More explicitly, I_H,r is the subgroup of U consisting of all matrices as below
(schematically, a block diagonal element (h_1, h_2) of U, with an r × r triangular region marked in the h_1 block and an r × r triangular region marked in the h_2 block)
such that the non-diagonal entries inside the two triangles are divisible by ϖ.
For any k = 0, …, 2m, κ∈ P_k and r = 0,…, l ( κ), we have H _ τ_ r ϖ^-λ_κ U = I _ H , r ρ_H ^ - κ U.
Since r ≤ l (κ), ϖ^-λ_κ commutes with 𝒳_r ^∘ and therefore H_τ_rϖ^-λ_κ U = H_ψ_rϖ^-λ_κ U. It is also easily seen that
I_H , r = H_ψ_r· A^∘·∏_α∈Φ_H,r^+ U_α, 0
where Φ_H,r^+ : = { e_i - e_j∈Φ_H ^ + | either 1 ≤ j ≤ r
or m + 1 ≤ i ≤ m + r }. Since ϖ^λ_κ commutes with A^∘ and with U_α, 0 for α∈Φ_H,r^+, we see that H_ψ_rϖ^-λ_κ U = I_H,rϖ^-λ_κ U. Since ϖ^-λ_κ U = ρ_H^-κ U, the claim follows.
For κ = (k_1 , k_2 ) ∈ P_k, r = 0, …, l (κ), let W_κ, r⊂ W_H,r denote the subgroup generated by S_H,r∖{ s_k_1 , s_2m - k_2}. Then W_κ, r is a Coxeter subgroup of W_H, r. Let
P_κ , r : = ∑_ w ∈ [ W_H,r / W_κ, r ] q ^ ℓ _ H (w)
denote the Poincaré polynomial of [ W_H,r / W_κ, r ] ⊂ W_H.
For any k, κ∈ P_k and r = 0,…, l ( κ), we have [U ϖ^λ_κτ_ r K ]_* = P_κ, r (q).
We have [U ϖ^λ_κτ_r K ]_* = [ H_τ_ r ϖ^-λ_κ U ] which is by definition the cardinality of H _ τ_ r ϖ^-λ_κ U / U. By Lemma <ref>, H_τ_rϖ^-λ_κ U / U = I_H,r ρ_H ^ - κ U / U. Theorem <ref> therefore implies that [ U ϖ^λ_κτ_r K ] _ * is the Poincaré polynomial of [ W_H,r / ( W_H, r∩ρ^-κ W_Hρ^κ ) ]. Now ρ ^ - κ W_Hρ^κ is the subgroup of W_I,H generated by
S_H, aff∖ρ_H^-κ{ s_1,0, s_2,0}ρ _ H ^ κ = S_ H , aff∖{ s_k_1 , s_2m-k_2}
where the equality follows since ρ_1^-1s_1,0ρ_1 = s_1 and ρ_2 ^-1 s_2,0ρ_2 = s_2m (see above for the description of the action of ρ_1 ,ρ_2 on (<ref>)). Thus we have
W_H,r ∩ρ^-κ W_Hρ^κ = W_κ, r
and the claim follows.
With notation as above, [ U ϖ^λ_κτ _ r K ] _*≡\binom{m-r}{m-k_1}\binom{m-r}{k_2} (mod q-1).
| W_H, r | = ( m- r) ! · ( m- r)! since W_H,r is the product of the groups generated by s_r+1, … , s_m-1 and s_m+r+1 , … , s_2m-1, each of which has cardinality (m-r)!. Similarly, W_κ, r is the product of four groups generated by the four sets of reflections labeled
r + 1 , …, k_1-1, k_1 + 1, …, m-1 , m+r + 1 , … , 2 m - k _2 - 1 , 2m - k _2 + 1 , … , 2m-1
which have sizes ( k_1 - r ) !, ( m - k_1 ) !, ( m - k_2 - r ) ! and k_2 ! respectively.
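A small numerical check of the resulting index formula — an illustrative sketch only, assuming the identifications made in the proof above — can be carried out in a few lines of Python:

# Check that (m-r)!^2 / ((k1-r)! (m-k1)! (m-k2-r)! k2!) = C(m-r, m-k1) * C(m-r, k2)
# for all admissible (kappa, r) with m <= 5.
from math import factorial, comb

for m in range(1, 6):
    for k1 in range(0, m + 1):
        for k2 in range(0, m + 1):
            for r in range(0, min(k1, m - k2) + 1):
                lhs = factorial(m - r) ** 2 // (
                    factorial(k1 - r) * factorial(m - k1)
                    * factorial(m - k2 - r) * factorial(k2))
                assert lhs == comb(m - r, m - k1) * comb(m - r, k2)
print("index formula checked for m <= 5")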
§.§ Zeta elements
We now formulate the zeta element problem relevant to the situation of <ref> and show that one exists using the work done above. Let T : = F ^ ×, C = 𝒪_F^×⊂ T the unique maximal compact subgroup, D = 1 + ϖ𝒪_F a subgroup of C of index q - 1 and ν :
H → T be the map given by (h_1 , h_2 ) ↦ det(h_2 ) / det(h_1). Let 𝒪 be any integral domain containing ℤ[
q ^ - 1 ]. Set
* G̃ = G × T,
* ι̃ = ι×ν : H →G̃,
* U ⊂ H and K̃ : = K × C ⊂G̃ as bottom levels
* M_H,𝒪 = M_H,𝒪, triv the trivial functor,
* x_U = 1 ∈ M_H,𝒪 (U) the source bottom class,
* L̃ = K × D the layer extension
of degree q - 1,
* ℌ̃_c = ℌ_std, c (Frob) ∈𝒞_𝒪 ( K̃\G̃ / K̃ ) where Frob : = ( ϖ^-1 C).
This setup generalizes the one studied in <ref>.
There exists a zeta element for
(x_U , ℌ̃ _c, L̃ )
for all odd integers c.
For each k_2 = 0 , …, m and i an integer such that 0 ≤ i ≤ m - k_2, let g_i, k_2 : = (1 , τ_i, ϖ^-2 k_2 ) ∈G̃ and J_i, k_2 : = { ( k_1 , k_2 ) | i ≤ k_1≤ m , k_1∈}. For each i, k_2 as above, let
d_i,k_2 : = [ H ∩ g_i,k_2K̃ g_i,k_2^-1 : H ∩ g_i,k_2L̃ g_i , k_2^-1 ] .
By Lemma <ref>(iii), d_i,k_2 = [ H _τ_i : H_τ_i∩ν^-1(D) ]. We therefore write d_i for d_i,k_2. Since ν ( H ∩τ_i K τ_i^-1 ) = C for i =0,…, m-1, we have d_i = q - 1. Now if (h_1 , h_2 ) ∈ H_τ_m, then h_1 - h_2∈ϖ·Mat_m× m(𝒪_F ). Thus, ν ( H_τ_m ) ⊂ D (in fact, equal) and H_τ_m = H_τ_m∩ν^-1(D). This implies that d_m = 1. To summarize,
d_0 = … = d_m-1 = q - 1, d_m = 1 .
Next, for each (i, k_2 ) as above and j = (k_1,k_2 ) ∈ J_i,k_2, denote h_ j : = ( ϖ^ k f_0 , ϖ^λ_j ) ∈ H and σ_j = ι_ν ( h_j ) · g_i,k_2 = ( ϖ^k f_0 , ϖ^λ_j , ϖ^- k ) ∈G̃
where k in these expressions denotes k_1 + k_2.
Denote by J the disjoint union of J_i,v for all possible i, v as above. By Proposition <ref>, Proposition <ref> and Lemma <ref>(a),
ℌ̃_c = ∑ _ j ∈ J b_j ( U σ_jK̃ )
where b_j∈[q^-1] for j = ( k_1 , k _2 ) ∈ J_i,k_2 is given by ( - 1 ) ^ k q ^ - k (2m-k +c)/2 and k = k_1 + k _2 as before.
In particular, b_j≡ (-1)^k (mod q-1). It is then clear that
H \ H ·Supp(ℌ̃_c ) / K̃ = { g_i,k_2 | 0 ≤ k_2≤ m , 0 ≤ i ≤ m - k_2}.
Let 𝔥_i,k_2 denote the ( H , g_i,k_2 )-restriction of ℌ̃_c. By Corollary <ref> and Lemma <ref> (ii),
( 𝔥_i,k_2^t) = ∑_ j ∈ J_i,k_2 b_j [ U σ_jK̃ ] _ *
≡∑_ k_1 = i ^m (-1)^k_1+k_2\binom{m-i}{m-k_1}\binom{m-i}{k_2} (mod q-1)
= (-1)^k_2\binom{m-i}{k_2}· (-1)^i (1-1)^m-i = 0
for all i, k_2 as above such that i < m.
Since d_m = 1, the criteria of Corollary <ref> is satisfied.
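The alternating binomial sum that makes the above vanish can be checked numerically; the snippet below is an illustrative verification (not a replacement for the factorization argument), using only the Python standard library.

# Check: for i < m, sum_{k1=i}^{m} (-1)^{k1+k2} C(m-i, m-k1) C(m-i, k2) = 0,
# since it factors as (-1)^{k2} C(m-i, k2) * (-1)^i * (1-1)^{m-i}.
from math import comb

for m in range(1, 8):
    for i in range(0, m):                     # i < m
        for k2 in range(0, m - i + 1):
            s = sum((-1) ** (k1 + k2) * comb(m - i, m - k1) * comb(m - i, k2)
                    for k1 in range(i, m + 1))
            assert s == 0
print("alternating sums vanish for all i < m, m <= 7")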
For m = 2, the coefficients ∑_j ∈ J_i,k_2 b_j [U σ_jK̃ ] _ * are as follows:
* 1-q^-(c+3)/2(q+1)+q^-(c+2) for g_0,0,
* q^-(c+2)-q^-(c+3)/2 for g_1,0,
* q^-(c+2) for g_2,0
* (q+1)(q^-(c+2)(q+1)-q^-3/2(c+1)-q^-(c+3)/2) for g_0,1,
* q^-(c+2)-q^-3/2(c+1)(q+1)+q^-2 c for g_0,2,
* q^-(c+2)-q^-3/2(c+1) for g_1,1.
When c=1, the sets g_0,1K̃, g_0,2K̃, g_1,1K̃ do not contribute to the support of the zeta element, since their corresponding coefficients all vanish. An induction argument shows that for c= 1, the zeta element is only supported on the g_i,0K̃.
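The vanishing at c = 1 of the three coefficients listed above can be confirmed symbolically; the following short SymPy check (illustrative only, with the coefficients transcribed from the list above) does so.

# Check (m = 2): the coefficients for g_{0,1}, g_{0,2}, g_{1,1} vanish at c = 1.
from sympy import symbols, Rational, expand

q = symbols('q', positive=True)
c = 1
coeff_g01 = (q + 1) * (q**(-(c + 2)) * (q + 1)
                       - q**(Rational(-3, 2) * (c + 1))
                       - q**(Rational(-1, 2) * (c + 3)))
coeff_g02 = q**(-(c + 2)) - q**(Rational(-3, 2) * (c + 1)) * (q + 1) + q**(-2 * c)
coeff_g11 = q**(-(c + 2)) - q**(Rational(-3, 2) * (c + 1))
assert all(expand(x) == 0 for x in (coeff_g01, coeff_g02, coeff_g11))
print("coefficients for g_{0,1}, g_{0,2}, g_{1,1} vanish at c = 1")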
The normalization c = 1 is relevant for the setting of <cit.> (corresponding to the L-value at s = 1/2), and the coefficients of the zeta element we obtain match exactly those of the test vector specified in Theorem 7.1 of loc. cit.
More precisely, the coefficient denoted `b_i' in Theorem 7.1 (2) of loc.cit. is the coefficient for g_i,0 computed in the proof above multiplied with q/q -1·μ_H(U) /μ_H(V_i) (after replacing ℓ in loc.cit. with q). Note also that what we denote by V_i here is denoted `V_1,i' in loc.cit. One of the chief advantages of the approach here is that one does not need to compute the measures μ_H(V_i) in Definition <ref> which seem to have far more complicated formulas.
Notice that the g_m,0 (equivalently, τ_m in the decomposition Proposition <ref>) only arises from a single Hecke operator K ϱ^m K. By Corollary <ref>, we see that a zeta element exists only if the degree d_m is 1.
In Theorem <ref>, this was guaranteed by the choice of ν and T. If say, ν is replaced by the product of determinants of H_1, H_2, then no zeta elements exist. So in a sense, one can only hope to make `anticyclotomic' zeta elements in this setting.
§ BASE CHANGE L-FACTOR OF GU_4
In this section,
we study the inert case of the embedding discussed in <ref>. We first collect some generalities on the unitary group GU_4. Let E / F be a separable extension of degree 2, Γ : =Gal(E / F ), γ∈Γ the non-trivial element. Let
J=[ 1_2; 1_2 ]
where 1_2 denotes the 2 × 2 identity matrix. Then J = γ(J)^t is Hermitian. We let 𝐆 = GU_4 be the reductive group over F whose R-points for an F-algebra R are given by
𝐆 (R) ={g ∈GL_4(E ⊗ R) |γ(^tg) J g = sim (g) J where sim (g) ∈ R ^ ×}.
Then 𝐆 is the unique quasi-split unitary similitude group of split rank 3 (see <cit.>). Its derived group is a special unitary group whose Tits index
is ^2 A_3,2^(1) (see <cit.>). The mapping 𝐆→𝔾_m, g ↦sim (g) is referred to as the similitude. The determinant map det : 𝐆→Res_E/F𝔾_m then satisfies γ(det) ·det = sim^4. For R an E-algebra, we let
γ_R: E ⊗ R → E ⊗ R, x ⊗ r ↦γ(x) ⊗ r
the map induced by γ and
i_R: E ⊗ R → R × R
the isomorphism x ⊗ r ↦(x r, γ(x) r), where x ∈ E, r ∈ R. We let π_1, π_2: E ⊗ R → R the projections of i_R to the first and second component respectively. We have an induced action γ_R: GL_4(E ⊗ R) → GL_4(E ⊗ R) and an induced isomorphism i_R: GL_4(E ⊗ R) →GL_4(R) ×GL_4(R) given by (g_i, j) ↦(π_1(g_i, j), π_2(g_i, j).
Under the identification i_R, the group 𝐆(R) ⊂GL_4(E ⊗ R) is identified with the subgroup of elements (g, h) ∈GL_4(R) ×GL_4(R) such that
(^t h,^t g) ·(J, J) ·(g, h)=(r J, r J).
We thus have functorial isomorphisms
ψ_R := (sim, pr_1∘ i_R): 𝐆(R) ∼⟶𝔾_m(R)×GL_4(R), via which we identify 𝐆_E≃𝔾_m×GL_4 (as group schemes
over E) canonically.
The symbols F, 𝒪_F , ϖ , 𝕜 = 𝕜_F , q = q_F have the same meaning as in <ref>. We let E / F denote an unramified quadratic extension and set q_E = | 𝕜_E |= q^2 where 𝕜_E is the residue field of E. We denote by [𝕜_F ], [𝕜_E] a fixed choice of representatives in 𝒪_F, 𝒪_E of the elements of 𝕜_F, 𝕜_E respectively. We let 𝐆 be the group defined above and denote
G=𝐆(F) , G_E=𝐆(E) ≃ E^××GL_4(E) (via ψ), K_E≃𝒪_E ^ ××GL_4 ( 𝒪_E ) (via ψ), K = K_E∩𝐆(F).
For a ring R, we let ℋ_R,
ℋ_R, E denote the Hecke algebras ℋ_R(K \ G / K), ℋ_R(K_E\ G_E / K_E) over R respectively. For simplicity, we will denote
the characteristic function ch(K σ K ) ∈ℋ_R simply by (K σ K ). Similarly for ℋ_R,E.
§.§ Desiderata
Let 𝐀 =𝔾_m^3, dis : 𝐀→𝐆 be the map
(u_0, u_1, u_2) ↦[ u_1 ; u_2 ; u_0/u_1 ; u_0/u_2 ]
which identifies 𝐀 with the maximal split torus of 𝐆. Let 𝐌 be the centralizer of 𝐀. Then ψ identifies 𝐌_E≃𝔾_m,E^5 and we consider 𝔾_m,E^5 as a maximal torus of 𝐆_E≃𝔾_m, E×GL_4, E (via ψ) by (u_0, …, u_4) ↦(u_0, diag(u_1, …, u_4)). We will denote A : = 𝐀(F), M : = 𝐌(F). We have X^*(𝐌)=ℤ e_0⊕⋯⊕ℤ e_4, X_*(𝐌)=
ℤ f_0⊕⋯⊕ℤ f_4, where f_i, e_i are as in <ref>. The Galois action Γ on X_*(𝐌), X^*(𝐌), is as follows:
γ· e_0 = e_0, γ· e_i = e_0 - e_i+2 for i = 1 , … , 4; γ· f_0 = f_0+⋯+f_4, γ· f_i = -f_i+2 for i=1, …, 4,
where e_i=e_i-4, f_i=f_i-4 if i>4. For i=0,1,2, let
* ϕ_i: 𝔾_m→𝐀 be the cocharacter sending u to the i-th component,
* ε_i: 𝐀→𝔾_m be the character dis(u_0, u_1, u_2) ↦ u_i.
Then X^*(𝐀)=ℤε_0⊕ℤε_1⊕ℤε_2, X_*(𝐀)=ℤϕ_0⊕ℤϕ_1⊕ℤϕ_2. Let res : X^*(𝐌) → X^*(𝐀), cores : X_*(𝐀) → X_*(𝐌) be the maps obtained by restriction and inclusion respectively. Then
res(e_i)= ε_i if i=0, 1,2 and res(e_i) = ε_0-ε_i-2 if i=3,4;
cores(ϕ_i)= f_0+f_3+f_4 if i=0 and cores(ϕ_i) = f_i-f_i+2 if i=1,2.
We let Φ_E denote the set of absolute roots of 𝐆_E as in <ref> for n = 4 and Φ_F denote the set of relative roots obtained as restrictions of Φ_E to 𝐀. Then Φ_F={±(ε_1-ε_2), ±(ε_1+ε_2-ε_0), ±(2 ε_1-ε_0), ±(2 ε_2-ε_0)}, which constitutes a
root system of type C_2.
We choose β_1=e_1-e_2, β_2=e_2-e_4 and β_3=e_4-e_3
as simple roots and let Δ_E={β_1, β_2, β_3}. In this ordering, the half sum of positive roots is
δ = 1/2( 3 e_1 + e_2 - e_4 - 3 e_3 )
and
β_0=e_1-e_3 is the highest root. The set Δ_E and β_0 are invariant under Γ, and the labeling is chosen so that (absolute) local Dynkin diagram (with the bar showing the Galois orbits) is the diagram on the left
(on the left, the absolute local Dynkin diagram: of type Ã_3 with vertices labeled 0, 1, 2, 3 and the Galois orbits indicated; on the right, the relative local Dynkin diagram: of type C_2 with vertices labeled 0, 1, 2)
The set of corresponding relative simple roots is therefore Δ_F={α_1, α_2} where α_1=ε_1-ε_2, α_2 = 2 ε_2 - ε_0.
With this ordering, the highest root is α_0=2 ε_1-ε_0. The associated simple coroots are α_0 ^ ∨ = ϕ_1, α_1^∨ = ϕ_1 - ϕ_2, α_2^∨ = ϕ_2 and we denote by Q^∨ their span in Λ. Executing the recipe provided in 1.11 of <cit.> on the absolute diagram above, we find that the local index or relative local Dynkin diagram (see 4 of op.cit.) is the diagram on the right above. Here, the indices below the diagram correspond to the affine roots -α_0+1, α_1, α_2 and the indices above the diagram are half the number of roots of a semi-simple group of relative rank 1 whose absolute Dynkin-diagram is the corresponding Galois orbit in the diagram on the left. The endpoints of the diagram on the right, and in particular the one labelled 0, are hyperspecial and hence so is the subgroup K by construction. The diagrams above can be found in the fourth row of the table on p. 62 of op. cit.
For λ = a_0ϕ_0 + a_1ϕ_1 + a_2ϕ_2∈ X_*(𝐀), ⟨λ , δ⟩ can be computed by pairing λ with res(δ) = -2 ε _0 + 3 ε_1 + ε_2 and equals - 2 a_0 + 3 a_1 + a_2. Note also that
2 ·res(δ) = 2 (ε_1 - ε_2 ) + 2 ( ε_1 + ε_2 - ε_0 ) + ( 2 ε _ 1 - ε_0 ) + ( 2 ε_2 - ε_0 )
is a weighted sum of the positive roots in Δ_F, with the weights given by the degree of the splitting field of the corresponding root.
From now on, we denote by Λ the cocharacter lattice X^*(𝐀) and denote by t the translation action of Λ on Λ⊗. An element λ = a_0ϕ_0 + a_1ϕ_1 + a_2ϕ_2∈Λ will be denoted by (a_0 ,a_1 , a_2 ) and ϖ ^ λ denotes the element λ(ϖ) ∈ A. Let s_i, i = 0,1,2 denote the simple reflections associated α_i.
The action of s_i on Λ is given explicitly as follows:
* s_1 acts as a transposition ϕ_1↔ϕ_2,
* s_2 acts by sending ϕ_0↦ϕ_0+ϕ_2, ϕ_1↦ϕ_1, ϕ_2↦ - ϕ_2
* s_0=s_1 s_2 s_1 acts by sending ϕ_0↦ϕ_0 + ϕ_1, ϕ_1↦ - ϕ_1, ϕ_2↦ϕ_2.
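The explicit actions listed above are easy to confirm on coordinates; the following short Python check (illustrative only, with (a_0, a_1, a_2) denoting the coordinates of λ ∈ Λ) verifies that the composite s_1 s_2 s_1 acts as the stated s_0.

# s_1 swaps the last two coordinates, s_2 sends (a0, a1, a2) to (a0, a1, a0 - a2),
# and s_0 should send (a0, a1, a2) to (a0, a0 - a1, a2).
from itertools import product

s1 = lambda v: (v[0], v[2], v[1])
s2 = lambda v: (v[0], v[1], v[0] - v[2])
s0 = lambda v: (v[0], v[0] - v[1], v[2])

for v in product(range(-3, 4), repeat=3):
    assert s1(s2(s1(v))) == s0(v)
print("s_0 = s_1 s_2 s_1 on Lambda verified")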
As before, we let e^λ (resp., e^W λ ) denote the element in the group algebra [ Λ ] corresponding to λ (resp., the formal sum over W λ ). Let S_aff = { s_1 , s_2 , t( α_0 ^∨ ) s_0} and W, W_aff and W_I denote the Weyl, affine Weyl, Iwahori Weyl groups respectively. We consider W_aff as a group of affine transformations of Λ⊗. We have
* W ≅ ( / 2 ) ^2⋊ S_2,
* W_aff= t ( Q^∨ ) ⋊ W the affine Weyl group
* W_I = A / A^∘⋊ W ≃Λ⋊ W,
The pair (W_aff, S_aff) is a Coxeter system of type C̃_2 and we consider W_aff⊂ W_I via v. Then W_I = W_aff⋊Ω. Given λ∈Λ, the minimal possible length of elements in t(λ) W is obtained by a unique element. This length is given by
ℓ_min(λ) = ∑_α∈Φ_λ^1 | ⟨λ , α⟩ | + ∑_α∈Φ_λ^2 ( ⟨λ , α⟩ - 1 )
where Φ_λ^1 = {α∈Φ_F^+ | ⟨λ , α⟩≤ 0 }, Φ_λ ^2 = {α∈Φ_F^+ | ⟨λ , α⟩ > 0 }. When λ is dominant, the first sum is zero, and the length is then also minimal among elements of W t(λ) W. Consider the following elements in the normalizer N_G(A):
w_0= [ 1/ϖ ; 1 ; ϖ ; 1 ], w_1= [ 1 ; 1 ; 1; 1 ],
w_2= [ 1 ; 1; 1 ; 1 ], ρ = [ 1; 1 ; ϖ ; ϖ ] .
The classes of w_0, w_1, w_2 represent t(α_0^∨) s_0, s_1, s_2 in W_I and ρ represents t(-ϕ_0) s_2 s_1s_2, which is a generator of Ω≅ℤ. The conjugation action of ρ switches w_0, w_2 and keeps w_1 fixed, inducing an automorphism of the extended Coxeter diagram of type C̃_2 (with vertices labeled 0, 1, 2).
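The length formula recorded above can also be checked numerically for the two cocharacters that appear in the next subsection; the snippet below is an illustrative check, assuming only the list of positive roots of Φ_F given earlier.

# l_min for lambda = (a0, a1, a2); the pairings <lambda, alpha> with the positive
# roots e1-e2, e1+e2-e0, 2e1-e0, 2e2-e0 are a1-a2, a1+a2-a0, 2a1-a0, 2a2-a0.
def l_min(a0, a1, a2):
    pairings = [a1 - a2, a1 + a2 - a0, 2 * a1 - a0, 2 * a2 - a0]
    return (sum(abs(p) for p in pairings if p <= 0)
            + sum(p - 1 for p in pairings if p > 0))

assert l_min(2, 2, 1) == 1 and l_min(4, 3, 3) == 3
print("l_min(2,2,1) = 1 and l_min(4,3,3) = 3, as used below")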
Let ξ∈_E ^ × be an element of trace 0 i.e., ξ + γ (ξ ) = 0.
Let x_1 : Res_E/F𝔾_a→𝐆 and x_i : 𝔾_a→𝐆 for i = 0 , 2 be the root group maps
x_0: u ↦[ 1 ; 1 ; ϖξ u 1 ; 1 ], x_1: u ↦[ 1 u ; 1 ; 1 ; - u̅ 1 ], x_2: u ↦[ 1 ; 1 ξ u; 1 ; 1 ],
where u̅ : = γ(u). We let 𝕜_w_0 = 𝕜_w_2 := 𝕜_F, 𝕜_w_1 : = 𝕜_E and for i = 0,1,2, we denote by g_w_i : [ 𝕜_w_i ] → G the map u ↦ x_i ( u ) w_i. If I denotes the Iwahori subgroup[note that K is hyperspecial, i.e., its Weyl group equals W] of K whose reduction modulo ϖ lies in the Borel of 𝐆(𝕜) determined by Δ_F, then I w_i I = ⊔_κ∈ [𝕜_w_i] g_w_i ( κ ) I.
For w ∈ W_I such that w is the unique minimal length element in w W, choose a reduced word decomposition w = s_w,1 s_w,2⋯ s_w,ℓ(w)ρ _w where s_w , i ∈ S_aff and ρ_w∈Ω. Define
𝒳_w : ∏_ i = 1 ^ ℓ(w) [ 𝕜_s_w,i ] → G/K
(κ_1, …, κ_ℓ(w) ) ↦ g_s_w,1 ( κ_1 ) ⋯ g_s_w,ℓ(w) ( κ_ℓ(w) ) ρ_w K
where we have suppressed the dependence on the decomposition of w in the notation. By Theorem
<ref>, the image of 𝒳_w is independent of the choice of decomposition.
§.§ Base change Hecke polynomial
Let y_i : = e^ϕ_i∈ℤ [ Λ ], so that ℤ[ Λ ] = ℤ[ y_0 ^ ± , y_1 ^ ± , y_2 ^ ± ], and let ℛ_q : = ℤ[ q ^±1/2], ℛ_q^2 : = ℤ[ q^±1 ]. The abelian group homomorphism 1+γ: X_*(𝐌) → X_*(𝐌) given by f ↦ f+γ· f has image in Λ=X_*(𝐌)^Γ and hence induces a map e^1+γ on ℛ_q^2-algebras
e^1+γ : ℛ_q^2[X_*(𝐌)]^W_E→ℛ_q[Λ]^W_F. This map fits into a commutative square with the Satake isomorphisms 𝒮_E : ℋ_ℛ_q^2( G_E ) →ℛ_q^2[X_*(𝐌)]^W_E and 𝒮_F : ℋ_ℛ_q( G ) →ℛ_q[Λ]^W_F,
corresponding to which we have what is called the base change map
BC : ℋ_ℛ_q^2 ( G_E ) →ℋ_ℛ_q(G) .
The Satake polynomial that we need to consider here is the base change of the Satake polynomial of 𝐆_E associated with the standard representation considered in <ref>.
This polynomial is
𝔖_bc(X)=(1-y y_1 X)(1-y y_1^-1 X)(1-y y_2 X)(1-y y_2^-1 X) ∈ℤ[Λ]^W_F[X] .
where y = y_0 ^ 2 y _ 1 y _ 2.
We have a componentwise embedding ^ L 𝐆 _E↪ ^ L 𝐆_F.
Given an unramified L-parameter φ : 𝒲_F→ ^ L 𝐆_F, let t̂⋊Frob_F ^-1 : = φ( Frob_F ^ -1 ), where Frob_F∈𝒲_F denotes a lift of the arithmetic Frobenius, we have
φ(Frob_E ^-1 ) = ( t̂⋊Frob_F ^-1 ) ^2 = t̂γ(t̂) ⋊Frob_E ^-1∈ ^ L 𝐆_E .
If we think of t̂ as the Satake parameters of an unramified representation π_F of 𝐆(F), then t̂γ( t̂ ) are the Satake parameters of an unramified representation π_E of 𝐆(E) which is called the base change of π_F. The base change map BC above can then also be characterized as in <cit.>.
We define ℌ_bc, c(X) ∈ℋ_ℛ[ X] to be the image of ℌ_std, c(X) under the map BC for c any integer. Equivalently, ℌ_bc,c(X) is the unique polynomial such that 𝒮_F ( ℌ_bc,c(X) ) = 𝔖_bc (q^-c X ).
We have
* 𝒮_F(K ϖ^(2,2,1) K) = q ^3 e ^W(2,2,1) + ( q -1 ) ( q^2 + 1 ) e ^(2,1,1) ,
* 𝒮_F(K ϖ^(4,3,3)K) = q^4 e^W(4,3,3) + q ^ 3 ( q - 1 ) e ^W(4,3,2) + q ( q - 1 ) ( 1 + q + 2q^2) e ^(4,2,2) .
Note that since res(δ) ∈ X^*(𝐀), the Satake transforms of (K ϖ^λ K ) for λ∈Λ all have coefficients in ℤ[q^-1][Λ]^W. The leading coefficients are obtained by Corollary
<ref>, which also shows that the support of these transforms is on Weyl orbits of cocharacters that are succeeded by λ under ≽.
(a) Since (2,2,1) - (2,1,1) = α_1^∨ + α_2 ^ ∨, (2,2,1) ≽ (2,1,1) and it is easily seen that (2,1,1) is the only dominant cocharacter which (2,2,1) succeeds.
Thus
𝒮( K ϖ^(2,2,1) K ) = q^3 e^W(2,2,1) + b e ^(2,1,1)
for some b ∈ℤ[q^-1]. To obtain the value of b, we use the decomposition recipe of Theorem <ref>. Note that ℓ_min(2,2,1) = 1 and that K w K = K ϖ ^ (2,2,1) K where w= w_0ρ^2. So we see from the Weyl orbit diagram
(2,0,1) →_s_1 (2,1,0) →_s_2 (2,1,2) →_s_1 (2,2,1)
that | K w_0ρ^2 K / K| = q + q ^3 + q ^4 + q^6. Of these, the number of cosets of shape a permutation of (2,2,1) is ∑ _ μ∈ W (2,2,1) q ^ ⟨λ + μ , δ⟩ = 1 + q^2 + q^4 + q ^ 6 by W-invariance of 𝒮 (see Corollary <ref>). Thus the number of cosets of shape (2,1,1) is
q + q^3 + q^4 + q^6 - ( 1 + q^2 + q^4 + q^6 ) = ( q -1 ) ( q^2 + 1 ) .
Since ⟨ (2,1,1) , δ⟩ = 0, the claim follows.
(b) Arguing as in part (a), we have
𝒮(K ϖ^(4,3,3)K) = q^4 e^W(4,3,3) + b_1 e ^W (4,3,2) + b_2 e ^(4,2,2)
for some b_1 , b_2∈[q^-1]. Here we need a more explicit description of the Schubert cells in order to find b_1, b_2. Observe that ℓ_min(4,3,3) = 3 and that K ϖ^(4,3,3) K = Kw K where w = w_0 w_1 w_0ρ^4. The Weyl orbit diagram of (4,3,3) is
(4,1,1) →_s_2 (4,1,3) →_s_1 (4,3,1) →_s_2 (4,3,3) .
By Theorem <ref>, K ϖ^(4,3,3) K / K = ⊔_ i = 0 ^3 im ( 𝒳_σ_i) where σ_0 = w, σ _1 = w_2σ_0, σ _2 = w_1σ_1, σ _3 = w_2σ_2.
Explicitly,
im ( _σ_0 ) = 0.85*([ ϖ ; ϖ ; x_1ϖ^2 a ϖ^2 ϖ^3 ; -a̅ϖ^2 x ϖ^2 ϖ^3 ]) K
a ∈ [_E] ,
x, x_1∈ξ [_F] ,
im ( _σ_1 ) = 0.85*([ ϖ ; -ϖ^2 a ϖ^3 x ϖ^2 + y ϖ; x_1 ϖ^2 ϖ^3 a ϖ^2; ϖ ]) K a ∈ [_E] ,
x, x_1 , y ∈ξ [_F]
im ( _σ_2 ) = 0.85*([ ϖ^3 a_1 ϖ -ϖ^2 a̅ x ϖ^2 + y ϖ ; ϖ ; ϖ ; ϖ^2 x_1 a ϖ^2 -ϖ a̅_1 ϖ^3 ]) K
a , a_1∈ [_E] ,
x, x_1, y ∈ξ [_F],
im ( _σ_3 ) = 0.85*([ ϖ^3 x ϖ^2 + y ϖ a_1 ϖ - a ϖ^2; ϖ^3 a ϖ^2 - a_1 ϖ x_1 ϖ^2 + y_1 ϖ; ϖ ; ϖ ]) K a , a_1∈ [_E] ,
x, x_1, y , y_1∈ξ [_F].
From the cells above, it is not hard to see that the shape of any coset in
* im(𝒳_σ_0) is
* (4,1,1) if x_1 = x = a = 0,
* (4,1,2) if x_1 = a = 0, x ≠ 0,
* (4,2,1) if x_1≠ 0 and a a̅ + x x_1ξ^2∈ϖ_F,
* (4,2,2) if either x_1 = 0, a ≠ 0 or x_1≠ 0 , a a̅ + x x_1ξ^2∉ϖ_F
* im(𝒳_σ_1) is (4,1,3) if x_1 = a = 0,
(4,2,2) if x_1 = 0, a ≠ 0 and (4,2,3)
if x_1≠ 0,
* im( 𝒳_σ_2) is (4,3,1) if x_1 = 0 and (4,3,2) if x_1≠ 0,
* im(𝒳_σ_3 ) is (4,3,3).
So in K ϖ^(4,3,3) K / K, there are exactly
q^6(q-1) cosets of shape (4,3,2). Since ℐ(ϖ^(4,3,2)K) = q^-3 e^(4,3,2),
b_1 = q^-3· q^6(q-1) = q^3(q-1)
by W-invariance of 𝒮. Thus the number of cosets in K ϖ^(4,3,3) K / K whose shape is in W (4,3,2) is ∑_μ∈ W (4,3,2) q^⟨μ , δ⟩ q^3(q-1) = (q-1)( 1 + q^2 + q^4 + q^6 ).
Since the number of cosets of shape in W( 4,3,3) is ∑_μ∈ W (4,3,3) q ^⟨μ , δ⟩ = 1 + q^2 + q^6 + q^8 and | K ϖ^(4,3,3) K / K | = q^4 + q^5 + q ^7 + q^8, we see that
b_2 = q^4 + q^5 + q ^7 + q^8 - ( 1 + q^2 + q^6 + q^8 ) - (q-1)( 1 + q^2 + q^4 + q^6 )
= q ( q - 1 ) ( 1 + q + 2q ^2 )
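The two counting identities in the proof above are elementary polynomial identities in q; the following SymPy snippet (an illustrative check only) verifies them.

# Part (a): q + q^3 + q^4 + q^6 - (1 + q^2 + q^4 + q^6) = (q - 1)(q^2 + 1).
# Part (b): q^4 + q^5 + q^7 + q^8 - (1 + q^2 + q^6 + q^8)
#           - (q - 1)(1 + q^2 + q^4 + q^6) = q(q - 1)(1 + q + 2q^2).
from sympy import symbols, expand

q = symbols('q')
lhs_a = q + q**3 + q**4 + q**6 - (1 + q**2 + q**4 + q**6)
assert expand(lhs_a - (q - 1) * (q**2 + 1)) == 0

lhs_b = (q**4 + q**5 + q**7 + q**8 - (1 + q**2 + q**6 + q**8)
         - (q - 1) * (1 + q**2 + q**4 + q**6))
assert expand(lhs_b - q * (q - 1) * (1 + q + 2 * q**2)) == 0
print("coefficients b and b_2 verified")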
We have
ℌ_bc,c(X) =( K )
-q^-(c+3)((K w_0ρ^2 K ) + (q^2+1)(1-q)(K ρ^2 K)) X
+q^-( 2c+4)( (K w_0 w_1 w_0ρ^4 K ) + ( 1 -q ) ( K w_0ρ ^4 K ) + (q^2+1)(1 - q + q^2) ( K ρ^4 K ) ) X^2
-q^-(3c+3)((K w_0ρ^6 K ) + ( q ^2 + 1 ) ( 1 - q ) ( K ρ ^6 K ) ) X^3
+q^-4 c ( K ρ^8 K ) X^4∈ℋ_[q^-1][X]
where the words appearing in each Hecke operator are of minimal possible length.
Since
𝔖 _ bc ( X ) = 1 - e^W (2,2,1 ) X + ( e ^W(4,3,3) + 2 e ^ W ( 4,2,2) ) X ^2 - e ^ W ( 6,4,3) X^3 + e ^ (8,4,4) X^4 ,
the result follows from Proposition <ref>.
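The displayed expansion of 𝔖_bc(X) into Weyl-orbit sums can itself be checked symbolically; the snippet below does so, writing cocharacters additively as exponents of y_0, y_1, y_2 and listing the Weyl orbits explicitly (an illustrative check, not part of the proof).

# Expand S_bc(X) = (1 - y y1 X)(1 - y y1^{-1} X)(1 - y y2 X)(1 - y y2^{-1} X),
# y = y0^2 y1 y2, and compare with the claimed orbit sums.
from sympy import symbols, expand

X, y0, y1, y2 = symbols('X y0 y1 y2')
y = y0**2 * y1 * y2

S_bc = expand((1 - y*y1*X) * (1 - y/y1*X) * (1 - y*y2*X) * (1 - y/y2*X))

def orbit_sum(lams):
    return sum(y0**a * y1**b * y2**c for (a, b, c) in lams)

W_221 = [(2, 2, 1), (2, 1, 2), (2, 1, 0), (2, 0, 1)]
W_433 = [(4, 3, 3), (4, 3, 1), (4, 1, 3), (4, 1, 1)]
W_643 = [(6, 4, 3), (6, 3, 4), (6, 3, 2), (6, 2, 3)]
claimed = (1 - orbit_sum(W_221)*X
           + (orbit_sum(W_433) + 2*orbit_sum([(4, 2, 2)]))*X**2
           - orbit_sum(W_643)*X**3 + orbit_sum([(8, 4, 4)])*X**4)
assert expand(S_bc - claimed) == 0
print("expansion of S_bc(X) matches the stated Weyl-orbit sums")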
§.§ Mixed coset structures
Let 𝐇 be the subgroup of 𝐆 generated by the maximal torus 𝐌, and the root groups corresponding to ±α_0, ±α_2. Then 𝐇 = GU_2×_μGU_2. Here GU_2 is the
reductive group over F whose R points for a F-algebra R are given by
GU_2(R) = { g ∈_2(E ⊗ R) | γ ( ^ t g ) J_2 g = μ(g) J_2 , μ(g) ∈ R ^ ×}
where J_2 = [ 1; 1 ] and the fiber product in 𝐇 is over the similitude character of the two copies of GU_2. Explicitly, we get the embedding
ι : → ( ( [ a b; c d ] ) , ( [ a_ 1 b_1; c _ 1 d_1 ] ) )
↦ ( [ a b; a_1 b_1; c d; c_1 d_1 ] )
We let H = 𝐇 (F ), U = H ∩ K = (_F). The Weyl group of W_H of H can be identified with the subgroup of W generated by s_0, s_2 and is isomorphic to S_2× S_2.
This embedding is isomorphic to the one obtained by localizing the global one in <ref> by a local change of variables that sends J in (<ref>) to diag(1,-1,-1,-1), which can be explicitly written by the formula given in <cit.>.
To describe the twisted restrictions arising from the Hecke polynomial ℌ_bc,c(X), we define the elements τ_0 = 1_G and
τ_1 =
[ ϖ - 1; ϖ 1; 1 ; 1 ], τ_2 =
[ ϖ^2 - 1; ϖ^2 1; 1 ; 1 ], τ_3 = [ ϖ^2 ϖ 1 - ϖ; ϖ 1 ; 1 ; - 1 ϖ ] .
If 2 ∈_F^×, then Hτ_i K are pairwise disjoint for i = 0 , 1, 2, 3.
If Hτ_i K = H τ_j K, there exists an h ∈ H such that τ_i^-1 h τ_j∈ K. Writing h as in (<ref>),
the matrices h τ_1, h τ_2, h τ_3, τ_1^-1 h τ_2 respectively have the form
0.9 [ aϖ * - a; * * *; cϖ * -c; * * * ], 0.9[ a ϖ ^ 2 * - a; * * *; c ϖ ^2 * -c; * * * ], 0.9[ a ϖ^2 * * - a ϖ; * * *; c ϖ^2 * * - c ϖ; * * * ], 0.9[ a ϖ * * d_1 - a ϖ; - c ϖ * * *; c ϖ ^ 2 * - c; * * d_1 ]
where * denotes an expression in the entries of h and an empty space means zero. It is then easily seen that first column in each of these matrices becomes an integral multiple of ϖ if we require it to lie in K, which is a contradiction. Moreover
τ ^-1_1 h τ _3 =
0.9([ a ϖ a+c_1 a + b + c_1 - d_1ϖ d_1 -a; -c ϖ a_1 -c a_1 - b_1 - c - d ϖ b_1 +c; c ϖ^2 c ϖ c+d -c ϖ; c_1 ϖ c_1 -d_1 d_1 ϖ ]) and τ_2^-1 h τ _3 =
0.9([ a a+c_1ϖ * d_1 - aϖ; -c * * *; c ϖ^2 c ϖ * -c ϖ; c_1 ϖ * d_1 ϖ ]) .
If τ_1 h τ_3∈ K, then a_1 - c, a_1 - b_1 - c - d, b_1 + c ∈_F and this implies that c - d ∈_F. Since c + d ∈_F as well and 2 ∈_F^×, we have c, d ∈_F. Similarly we can deduce that c_1, d_1∈_F. This forces all entries of h to be integral. But then the first column is an integral multiple of ϖ, a contradiction.
Finally, note that if τ_2^-1 h τ_3∈ K, then a , c , c_1, d_1∈_F, and expanding the determinant along the fourth row forces det( τ_2^-1 h τ_3) ∈ϖ_F, a contradiction.
For w ∈ W_I, let ℛ (w) denote U \ K w K / K. When writing elements of ℛ(w), we will only write the corresponding representative elements in G and it will be understood that these form a complete system of representatives. For g ∈ G, we denote H ∩ g K g^-1 simply by H_g. Observe that
[ 1; 1; - 1; - 1 ]∈ H _τ_1
is a lift of s_0 s_2∈ W_H. Therefore
U ϖ^λτ_1 K = U ϖ^s_0s_2(λ)τ_1K.
If 2 ∈_F^×, then
* ℛ (w_0ρ^2) = {ϖ^(2,2,1), ϖ^(2,1,2), ϖ^(1,1,0)τ_1, τ_3} ,
* ℛ ( w_0 w_1 w_0ρ^4 ) = {ϖ^(4,3,3), ϖ ^ (3,2,2) τ_1, ϖ^(2,1,1)τ_2 }.
Note that Lemma <ref> implies that U ϖ^λτ_i K ≠ U ϖ^μτ_j K for any λ , μ∈Λ if i ≠ j. Lemma <ref> implies that Uϖ^Λ K is in one-to-one correspondence with U ϖ^Λ U, and the Cartan decomposition for H distinguishes ϖ^(2,2,1) and ϖ^(2,1,2) in ℛ(w_0ρ^2). Thus all the listed elements represent distinct classes. It remains to show that these also exhaust all of the classes.
∙ w = w_0ρ^2. From the Weyl orbit diagram drawn in the proof of Proposition <ref> (and Theorem <ref>), we see that Kw K / K = im(𝒳_w) ⊔im (𝒳_w_1 w ) ⊔im ( 𝒳_w_2 w_1 w ) ⊔im ( 𝒳_w_1 w_2 w_1 w ).
Thus to describe ℛ(w), it suffices to study the orbits of U on Schubert cells corresponding to the words σ _0 : = w_0ρ^2, σ _1 : = w_1σ _0 and σ _2 : = w_1 w_2σ _1. These cells are
im(𝒳_σ_0) = 0.9* ([ 1; ϖ ; x ϖ ϖ^2 ; ϖ ]) K x ∈ξ [_F], im( 𝒳_σ_1) = 0.9* ([ ϖ a; 1 ; ϖ ; x ϖ - a̅ϖ ϖ ^2 ]) K a ∈ [_E] ,
x ∈ξ [_F],
im ( 𝒳_σ_2 ) =
0.9* ([ ϖ^2 a_1 ϖ a a_1 + y+ x ϖ -ϖ a̅; ϖ a ; 1 ; -a̅ _1 ϖ ]) K a , a_1∈ [_E] ,
x, y ∈ξ [_F] .
For the σ _0-cell, one eliminates the entry x ϖ by a row operation and conjugates by w_α_0 : = ϖ^(0,1,0)w_0 to arrive at the representative ϖ^(2,2,1). For the σ_1-cell, one eliminates x ϖ and conjugate by w_2 to arrive at
0.9[ ϖ a; ϖ^2 - a̅ϖ ; ϖ ; 1 ]
If a = 0, we get the representative ϖ^(2,1,2). If not, then conjugating by diag(-a^-1,1,-a̅,1) ∈ M^∘ leads us to ϖ^(1,0,1)τ_1 and we have U ϖ^(1,0,1)τ_1 K = U ϖ^(1,1,0)τ_1 K.
As for the σ _2-cell, begin by eliminating y + ϖ x in the third column using a row operation.
If a_1 = 0 , a = 0, then we obtain the representative ϖ^(2,2,1). If a_1 = 0, a ≠ 0, we can conjugate by diag( a̅^-1 , 1, a , 1 ) ∈ M ^ ∘ to obtain the representative ϖ^(1,1,0)τ_1. Finally, if a_1≠ 0, we can conjugate by diag(a_1 ^-1 , 1, a̅_1 , 1 ) to arrive at the matrix
0.9[ ϖ^2 ϖ u - ϖu̅; ϖ u ; 1 ; - 1 ϖ ]
where u = a / a̅_1∈_E. We can assume u ∈_F by applying row and column operations. If u = 0 at this juncture, we can conjugate by w_2 and diag(1,1,-1,-1) to obtain the representative ϖ^(1,1,0) τ_1, and if u ≠ 0, then conjugating by diag(1,1,u,u) gives us the representative τ_3. So altogether, we have
K w_0ρ^2 K = U ϖ^(2,2,1) K ⊔ U ϖ^(2,1,2) K ⊔ U ϖ^(1,1,0)τ_1 K ⊔ U τ_3 K.
∙ w = w_0 w_1 w_0ρ^4. The Schubert cells for this word were all written in Proposition <ref>(b). Here we have to analyze the U-orbits on the cells corresponding to the words σ_0 and σ_2 in the notation used there. We record the reduction steps for the σ_2-cell, leaving the other case for the reader.
Begin by eliminating the entries x_1 ϖ^2 and x
ϖ^2 + y ϖ using row operations. Conjugating by w_2 makes the diagonal ϖ^(4,3,3) and puts the entry a_1ϖ - ϖ^2a̅ and its conjugate on the top right anti-diagonal. A case analysis of whether a, a_1 are zero or not gives us ϖ^(4,3,3), ϖ^(3,2,2)τ_1 and ϖ^(2,1,1)τ_2 as possibilities.
§.§ Zeta elements
Let U_1 be the F-torus whose R-points for an F-algebra R are given by U_1 (R ) = { z ∈ (E ⊗_F R) ^ × | z γ(z) = 1 }. Then U_1(F) ⊂𝒪_E^× is compact. There is a homomorphism of F-tori 𝒩 : Res_E/ F𝔾_m→U_1 given by z ↦ z / γ(z), with kernel 𝔾_m. An application of Hilbert's Theorem 90 gives us that 𝒩 is surjective, inducing isomorphisms 𝒪_E ^ × / 𝒪_F ^ × = E ^ × / F ^ ×≅U _1(F).
Denote T = C : = U _1(F), D = 𝒩 ( 𝒪_F ^ × + ϖ𝒪_E ), and define
ν : H → T, (h_1 , h_ 2 ) ↦ h_2 / h_1 .
Fix an integral domain 𝒪 containing ℤ[ q ^-1 ]. For the zeta element problem, we take
* G̃ : = G × T the target group,
* ι̃ : = ι×ν : H →G̃,
* M_H, 𝒪 = M_H, 𝒪, triv the trivial functor,
* U and K̃ : = K × C as bottom levels,
* x_ U = 1 ∈ M_H, 𝒪 ( U ) as the source bottom class,
* L̃ = K × D as the layer extension of degree d = q + 1,
* ℌ̃ _c = ℌ_bc,c(Frob) ∈𝒞_ ( K̃\G̃ / K̃ ) the Hecke polynomial where Frob= (C).
If 2 ∈_F^×, there exists a zeta element for (x_U , ℌ̃_c, L̃ ) for all c ∈ℤ∖ 2ℤ.
For i = 0 , 1, 2 , 3, let g_i = ( τ_i ,1_T ) ∈G̃ span a zeta element. Using centrality of ρ^2 and that c is odd, we see that
ℌ̃_c≡ (1 - ρ^2)^4 (K) - (1 - ρ^2) ^2 ( K̃w_0ρ^2 K̃) + (K̃ w_0 w_1 w_0ρ^4K̃) mod q+1
where we view w_i , ρ etc., as elements of G̃ with 1 in the T-component. It follows from Proposition <ref> that
H \ H ·(ℌ̃_c) / K̃= { H g_iK̃ | i = 0,1,2,3 } .
For i = 0 ,1 , 2 ,3, let 𝔥_i∈𝒞_[q^-1](U\ H / H_g_i) denote the (H, g_i)-restriction for ℌ̃_c (where H_g_i = H ∩ g_iK̃ g_i^-1) and d_i = [H_g_i : H ∩ g_i L g_i^-1 ]. Then
𝔥_0 ≡ (1 - ρ^2)^4 ( U ) - ( 1 - ρ^2)^2 ( U ϖ^(2,2,1) U ) + ( U ϖ^(2,1,2) U ) ) + ( U ϖ^(4,3,3) U ) ,
𝔥_1 ≡ - ( 1- ρ^2) ^2 ( Ũϖ^(1,1,0) H_g_1) + ( U ϖ^(3,2,2) H_g_1 ) ,
𝔥_2 ≡ (U ϖ^(2,1,1) H_g_2) ,
𝔥_3 ≡ - ( 1 - ρ^2) ^ 2 ( U H_g_3) .
modulo q + 1. Since ρ^2 is central, we see that
(𝔥_0^t) ≡ [ U ϖ^(4,3,3) U]_* = (q+1) ^2≡ 0 q+1
(𝔥_3^t) ≡ 0 q+1 .
Now since d_i | (q+1) for all i, we see that d_0 | ( 𝔥_0^t ) and d_3 | ( 𝔥_3^t ). Next observe that H_g_i = H ∩τ_i K τ_i^-1 for all i. If we write h ∈ H as in (<ref>), we see that for i = 1 , 2,
τ_i h τ_i^-1 =
0.9([ a c_1 b + c_1ϖ^i d_1 - a ϖ^i; -c a_1 a_1 - d ϖ^i b_1 + c ϖ^i; c ϖ d -c; c_1 ϖ c_1 d_1 ])
If now h ∈ H_τ_i, then the matrix above lies in K and thus all its entries must be in _F. It is then easily seen that H_τ_1, H_τ_2⊂ U and that ν(h) ∈ 1 + ϖ_E⊂ D. So d_1 = d_2 = 1 and d_1|(𝔥_1^t), d_2 | (𝔥_2^t) holds trivially. We have therefore established that
d_i | (𝔥_i^t) for i = 0 , 1 , 2 , 3
and the claim follows by Corollary <ref>.
The value of c in our normalization that is relevant to the setting of <cit.> is 1 since (q_E)^1/2 = q. Note that for even c, no zeta element exists in this setup.
§ SPINOR -FACTOR OF
In the final section, we study the zeta element problem for the embedding discussed in <ref>.
The symbols F, 𝒪_F, ϖ, , q and [] have the same meaning as in Notation <ref>. Let 𝐆 be the reductive over F whose R points for a F-algebra R are { g ∈_4 ( R ) | ^t g J_4 g = sim(g) J_4 for sim(g) ∈ R ^ ×} where J_4 = ( [ 1_ 2; - 1 _2 ] )
is the standard symplectic matrix. The map g ↦(g) is referred to as the similitude character. We let
G = 𝐆(F)
, K = G ∩_4 ( _F ) .
For a ring R, we let ℋ_R = ℋ_R(K \ G / K) denote the Hecke algebra of G of level K with coefficients in R with respect to a Haar measure μ_G such that μ_G(K)=1. For convenience, we will sometimes denote the characteristic function ch(K σ K ) ∈ℋ_R simply as (K σ K ).
§.§ Desiderata
Let 𝐀 = _m^3, dis : 𝐀→𝐆 be the map (u_0 , u_1 , u_2 ) ↦diag ( u_1 , u_2 , u_0 u_1 ^-1 , u_0 u_2 ^-1 ). Then dis identifies 𝐀 with the maximal torus in 𝐆. We let A = 𝐀 ( F ) and A ^ ∘ = A ∩ K denote the unique maximal compact open subgroup. For i = 0 , 1 , 2, let ϕ _ i, ε_i be the maps defined in <ref>. As before, we let
Λ = ϕ_0⊕ϕ_1⊕ϕ_2
denote the cocharacter lattice. The conventions for writing elements of Λ as introduced in <ref> are maintained.
The set Φ of roots of relative to 𝐀 is the set denoted Φ_F in <ref>. The half sum of positive roots is
δ = 2 ε_1 + ε_2 - (3/2) ε_0
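Indeed, the positive roots here are ε_1 - ε_2, 2ε_2 - ε_0, ε_1 + ε_2 - ε_0 and 2ε_1 - ε_0, whose sum is 4ε_1 + 2ε_2 - 3ε_0.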
We let α_1 = ε_1 - ε_2, α_2 = 2 ε_2 - ε_0 as our choice of simple roots. Then α_0 = 2 ε_1 - ε_0 the highest root. The groups W, W_aff, W_I, Ω, the set S _ aff are analogous to the ones defined in <ref>
We let ℓ : W_I→ℤ_≥ 0 denote the length function on W_I. The minimal length of elements in t(λ) W ⊂Λ⋊ W ≃ W_I can be computed using the formula
(<ref>). Set
w_0 = 0.9[ 1 /ϖ; 1; ϖ; - 1 ], w_1 = 0.9[ 1 ; 1 ; 1; 1 ] , w_2 = 0.9[ 1 ; 1; - 1 ; 1 ], ρ =
0.9[ 1; 1 ; ϖ ; ϖ ] .
These represent the elements t(α_0^∨ ) s_0, s_1, s_2, t(-ϕ_0) s_2 s_1 s_2 in W_I.
We let w_α_0 : = w_1 w_2 w_1 = ϖ^ϕ_1 w_0∈ N_G(A), which is a matrix representing the reflection s_α_0. For i = 0 , 1 ,2, let x_i : 𝔾_a→𝐆 be the root group maps
x_0: u ↦0.9[ 1 ; 1 ; ϖ u 1 ; 1 ], x_1: u ↦0.9[ 1 u ; 1 ; 1 ; - u 1 ], x_2: u ↦0.9[ 1 ; 1 u; 1 ; 1 ]
and let g_i : [ ] → G be the map κ↦ x_i ( κ ) w_i. If I denotes the Iwahori subgroup of K whose reduction modulo ϖ lies in the Borel of () determined by Δ = {ε_1 - ε_2, 2 ε_2 - ε_0}, then I w_i I = _κ∈ [_w_i] g_w_i ( κ ) . For w ∈ W_I such that w is the unique minimal length element in w W, choose a reduced word decomposition w = s_w, 1 ⋯ s_w, ℓ(w) ρ_w, where s_w,i∈ S_aff, ρ_w∈Ω, a reduced word decomposition. As usual, define
𝒳_w : [ ] ^ ℓ(w) →G / K
( κ_1 , …, κ_ℓ(w) ) ↦g_s_w, 1 ( κ_1 ) ⋯g_s_w , ℓ(w) ( κ_ℓ(w) ) ρ_w K
Then im(𝒳_w ) is independent of the choice of decomposition
of w by Theorem <ref>.
§.§ Spinor Hecke polynomial
The dual group of 𝐆 is GSpin_5, which has a four-dimensional representation called the spin representation. The highest coweight of this representation is ϕ_0 + ϕ_1 + ϕ_2 (see <ref> for arithmetic motivation), which is minuscule. By Corollary <ref>, the coweights are (2ϕ_0 + ϕ_1 + ϕ_2)/2 ±ϕ_1/2 ±ϕ_2/2. The Satake polynomial is therefore
𝔖_spin ( X ) = ( 1 - y_0 X ) ( 1 - y_0 y_1 X ) ( 1- y_0 y_2 X ) ( 1 - y_0 y_1 y_2 X ) ∈ [ Λ ] ^ W [X]
where y_i = e ^ ϕ_i∈ℤ [ Λ]. Let ℛ = ℤ[ q ^ ±1/2 ], and let 𝒮 : ℋ_ℛ ( K \ G / K ) →ℛ [ Λ] ^W denote the Satake isomorphism (<ref>). For c ∈ℤ∖ 2ℤ, the polynomial ℌ_spin, c (X) is defined so that 𝒮 ( ℌ_spin,c ( X ) ) = 𝔖_spin(q^-c/2 X ).
For c ∈ℤ∖ 2ℤ,
ℌ_spin,c(X) = (K) - q ^ -(c+3)/2 (K ρ K ) X
+ q ^-(c+2) ( ( K w_0ρ ^2 K ) + ( q^2 + 1 ) ( K ρ^2 K ) ) X^2
- q ^ -(3c+3)/2 ( K ρ^3K ) X^3 + q^-2c ( K ρ ^ 4 K ) X^4∈ℋ_ℤ[q^-1]( K \ G / K ) [X] .
where the words appearing in each Hecke operator are of minimal possible length.
We have
𝔖_spin(X) = 1 - e^W(1,1,1) X + ( e ^ W(2,2,1) + 2 e ^W(2,1,1) ) X ^ 2 - e ^ W(3,2,2) X^3 + e ^ (4,2,2) X^4 .
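For instance, the coefficient of X here is -(y_0 + y_0 y_1 + y_0 y_2 + y_0 y_1 y_2); in coordinates these four exponents are (1,0,0), (1,1,0), (1,0,1) and (1,1,1), which form the single Weyl orbit W(1,1,1), so this coefficient is indeed -e^W(1,1,1).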
The lengths of the cocharacters appearing as exponents in the coefficients of 𝔖_spin(X) are computed using the formula (<ref>), and the corresponding words are easily found. The leading coefficient (see Definition <ref>) of K ϖ^λ K for λ∈Λ^+ is q ^ - ⟨λ , δ⟩ (Corollary <ref>) shifted by an appropriate power of q^-c/2, and these are easily computed. The coefficient of the non-leading term (K ρ^2 K) in the monomial X^2 is computed as follows. Consider the Weyl orbit diagram
(2,0,1)[r, "s_1"] (2,1,0)[r, "s_2"] (2,1,2)[r, "s_1"] (2,2,1)
of (2,2,1).
From (<ref>)
and Theorem <ref>, we see that | K w_0ρ^2 K / K | = q + q^2 + q^3 + q^4.
Since the leading coefficient of the Satake transform of ( K w_0ρ^2 K ) is q^⟨ (2,2,1), δ⟩ = q^2, the number of cosets in K w_0ρ^2 K / K whose shape lies in the W orbit of (2,2,1) is
∑ _μ∈ W (2,2,1) q ^ ⟨ (2,2,1) + μ , δ⟩ = 1 + q + q ^3 + q ^4 .
Thus the required coefficient is q^-c multiplied with 2 - q^-2( q + q ^2 + q^3 + q^4 - ( 1 + q + q^3 + q^4 ) ) = q^-2 ( q^2 + 1 ).
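In detail: the W-orbit of (2,2,1) is {(2,2,1), (2,1,2), (2,1,0), (2,0,1)}, and ⟨ (2,2,1) + μ , δ⟩ takes the values 4, 3, 1, 0 respectively, giving 1 + q + q^3 + q^4 as above; since (q + q^2 + q^3 + q^4) - (1 + q + q^3 + q^4) = q^2 - 1, the required coefficient is q^-c( 2 - q^-2(q^2 - 1) ) = q^-c(1 + q^-2) = q^-(c+2)(q^2+1).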
The formula for ℌ_spin,c is again well known, e.g., see <cit.> or <cit.> where c is taken to be - 3. We have however included a proof for completeness and to provide a check on our computations.
The dual group of 𝐆 also has a 5-dimensional representation called the standard representation. Its highest coweight is ϕ_1 and its Satake polynomial is
𝔖_std(X) = ( 1- X )( 1 - y_1^-1 X ) ( 1 - y_1 X ) ( 1 - y_2 ^-1 X ) ( 1- y_2 X ) .
Cf. the polynomial 𝔖_bc(X) of <ref>. See <cit.> for a discussion of this L-factor.
§.§ Mixed coset decompositions
Let 𝐇 be the subgroup of 𝐆 generated by 𝐀 and the root groups of ±α_0, ±α_2. Then 𝐇≅GL_2×_𝔾_mGL_2, the fiber product being over the determinant map.
Explicitly, we get an embedding
ι : → ( ( [ a b; c d ] ) , ( [ a_ 1 b_1; c _ 1 d_1 ] ) )
↦ ( [ a b; a_1 b_1; c d; c_1 d_1 ] )
Set H = 𝐇(F), U = H ∩ K, W_H = ⟨ s_0 , s_2⟩≅ S_2× S_2 the Weyl group of H and Φ_H := {±α_0, ±α_2} the set of roots of 𝐇.
For convenience in referring to the components of H, we let H_1, H_2 denote GL_2(F) (so that H = H_1×_F^× H_2) and pr_i : H → H_i for i =1,2 denote the natural projections onto the two component groups of H. To describe the twisted H-restrictions of the spinor Hecke polynomial, we introduce the following elements in G:
τ_0 = 0.9[ 1 ; 1 ; 1 ; 1 ], τ_1 =
0.9[ ϖ 1; ϖ 1; 1 ; 1 ],
As in <ref>, we will need to know the strucuture of H_τ_1 = H ∩τ_1 K τ_1^-1,
Let = [ 1; 1 ]∈_2(F) and define
: _2(F) ↪ H
h
↦ ( h, h
) .
Let 𝒳^∘ = (_2(_F), 𝒳 = (_2(F) ) and
J be the compact open subgroup of H_τ_1 whose reduction modulo ϖ lies in the diagonal torus of 𝐇().
H _ τ_1 = 𝒳 ^ ∘ J ⊊ U. In particular, H K and H τ_1 K are disjoint.
Let h = ( h_1 , h_2 ) ∈ H and say h_i : = ( [ a_i b_i; c_i d_i ] )
where a_i, b_i, c_i , d_i∈ F. Then h ∈ H_τ_1 implies that
a_1, a_2 , c_1, c_2 , d_1 , d_2∈_F and a_1 - d_2 ,
a_2 - d_1 , b_1 - c_2, b_2 - c_1∈ϖ_F .
It follows that 𝒳 , J ⊂ H_τ_1⊂ U.
In particular,
H_τ_1⊃𝒳 J.
For the reverse inclusion, say h = (h_1, h_2 ) ∈ H_τ_1. Since (h_1) ∈𝒳⊂ H_τ_1, we see that (h_1', h_2') := (h_1^-1) · h lies in H_τ_1. By construction, we have h_1 ' = 1_H_1. The conditions of the membership (1_H_1 , h_2 ' ) ∈ H_τ_1 force that (1_H_1 , h_2' ) ∈ J. For the second claim, note that H_τ_1≠ U since A^∘⊄H_τ_1 and invoke Lemma <ref>.
For w ∈ W_I, let ℛ(w) denote the double coset space U \ K w K / K. As before, we will only write the representative elements when describing ℛ(w), and these representatives are understood to be distinct.
We have
* ℛ ( ρ ) = {ϖ^(1,1,1) , τ_1},
* ℛ ( w _0ρ ^ 2 ) = {ϖ ^(2,2,1) , ϖ^(2,1,2) , ϖ^ (1,1,0)τ_1}.
Since H K and H τ_1 K are disjoint, H_τ_1⊂ U and U \ H / U ≃ W_H\Λ, the listed elements represent distinct classes in their respective double coset spaces. To show that they represent all classes, we study the orbits on K w K / K using Theorem <ref>.
∙ Let w = ρ. We have K w K / K = _ σim ( 𝒳_σ ) for σ∈{ w, w_2 w, w_1 w_2 w, w_2 w_1 w }. To obtain the mixed representatives, we need to analyze the U-action on the cells corresponding to the words σ_0 = ρ and σ_1 = w_1 w_2ρ. The first is a singleton and gives ϖ^(1,1,1) (after conjugating by w_α_0 w_2). As for σ_1, we have
im(
𝒳_σ_1 ) = 0.9* [ ϖ a y ; 1 ; 1 ; -a ϖ ] K a , y ∈ []
We can eliminate y by a row operation from U, and conjugating by w_2 gives us a matrix with diagonal ϖ ^(1,1,1). If a = 0, we obtain ϖ^(1,1,1) and if a ≠ 0, we conjugate by diag(1, 1, a , a) to obtain τ_1.
∙ Let w = w_0ρ ^2. From diagram (<ref>), we have K w K / K = _σim ( 𝒳_w ) for σ∈{ w, w_1 w, w_2w_1 w, w_1 w_2 w_1w } and it suffices to analyze the cells corresponding to σ_0 = w, σ_1 = w_1 w, σ_2 = w_1 w_2 w_1 w. These cells are as follows:
im(𝒳_σ_0) = 0.9* ([ 1; ϖ ; x ϖ ϖ^2 ; ϖ ]) K x ∈, im( 𝒳_σ_1) = 0.9* ([ ϖ a; 1 ; ϖ ; x ϖ -
a ϖ ϖ ^2 ]) K
a, x ∈ [] ,,
im ( 𝒳_σ_2 ) =
0.9* ([ ϖ^2 a_1 ϖ a a_1 + y+ x ϖ a ϖ; ϖ a ; 1 ; - a_1 ϖ ]) K a , a_1, x , y ∈ [] .
The σ_0-cell obviously leads to ϖ^(2,2,1). For the σ_1-cell we can eliminate x ϖ, conjugate by w_2. If a = 0, we have ϖ^(2,1,2) at our hands and if not, then conjugating by diag(1,1,a,a) gives us ϖ^(1,0,1)τ_1. Now observe that since () ∈ H_τ_1 is a lift of s_0 s_2, we have
U ϖ^(1,0,1)τ_1 K = U ϖ^(1,1,0)τ_1 K .
Finally for the σ_2-cell, begin by eliminating aa_1 + y + ϖ x. Next note that conjugation by w_2 swaps a_1 and a. Using row and column operations, we can assume that a_1 = 0. If a = 0, we end up with ϖ^(2,2,1) and if not, then conjugation by diag(1,1,a,a) gives us ϖ^(1,1,0)τ_1.
§.§ Schwartz space computations
Let X : = F^2× F^2, considered as a totally disconnected topological space. We view elements of X as pairs of 2 × 1 column vectors. We let H_1× H_2 act on X on the right via
( u⃗ , v⃗ ) · (h_1, h_2) ↦ ( h_1 ^ - 1 u⃗ , h _ 2 ^ - 1 v⃗ ), for u⃗ , v⃗∈ F^2, h_1∈ H_1, h_2∈ H_2 .
Via the natural embedding H ↪ H_1× H_2, we obtain an action of H on X.
Let 𝒪 be an integral domain that contains ℤ[ q ^-1 ] and let 𝒮_X = 𝒮_X,𝒪 be the 𝒪-module of
all functions ξ : X →𝒪 which are locally constant and compactly supported on X. Then 𝒮_X has an induced left action 𝒮× H →𝒮 via (h, ξ ) ↦ξ ( ( - ) h ), which makes 𝒮 a smooth representation of H. Let Υ_H be the set of all compact open subgroups of H and
M_H, 𝒪 : 𝒫(H, Υ _ H ) →𝒪-Mod
denote the functor V ↦𝒮_X^V associated with 𝒮 (see Definition <ref>). For u, v, w , x ∈ℤ, let Y_u := ϖ^u 𝒪_F⊂ F,
Y_u,v = Y_u× Y_v⊂ F^2 and X _u,v,w,x := Y_u,v× Y_w, x⊂ X. We denote
ϕ_(u,v,w,x) := ch( X_u,v,w,x ) , ϕ̅_(u,v,w,x) = ϕ_(-u,-v,-w,-x)
where ch(Y) denotes the characteristic function of Y ⊂ X. These belong to 𝒮. We also denote ϕ : = ϕ_(0,0,0,0) for simplicity. The element ϕ will serve as the source bottom class of the zeta element.
We have
* [ U ϖ^(1,1,1) U ]_* ( ϕ ) = ϕ̅_(1,1,1,1) + q ( ϕ̅_(1,1,0,0) + ϕ̅_(0,0,1,1) ) + q^2ϕ,
* [ U ϖ^(2,2,1) U ]_*( ϕ ) = ϕ̅_(2,2,1,1) + ( q - 1 ) ϕ̅_(1,1,1,1) + q^2ϕ̅_(0,0,1,1),
* [ U ϖ^(2,1,2) U ]_*( ϕ ) = ϕ̅_(1,1,2,2) + ( q - 1 ) ϕ̅_(1,1,1,1) + q^2ϕ̅_(1,1,0,0).
If we denote U_1 = U_2 := GL_2( 𝒪_F ) and pick any λ = (a_0, a_1, a_2 ) ∈Λ, we have
[ U ϖ ^λ U ]_* (ϕ_(u,v,w,x) ) = [U_1 t_1 U_1 ]_* (ϕ_(u,v) ) ⊗ [U_2 t_2 U_2]_* ( ϕ_(w,x) )
where t_i = diag (ϖ^ a_i, ϖ ^ a_0 - a_i ) for i = 1, 2 and ϕ_(a,b) : F^2→𝒪 denotes the characteristic function of Y_a× Y_b for a, b ∈ℤ. The resulting functions can be computed using the decomposition recipe of Theorem <ref>. See also
<cit.> for a more general result.
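As a consistency check, the first of the three formulas above agrees with this factorization: for λ = (1,1,1) one has t_1 = t_2 = diag(ϖ, 1), and the formula can be rewritten as the pure tensor (ϕ̅_(1,1) + q ϕ̅_(0,0)) ⊗ (ϕ̅_(1,1) + q ϕ̅_(0,0)), where ϕ̅_(a,b) := ϕ_(-a,-b), whose expansion is ϕ̅_(1,1,1,1) + q(ϕ̅_(1,1,0,0) + ϕ̅_(0,0,1,1)) + q^2ϕ.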
To facilitate checking the trace criteria for one of the twisted restrictions, we do a preliminary calculation. Let Mat_2 × 2 (F) be the F-vector space of 2 × 2 matrices over F. We make the identification
: X Mat_2 × 2 ( F ) 1.1( ( [ u_1; u_2 ] ), ( [ v_1; v_2 ] ) )
↦ ( [ u_1 v_2; u_2 v_1 ] ) .
and define a right action
Mat_2 × 2(F) ×_2(F) →Mat_2 × 2(F) (h, M) ↦ h^-1 M
Then for all h ∈_2(F) and (u⃗, v⃗ ) ∈ X,
( (u⃗ , v⃗ ) · ( h ) ) = ( u⃗ , v⃗ ) · h
where is as in (<ref>) and the action on the right hand side is (<ref>). Let ψ∈𝒮_X denote the function such that ψ∘^-1 : Mat_2 × 2 (F) →𝒪 is the characteristic function of diag(ϖ, ϖ)^-1·_2 ( _F ).
Let
𝔥_1' : = q ( U H_τ_1 ) - ( U ϖ^(1,1,0) H_τ_1 ) + ( U ϖ ^ ( 2, 1,1) H_τ_1 ) ∈𝒞_𝒪(U \ H / H_τ_1 ).
Then 𝔥_1,*'(ϕ) = ψ.
By Lemma <ref>, U H_τ_1 = U 𝒳^∘, U ϖ^(2,1,1) H_τ_1 = U ϖ^(2,1,1)𝒳^∘ and U ϖ^(1,1,0)
H_τ_1
= U ϖ^(1,1,0)𝒳 ^ ∘,
where we used that ϖ^(1,1,0) J ϖ^-(1,1,0)⊂ U in the last equality.
Moreover ϖ^(1,1,0) = (diag(ϖ, 1) ) and ϖ^(2,1,1) = ( diag(ϖ , ϖ ) ) and U ∩𝒳 = 𝒳^∘. A straightforward analogue of Lemma <ref> implies that we have a bijection
𝒳^∘\𝒳^∘ h 𝒳^∘ U \ U (h) 𝒳^∘
𝒳^∘γ ↦ U (γ)
Therefore 𝔥_1,* '( ϕ ) =
( q (ϕ) - T_ϖ ^t·(ϕ) + S_ϖ^t·(ϕ) ) ∘ where T_ϖ, S_ϖ are the Hecke operators of _2(F) given by the characteristic functions of _2 ( _F )-double cosets of diag(1, ϖ ), diag(ϖ, ϖ ) respectively, T_ϖ^t, S_ϖ^t denotes their transposes and the action of these operators is via (<ref>)
Now (ϕ) is just the characteristic function of Mat_2× 2 ( _F ). A
straightforward computation shows that the function
q (ϕ) - T_ϖ ^t·(ϕ) + S_ϖ^t·(ϕ)
on Mat_2 × 2(F) vanishes on any matrix whose entries are not in ϖ^-1_F or whose determinant is not in ϖ^-2_F^×. The claim follows.
A very closely related computation appears in <cit.> in the context of Kato's Euler system, which is what inspired the choice of 𝔥_1 above.
§.§ Zeta elements
Following the discussion in <ref>, we introduce T = F ^ ×, C = 𝒪_F^× and D = 1 + ϖ𝒪_F⊂ C. We let ν = sim∘ι : H → T be the map that sends (h_1 , h_2 ) to the common determinant of h_1 , h_2. For the zeta element problem, we set
* G̃ = G × T,
* ι_ν = ι×ν : H →G̃,
* U and K̃ : = K × C as bottom levels
* x_U = ϕ = ϕ_(0,0,0,0)∈ M_H,𝒪(U) as the source bottom class,
* L̃ = K × D as the layer extension
of degree q - 1,
* ℌ̃_c = ℌ_spin, c (Frob ) ∈𝒞_ (
K̃\G̃ / K̃ ) as the Hecke polynomial.
There exists a zeta element for (x_U , ℌ̃ _c , L̃ ) for all c ∈ℤ∖ 2ℤ.
Denote ϱ = (ρ , ϖ) ∈G̃. By Proposition <ref>, we see that
ℌ̃_c≡ ( 1 + 2 ϱ^2 + ϱ^4 ) ( K̃ ) - ( 1 + ϱ^2 ) ( K̃ϱK̃ ) + (K̃ w_0ϱ^2K̃) mod q-1
where we view w_0∈G̃ via 1_G×ν.
For i = 0, 1, let g_i = (τ_i, 1_T). By Proposition <ref>, we see that H \ H · ( ℌ̃ _c ) / K̃ = { H g_0K̃ , H g_1K̃} .
So it suffices to consider restrictions with respect to g_0 and g_1.
Let 𝔥_i denote the (H,g_i)-restriction of ℌ̃_c. Observe that
H_g_i = H ∩ g_iK̃ g_i^-1 = H ∩τ_i K τ_i^-1 ,
so that 𝔥_i∈𝒞_𝒪( U \ H / H _τ_i ). Let z = ( ( [ ϖ; ϖ ] ) , ( [ ϖ; ϖ ] ) ) ∈ H. Invoking Proposition <ref> again, we see that
𝔥_0 ≡ ( 1 + 2 z + z^2 ) ( U ) - ( 1 + z ) ( U ϖ^(1,1,1) U) + ( U ϖ^(2,2,1) U ) + ( U ϖ^(2,1,2) U )
𝔥_1 ≡𝔥_1'
modulo q- 1. Note that the action of z on ϕ in the covariant convention is by its inverse. To avoid writing minus signs, let us denote z_0 = z^-1. Then by Lemma <ref>,
𝔥_0,*(ϕ) ≡ ( 1 + 2z_0 + z_0^2) ϕ - ( 1 + z_0 ) ( z_0·ϕ + ϕ̅_(1,1,0,0) + ϕ̅_(0,0,1,1) + ϕ ) +
(
z_0·ϕ̅_(1,1,0,0) + ϕ̅_(0,0,1,1)) + ( z_0·ϕ̅ _(0,0,1,1) + ϕ̅_(1,1,0,0) )
≡ 0 mod q-1
On the other hand, 𝔥_1,*(ϕ) ≡𝔥_1,*'(ϕ) = ψ. It is easily seen that the stabilizer of every point in supp(ψ) in H_τ_1 reduces to the identity modulo ϖ. In particular, these stabilizers are contained in the subgroup H ∩ g_iL̃ g_i^-1 of H_τ_1. So by Theorem <ref>, ψ is in the image of the trace map
_* : M_H, 𝒪 (H ∩ g_iL̃ g_i ^-1 )
→ M_H, 𝒪(H_τ_1) .
We now invoke Corollary <ref>.
|
http://arxiv.org/abs/2409.03564v1 | 20240905142051 | Toricity in families of Fano varieties | [
"Lena Ji",
"Joaquín Moraga"
] | math.AG | [
"math.AG",
"14M25 (Primary), 14E30, 14J10, 14J45 (Secondary)"
] |
Toricity in families of Fano varieties
Lena Ji
Department of Mathematics, University of Illinois Urbana-Champaign, 273 Altgeld Hall, 1409 W. Green Street, Urbana, IL 61801
[email protected]
Joaquín Moraga
UCLA Mathematics Department, Box 951555, Los Angeles, CA 90095-1555, USA
[email protected]
§ ABSTRACT
Rationality is not a constructible property in families. In this article, we consider stronger notions of rationality and study their behavior in families of Fano varieties. We first show that being toric is a constructible property in families of Fano varieties.
The second main result of this article concerns an intermediate notion that lies between toric and rational varieties, namely cluster type varieties.
A cluster type ℚ-factorial Fano variety contains an open dense algebraic torus, but the variety does not need to be endowed with a torus action.
We prove that, in families of ℚ-factorial terminal Fano varieties, being of cluster type is a constructible condition. As a consequence, we show that there are finitely many smooth families parametrizing n-dimensional smooth cluster type Fano varieties.
§ INTRODUCTION
Let 𝒳→ T be a family of projective varieties over ℂ. The subset
T_ rational := {closed t ∈ T |𝒳_t is a rational variety, i.e., 𝒳_t is birational to projective space}
is (the set of closed points of) a countable union of locally closed subsets of T
(see <cit.>).
In smooth families, T_ rational is a countable union of closed subsets <cit.>; however, for families with singular members, these subsets are in general neither open nor closed <cit.>. Furthermore,
T_ rational⊆ T is not in general a constructible subset,
i.e., it is not a finite union of locally closed subsets (in the Zariski topology) <cit.>.
There are also families where T_ rational = T, but the generic fiber is irrational as a k(T)-variety (see, e.g., <cit.>).
From the perspective of the Minimal Model Program (MMP) it is natural to investigate the behavior of rationality in families of Fano varieties.
In the case of singular degenerations of Fano varieties, Birkar, Loginov, and Qu have shown that the dimension and log discrepancies of general fibers control the irrationality of the special fiber <cit.>.
One important class of rational varieties is the class of toric varieties, i.e., those that contain a dense algebraic torus whose action on itself extends to the whole variety.
Our first result is that being toric is a constructible condition in families of (klt) Fano varieties:
Let 𝒳→ T be a family of Fano varieties over ℂ.
Then, the set
T_ toric := {closed t ∈ T |𝒳_t is a toric variety}
is a constructible subset of T.
It is expected that the previous theorem should hold without the Fano assumption.
We note that being toric is neither an open nor a closed condition (Examples <ref> and <ref>).
Being toric places strong restrictions on the geometry of a variety.
For instance, smooth toric Fano varieties do not deform <cit.> (see also <cit.>), so Theorem <ref> is automatic in the smooth case.
Thus, we consider an intermediate class: cluster type varieties (see Definition <ref>). This is a class of rational varieties generalizing toric varieties, and was introduced by Enwright, Figueroa, and the second author in <cit.>.
For example, smooth toric varieties are covered by tori, whereas smooth cluster type Fano varieties contain an open dense torus (see <cit.>).
We prove the following result about cluster type varieties in families of Fano varieties:
Let 𝒳→ T be a family of ℚ-factorial terminal Fano varieties over ℂ.
The set
T_ cluster := {closed t∈ T |𝒳_t is a cluster type variety}
is a constructible subset of T.
The -factorial terminal assumption in the previous theorem is vital for our proof to work (see Remark <ref>).
Many properties of families of terminal Fano varieties do not hold if we drop the condition
on the singularities (see, e.g., <cit.>).
As a consequence of Theorem <ref> and a result of Kollár–Miyaoka–Mori <cit.>, we get the following corollary about cluster type smooth Fano varieties.
Let n be a positive integer.
Then, there are finitely many smooth projective morphisms f_i𝒳_i→ T_i parametrizing
n-dimensional smooth cluster type Fano varieties over ℂ.
By parametrizing, we mean here that every n-dimensional smooth cluster type Fano appears as the closed fiber of some f_i, and, moreover, every closed fiber of each f_i is in this class of varieties.
This notion is stronger than boundedness,
but weaker than moduli, since the same variety could appear as a fiber multiple times.
Furthermore, from the proof of Theorem <ref>, it follows that all fibers of each f_i in Corollary <ref> admit the same combinatorial structure, in the sense that a certain birational transformation to a toric pair can be performed in the whole family (see Lemma <ref>).
For smooth rational Fano varieties, it is not clear whether the analogous statement to Corollary <ref> should be true. In the toric case, the analogous statement was already known: smooth toric Fano varieties correspond to smooth reflexive polytopes (see, e.g., <cit.>), so in each dimension n there are only finitely many up to isomorphism.
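For instance, in dimension two there are exactly five such surfaces: ℙ^2, ℙ^1×ℙ^1, and the blow-ups of ℙ^2 at one, two, or three torus-fixed points.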
Note that smooth cluster type Fanos do deform, so unlike in the toric case,
the parametrizing varieties T_i in Corollary <ref> are in general positive-dimensional.
For example, every smooth del Pezzo surface of degree ≥ 2 is of cluster type by <cit.> and <cit.>.
§.§ Outline
In Section <ref>, we begin by recalling definitions and preliminary results on singularities of pairs, toric log Calabi–Yau pairs, and cluster type log Calabi–Yau pairs. We prove several lemmas that we will apply in later sections.
Next, in Sections <ref> and <ref>, we prove versions of Theorems <ref> and <ref> for families of Fano varieties with a log Calabi–Yau pair structure. Then, in Section <ref>, we use our results from Sections <ref> and <ref> to prove the main theorems. Finally, in Section <ref>, we end the article with examples of different behaviors of toricity in families, and we pose some related questions.
§.§ Acknowledgements
We thank Stefano Filipazzi, János Kollár, and Burt Totaro for helpful conversations and comments.
Part of this work was carried out while the authors were visiting the Yau Mathematical Sciences Center at Tsinghua University, and we thank Caucher Birkar and Spring Ma for their hospitality.
§ PRELIMINARIES
We work over an algebraically closed field of characteristic 0.
We write Σ^n for the sum of the coordinate hyperplanes of ℙ^n. Throughout, a Fano variety will mean a variety X such that -K_X is ample and X has klt singularities.
In this section, we introduce preliminary results on singularities of the minimal model program, toric and cluster type log Calabi–Yau pairs, families of pairs, and dlt modifications.
§.§ Singularities of the MMP
A pair (X,B) consists of a normal quasi-projective variety X and an effective -divisor B on X such that K_X+B is Q-Cartier. If {B_i | i ∈ I} are the prime components of B, then the strata of (X,B) are the irreducible components of the intersections ⋂_i ∈ J, J ⊂ I B_i.
The index of a pair (X,B) is the smallest positive integer m such that m(K_X + B) is Cartier.
Let (X,B) be a pair. Given a projective birational morphism π: Y → X from a normal quasi-projective variety and a prime divisor E on Y, the log discrepancy of (X, B) at E is
a_E(X, B) := 1 - coeff_E(π^*(K_X + B)).
A pair (X,B) is terminal (resp. canonical, Kawamata log terminal (klt), log canonical (lc)) if a_E(X, B) > 1 (resp. a_E(X, B) ≥ 1, a_E(X, B) >0 and ⌊ B ⌋ = 0, a_E(X, B) ≥ 0) for every exceptional divisor E over X.
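For example, if X is a smooth surface, B = 0, and π : Y → X is the blow-up of a closed point with exceptional curve E, then K_Y = π^*K_X + E, so a_E(X, 0) = 1 - (-1) = 2; in particular, smooth varieties are terminal.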
Let (X,B) be a pair and f Y → X a birational morphism. The log pullback of B is the (not necessarily effective) Q-divisor B_Y defined by K_Y+B_Y = f^*(K_X+B) and f_* B_Y = B.
Let (X,B) be a log canonical pair. A divisor E over X is a log canonical place (resp. canonical place, non-canonical place, terminal place, non-terminal place) if a_E(X, B) = 0 (resp. a_E(X,B) = 1, a_E(X,B) < 1, a_E(X,B) > 1, a_E(X,B)≤ 1). The center of E on X is the closure of its image in X, and E is exceptional if its center on X is not a divisor.
Let ϕ Y X be a rational map of normal quasi-projective varieties. We say ϕ contracts a Weil divisor E ⊂ Y if the center of E on X is not a divisor. We say ϕ extracts a Weil divisor F ⊂ X if ϕ^-1 contracts F.
The map ϕ is a birational contraction if it is birational and does not extract any divisors.
An lc pair (X,B) is divisorially log terminal (dlt) if the coefficients of B are at most one and if there is an open subset U ⊂ X such that
* U is smooth and B|_U is an snc divisor, and
* if E is a divisor over X with a_E(X, B) = 0, then the center of E on X intersects U.
If (X, B) is an lc pair, a Q-factorial dlt modification of (X,B) is a projective birational morphism π Y → X from a Q-factorial normal variety Y such that Ex(π) is a divisor, π only contracts log canonical places, and the log pullback (Y, B_Y) is dlt. A Q-factorial dlt modification exists for any lc pair by <cit.>.
Let X be a normal quasi-projective variety.
* We say X is of Fano type if there exists an effective Q-divisor B such that (X, B) is a klt pair and -(K_X+B) is big and semiample.
* A log Calabi–Yau (log CY) pair is an lc pair (X,B) such that K_X + B ∼_ Q 0.
X is Fano type if and only if there exists an effective Q-divisor B on X such that (X,B) is a klt pair and -(K_X+B) is ample <cit.>.
If X is projective, then by <cit.> an lc pair (X,B) is log CY if and only if K_X + B ≡ 0.
Two pairs (X_1, B_1) and (X_2, B_2) are crepant birational equivalent, written (X_1, B_1)≃_ cbir (X_2, B_2), if there exist proper birational morphisms f_i Y → X_i such that the log pullbacks of B_i on Y are equal.
§.§ Toric log Calabi–Yau pairs
Now we define toric and cluster type log Calabi–Yau pairs.
A pair (X,B) is said to be toric
if X is a toric variety
and B is an effective torus-invariant divisor.
Recall that if X is a normal toric variety and B is the reduced sum of the torus-invariant divisors, then K_X + B ∼ 0 <cit.> and the pair (X, B) is lc <cit.>.
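For example, for X = ℙ^n and B = Σ^n the sum of the n+1 coordinate hyperplanes, K_ℙ^n + Σ^n∼ (-(n+1) + (n+1))H = 0 for a hyperplane H, and the pair (ℙ^n, Σ^n) is log canonical since Σ^n is a simple normal crossing divisor with coefficients one.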
In the case of log CY pairs, the following definitions generalize the notion of being toric.
Let (X,B) be a log CY pair.
* We say that (X,B) is toric
if X is a normal toric variety and B is the reduced
sum of the torus-invariant divisors.[For log CY pairs, this condition is equivalent to (X,B) being a toric pair as in Definition <ref>.]
* We say that (X,B) is of cluster type if there exists a crepant birational
map ϕ: (ℙ^n,Σ^n) ⇢ (X,B) such that 𝔾_m^n ∩ Ex(ϕ)
contains no divisors.
* We say that (X,B) is log rational if (X,B)≃_ cbir (T,B_T) for T a normal projective toric variety
and B_T the reduced sum of the torus-invariant divisors;
this condition is equivalent to (X,B) ≃_ cbir (ℙ^n,Σ^n).
In a similar vein, we say that an algebraic variety X is of cluster type (resp. log rational) if it admits a boundary divisor B for which (X,B) is a cluster type
(resp. log rational) log CY pair.
Every toric log CY pair (X,B) is of cluster type, as it admits a torus-equivariant
crepant birational map (ℙ^n,Σ^n) ⇢ (X,B) that is an isomorphism
on 𝔾_m^n.
Thus, the following implications hold for a log CY pair (X, B):
(X,B) is toric ⇒ (X,B) is of cluster type ⇒ (X,B) is log rational.
The reverse implications do not hold; see Example <ref> and <cit.>.
If (X,B) is a log CY pair of index one, then (X,B) admits a toric
model if and only if the pair (X,B) has birational
complexity zero <cit.>.
Since crepant birational equivalences preserve the indices of log CY pairs (see, e.g., <cit.>),
any log rational log CY pair necessarily has index one:
If (X,B) is a log CY pair that is log rational,
then K_X+B∼ 0. In particular, B is a reduced Weil divisor.
We now state several results about toric log CY pairs that we will use in the proofs of Theorems <ref> and <ref>. First, we recall some invariants of pairs that
Brown–McKernan–Svaldi–Zong <cit.> used to characterize toric log CY pairs:
Let (X,B) be a pair.
* A decomposition Σ of B is a finite formal sum ∑_i=1^k α_i B_i = B where each α_i ≥ 0 and B_i is a reduced (not necessarily irreducible) effective divisor.
* The Picard rank of Σ is ρ(Σ) := dim span{B_i | 1 ≤ i ≤ k}, where the span is taken inside Cl_ Q X := Cl X⊗_ Z Q. The norm of Σ is |Σ| := ∑_i=1^k α_i. The complexity of the decomposition (X,B;Σ) is c(X,B;Σ) := dim X + ρ(Σ) - |Σ|.
If (X,B) is a toric log CY pair and Σ is the decomposition of B given by the sum of its prime divisors, then c(X,B;Σ) = 0 (see, e.g., <cit.>).
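For instance, for X = ℙ^2 with B the sum of the three coordinate lines and Σ its decomposition into prime divisors, one has dim X = 2, ρ(Σ) = 1 and |Σ| = 3, so c(X,B;Σ) = 0; by contrast, for B = C a smooth cubic and Σ = C, one has c(ℙ^2, C; Σ) = 2 + 1 - 1 = 2, and indeed (ℙ^2, C) is a log CY pair that is not toric.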
Let (X,B) be a log CY pair and Σ a decomposition of B. Then c(X,B;Σ) ≥ 0.
If c(X,B;Σ) < 1, then there exists a divisor D ≥⌊ B ⌋ such that (X, D) is a toric log CY pair, and all but ⌊ 2 c(X,B;Σ) ⌋ components of D are elements of the set {B_i | 1 ≤ i≤ k}.
In particular, (X, ⌊ B⌋) is a (not necessarily log CY) toric pair.
The following lemma is well known to the experts (see, e.g., <cit.>).
Let (X, B) be a projective toric log CY pair, and
let π: X ⇢ Y be a birational contraction
to a projective variety.
Then (Y, π_* B) is a toric log CY pair,
and π is a toric birational map.
The following result, which shows that certain birational modifications of toric log CY pairs are toric, will be useful in the proofs of both Theorems <ref> and <ref>.
Let (X,B) be a projective toric log CY pair,
let p Y X be a birational contraction that only contracts non-canonical places of (X,B),
and let (Y,B_Y) be the log pullback of (X,B) to Y.
Then (Y,B_Y) is a toric log CY pair, and Y X is a toric projective birational morphism.
The pair (X,B) has index one by <cit.>,
so the assumption on p implies it only contracts log canonical places of (X,B).
In particular, we may find a -factorial dlt modification (Z,B_Z)→ (X,B)
such that Z Y is a birational contraction.
Let Σ (resp. Σ_Z) be the decomposition of B (resp. B_Z) given by the sum of its prime divisors.
Then c(X,B;Σ)=0 since (X,B) is toric. Since
ρ(Σ_Z)=ρ(Σ)+ρ(Z/X)
and |Σ_Z|=|Σ|+ρ(Z/X),
we get
c(Z,B_Z;Σ_Z)=0,
so (Z,B_Z) = (Z, ⌊ B_Z ⌋) is a toric log CY pair by Theorem <ref>.
Applying Lemma <ref> to Z Y X, we conclude that (Y,B_Y) is a toric log CY pair, and hence that Y X is a toric birational contraction.
§.§ Families of varieties and pairs
Next, we will consider relative versions of Definition <ref>. A projective contraction is a morphism f X → T of normal quasi-projective varieties such that f_* O_X = O_T. A fibration is a projective contraction whose general fiber is positive dimensional.
Let (X,B = ∑α_i B_i) be a pair and f: X → T a projective contraction. For a closed point t∈ T, the log fiber is (X_t, B_t := ∑_B_i ⊅X_tα_i B_i|_X_t).
If X_t is normal and is not contained in any prime divisor in the support of B, then (X_t, B_t) is a pair.
A family of varieties is a flat projective contraction f X → T of normal quasi-projective varieties.
A family of pairs is a family of varieties X → T and a pair (X, B) such that the log fiber over any closed point is a pair.
* A family of Fano varieties is a family of varieties such that the fiber over every closed point is a Fano variety.
* A Fano fibration is a projective contraction X → T such that -K_X is ample over T.
* A Fano type morphism is a projective contraction X → T such that there exists a
Q-divisor B on X with (X,B) a klt pair, B big over T, and K_X + B ∼_T, Q 0.
* A family of log CY pairs is a family of pairs such that the log fiber over any closed point is a log CY pair.
* A log CY fibration is a projective contraction X→ T such that there exists a Q-divisor B on X with (X,B) an lc pair and K_X+B∼_T, Q 0.
Note that a family of Fano varieties is not necessarily a Fano fibration, but it becomes a Fano fibration after a finite étale base change, since the geometric generic fiber is Fano. On the other hand, a Fano fibration is not necessarily a family of Fano varieties over T, but it is over some dense open subset of the base. The same properties hold for log CY pairs.
Let X→ T be a Fano type morphism.
Let Y be a normal quasi-projective variety and Y→ X a projective birational morphism
that only contracts non-terminal places of X.
Then, the composition Y→ T is a Fano type morphism.
Let Δ be a boundary on X such that
(X,Δ) is klt and -(K_X+Δ) is big and semiample over T.
The log pullback Δ_Y of Δ is effective
as Y→ X only
contracts non-terminal places of (X,Δ),
so (Y,Δ_Y) is a klt pair.
Since -(K_Y+Δ_Y) is big and semiample over T,
the morphism Y→ T is of Fano type.
The following lemma is well known (see, e.g., <cit.>).
Let X→ T be a Fano type morphism, and
let (X,B) be a pair that is log CY over T, i.e., K_X + B ∼_T, Q 0.
Let Y be a normal quasi-projective variety and Y→ X a projective birational morphism
that only contracts non-canonical places of (X,B).
Then, the composition Y→ T is a morphism of Fano type.
For a rational map π: X ⇢ Z of normal varieties, we define the pullback of divisors as follows. Let X̃ be a normal variety that resolves π, with projections p_1: X̃→ X and p_2: X̃→ Z. Let D_Z be a Q-Cartier divisor on Z, and assume m D_Z is Cartier. Then we define the Q-divisor
π^* D_Z := (1/m) p_1_* p_2^* (mD_Z),
where p_2^* (mD_Z) is the Weil divisor associated to the Cartier divisor corresponding to m D_Z under the isomorphism
H^0(Z, O_Z(mD_Z)) ≅ H^0(Z, p_2_* p_2^* O_Z(mD_Z)) = H^0(X̃, p_2^* O_Z(mD_Z)).
If D_X is a prime divisor on X, we define its pushforward π_* D_X to be its strict transform on Z. We extend π_* linearly to Weil divisors on X. If π is a birational contraction, then the equality π_* π^* D = D holds as Q-divisors on Z.
Furthermore, if π is a birational contraction, then π_* induces a homomorphism Cl(X) → Cl(Z).
We now introduce some notions of class groups in families.
Let f X→ T be a fibration of Q-factorial normal varieties.
We say that f has surjective class group restrictions (resp. injective class group restrictions, isomorphic class group restrictions)
if the restriction homomorphism
Cl_ Q(X/T) := ( Cl(X) / f^* Cl(T) ) ⊗_ Z Q → Cl_ Q(X_t) := Cl(X_t) ⊗_ Z Q
is surjective (resp. injective, an isomorphism) for every closed point t∈ T.
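For example, if X_0 is a positive-dimensional Q-factorial normal projective variety and f: X = X_0×𝔸^m→ T = 𝔸^m is the second projection, then Cl(T) = 0 and Cl(X) ≅ Cl(X_0), so the restriction Cl_ Q(X/T) → Cl_ Q(X_t) ≅ Cl_ Q(X_0) is an isomorphism for every closed point t; such a constant family has isomorphic class group restrictions.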
The property of injective class group restrictions descends under birational contractions:
Let ϕ X → T and ψ Z → T be fibrations of Q-factorial varieties
such that the fibers X_t and Z_t are normal for every t ∈ T.
Assume there is a birational contraction
π X Z over T.
If ϕ has injective class group restrictions,
then ψ also
has injective class group restrictions.
Let D be a Q-divisor on Z
for which D|_Z_t∼_ Q 0 for some t∈ T.
The pullback π^*D to X satisfies
π^*D|_X_t∼_ Q 0,
so π^*D∼_ Q,T 0
because X→ T has injective class group restrictions.
Since π is a birational contraction, we conclude
D = π_*π^*D ∼_ Q,T 0.
The following properties of class groups of fibers of Fano type morphisms will be useful.
Let f X → T be a Fano type morphism of Q-factorial varieties.
* Let t_0∈ T be a closed point in the smooth locus of T. There is a smooth open neighborhood t_0∈ U⊂ T such that for any divisor D on X over T, if D|_X_t_0≡ 0, then D is Q-linearly trivial over an open neighborhood of t_0.[That is, there exists an open neighborhood U ∋ t_0 such that D|_X_U∼_ Q 0 in Cl_ Q(X_U/U) := ( Cl(X_U) / f|_X_U^* Cl(U) ) ⊗_ Z Q.]
* There exists a nonempty open subset V ⊂ T such that
f|_X_V X_V → V has injective class group restrictions.
* If f has surjective class group restrictions, then f has isomorphic class group restrictions
over a nonempty open subset of T.
First, we note that for any divisor D on X over T that is numerically trivial on X_t_0, there is an open neighborhood U_D∋ t_0 over which D|_X_U_D≡ 0. Indeed, over any affine open subset of T, we may run a D-MMP for X that terminates with a good minimal model by <cit.>, and the assumption that D is numerically trivial on X_t_0 implies that this MMP is trivial on an open neighborhood of t_0. Since f is of Fano type and D is numerically trivial over this open neighborhood, it is Q-linearly trivial over this open neighborhood.
For (<ref>), since X→ T is Fano type, _ Q(X/T) ≅_ Q(X/T)/≡_ Q, T is finitely generated (see <cit.>).
Thus, after replacing T by an open neighborhood of t_0 finitely many times, the conclusion holds.
For (<ref>), let t_0∈ T, and let t_0 ∈ U_0 be the open neighborhood from (<ref>). Then (_ Q(X/T) →_ Q(X_t)) ⊆(_ Q(X/T) →_ Q(X_t_0)) for any t∈ U_0. If the containment is strict for some t_1∈ U_0, apply (<ref>) to find an open neighborhood t_1 ∈ U_1 ⊂ U_0. Repeating this process, we obtain a sequence (_ Q(X/T) →_ Q(X_t_i+1)) ⊆(_ Q(X/T) →_ Q(X_t_i)) of subspaces of the finite-dimensional Q-vector space _ Q(X/T). This must stabilize after a finite number of steps, so after shrinking T a finite number of times, we obtain the open subset V in (<ref>).
Finally, (<ref>) is immediate from (<ref>).
§.§ Properties of cluster type log Calabi–Yau pairs and dlt modifications
Now we focus on log CY pairs of cluster type and prove several results that we will use in the proof of Theorem <ref>.
The next lemma is one of the reasons for the terminality assumption the theorem.
Let (X,B) be a pair of index one,
and let (X,Δ) be a terminal pair.
For every ϵ∈ (0,1),
the exceptional non-terminal places of (X,(1-ϵ)B+ϵΔ)
are log canonical places of (X,B).
If E is an exceptional non-terminal place
of (X,(1-ϵ)B+ϵΔ), then
a_E(X,(1-ϵ)B+ϵΔ) =
(1-ϵ)a_E(X,B) + ϵ a_E(X,Δ) ≤ 1.
As a_E(X,Δ)>1, we have ϵ a_E(X,Δ)>ϵ
and so (1-ϵ)a_E(X,B)< 1-ϵ.
Thus, a_E(X,B)<1.
Since (X,B) has index one, we get a_E(X,B)=0,
so E is a log canonical place of (X,B).
Next, we show that a dlt modification of a cluster type log CY pair is also of cluster type.
Let (X,B) be a log CY pair of cluster type.
Let ϕ Y X be a birational map
that only extracts log canonical places of (X,B),
and only contracts non-terminal places of (X,B).
Let (Y,B_Y) be the log pullback of (X,B) to Y.
Then (Y,B_Y)
is a log CY pair of cluster type.
First, the assumption that ϕ only contracts non-terminal places implies that B_Y is effective, so (Y, B_Y) is a pair.
Let π (^n,Σ^n) (X,B) be a crepant birational map
such that 𝔾_m^n ∩ Ex(π) has codimension at least two on the torus.
By the assumption that ϕ only extracts log canonical places, the divisorial part of the exceptional locus of ϕ^-1 X Y
is contained in supp(B).
We conclude that
ϕ^-1∘π (^n,Σ^n) (Y,B_Y) is a crepant birational map for which
𝔾_m^n ∩ Ex(ϕ^-1∘π) has codimension at least two in the torus.
So (Y,B_Y) is of cluster type.
The property of being a cluster type log CY pair also descends under dlt modifications.
Let (Y,B_Y) be a log CY pair
of cluster type.
If ϕ Y X is a birational contraction that only contracts
log canonical places of (Y,B_Y),
then (X,ϕ_* B_Y) is a log CY pair of cluster type.
Write B ϕ_* B_Y.
First note that ϕ (Y,B_Y) (X, B) is crepant birational by the negativity lemma.
Let ψ (^n,Σ^n) (Y,B_Y) be a crepant birational map
such that 𝔾_m^n∩ Ex(ψ)
has codimension at least two in 𝔾_m^n.
Then, as ϕ only contracts log canonical places of (Y,B_Y), the composition
ϕ∘ψ (^n,Σ^n) (X,B) is a crepant birational map
such that 𝔾_m^n ∩ Ex(ϕ∘ψ) has codimension at least two in 𝔾_m^n.
Next, we construct dlt modifications with certain desirable properties. The nice dlt modifications constructed in the following lemmas will be useful in giving another characterization of cluster type log CY pairs (Lemma <ref>) and in proving Theorem <ref>. First, we need some definitions.
Let (X,B) be a dlt pair
and X^ snc be the largest open subset of X
on which the pair is simple normal crossing.
Let B^ snc be the restriction of B to X^ snc.
* A formally toric blow-up
of (X^ snc,B^ snc)
is a blow-up Y^ snc→ X^ snc that is a formally toric morphism over any closed point of (X^ snc,B^ snc) (see <cit.> for more details).
We say that a formally toric blow-up is a formally toric dlt blow-up
if the log pullback of (X^ snc,B^ snc) to Y is a dlt pair.
* A formally toric dlt blow-up of (X,B) is a dlt modification
(Y,B_Y)→ (X,B) that restricts to a formally toric blow-up
of (X^ snc,B^ snc).
We say that a formally toric blow-up
is a blow-up of a stratum if it is induced by the blow-up of the
reduced scheme structure of a stratum of (X^ snc,B^ snc).
<cit.> shows
that if (X,B) is a dlt pair, then every formally toric dlt blow-up of
(X^ snc,B^ snc) extends to a formally toric blow-up
of (X,B).
Thus, every stratum of (X,B) induces a formally toric blow-up of the stratum, and this formally toric dlt blow-up of (X,B) is unique up to a small birational transformation over the base.
Let (X,B) be a log canonical pair,
and let E⊂ X be an effective Q-Cartier Q-divisor that
has no common components with B.
Then there is a Q-factorial dlt modification ϕ (Y,B_Y)→ (X,B) such that the strict transform ϕ^-1_*E of E on Y
contains no log canonical centers of (Y,B_Y). Furthermore, the pair (Y,B_Y+ ϵϕ_*^-1 E) is dlt for sufficiently small ϵ > 0.
If moreover (X,B) is a dlt pair, then ϕ may be obtained by a sequence of formally toric blow-ups of strata of (X,B).
Consider the pair (X,B+ϵ E), which is not necessarily log canonical.
By linearity of log discrepancies with respect to the boundary divisor,
for ϵ>0 small enough, every non-lc center of (X,B+ϵ E)
is a log canonical center of (X,B).
By <cit.>
there exists a Q-factorial dlt modification
ϕ (Y,B_Y+ϵϕ^-1_* E)→ (X,B+ϵ E) where
B_Y is the sum of
ϕ^-1_* B and the reduced exceptional divisor of ϕ (see <cit.>).
Then (Y, B_Y) is dlt, and
by the previous considerations, we have
ϕ^*(K_X+B)=K_Y+B_Y.
Since (Y,B_Y+ϵϕ^-1_* E)
is log canonical, we know that ϕ^-1_* E contains no log canonical centers of (Y,B_Y).
Now, assume in addition that (X,B) is dlt.
By the previous paragraph,
there exists a dlt modification ϕ_0 (Y_0,B_Y_0)→ (X,B)
such that ϕ_0^-1_*E
contains no log canonical centers of (Y_0,B_Y_0).
We first argue that the dlt modification (Y_0,B_Y_0) induces a formally toric dlt blow-up
(Y_0^ snc,B_Y_0^ snc)→ (X^ snc,B^ snc).
Locally at every closed point, the pair (X^ snc,B^ snc) has local complexity zero <cit.>.
The morphism (Y_0^ snc,B_Y^ snc_0)→ (X^ snc,B^ snc) only contracts
divisors with coefficient one, so the complexity of this morphism is zero at every closed point of X^ snc (see <cit.>).
Hence, <cit.> implies that the dlt modification
(Y_0^ snc,B_Y^ snc_0)→ (X^ snc,B^ snc) is formally toric over the base.
That is, ϕ_0 is a formally toric dlt blow-up
along some ideal sheaf whose support is contained in supp(B^ snc).
Now we will construct Y as be a higher model of Y_0^ snc that is obtained from (X,B) by a sequence of formally toric dlt blow-ups along strata.
We first construct this model over formal neighborhoods in X^ snc.
Here the morphism Y_0^ snc→ X^ snc
corresponds to unimodular fan refinements of unimodular cones
of (X^ snc,B^ snc) <cit.>.
Every refinement of a unimodular cone σ is refined by a sequence of stellar subdivisions of σ <cit.>.
Hence, over formal neighborhoods in X^ snc, there is a formally toric dlt blow-up
(Y^ snc,B_Y^ snc)→ (X^ snc,B^ snc)
that is obtained by successively blowing up strata of (X^ snc, B^ snc)
<cit.>
and such that there is a projective birational
morphism Y^ snc→ Y_0^ snc.
Furthermore, additional blow-ups along strata of (X^ snc,B^ snc) result in a higher model that is still dlt and formally toric over (X^ snc,B^ snc).
Now, consider the locally closed decomposition X^ snc = _i ∈ I Z_i defined by the strata of B. For each i∈ I, the above procedure produces the same sequence of blow-ups over any closed point of the locally closed subset Z_i. Since the index set I is finite,
we may apply the above procedure to each Z_i (performing additional blow-ups along strata if necessary) to get a sequence of blow-ups of strata of (X^ snc,B^ snc), which yields a formally toric dlt blow-up (Y^ snc,B_Y^ snc)→ (X^ snc,B^ snc) that factors through a projective birational
morphism Y^ snc→ Y_0^ snc.
By Remark <ref>, this extends to a formally toric dlt blow-up
(Y,B_Y)→ (X,B).
By construction, the induced birational map
ψ Y Y_0 is a birational contraction
and Ex(ψ) contains no log canonical centers of (Y,B_Y).
Then, the strict transform of E on Y contains no log canonical centers of (Y,B_Y).
We can now give the following equivalent characterization of cluster type log CY pairs.
A log CY pair (X,B)
is of cluster type if and only if
there exists a Q-factorial dlt modification (Y,B_Y)→ (X,B)
with a crepant birational contraction
π (Y,B_Y) (^n,Σ^n).
Furthermore, if E Ex(π)∖ supp(B_Y), we may assume that
(Y,B_Y+ϵ E) is a dlt pair for ϵ > 0 small enough.
First, assume that (X,B) is of cluster type.
Then there exists a crepant birational map
(^n,Σ^n) (X,B)
that only contracts log canonical places.
In particular, there exists a Q-factorial dlt modification (Z,B_Z)→ (X,B)
such that the composition p Z^n is a birational contraction. Let E_Z Ex(p)∖ supp(B_Z).
Applying Lemma <ref> to (Z, B_Z) and E_Z
yields a dlt modification q (Y,B_Y)→ (Z,B_Z) such that (Y,B_Y+ ϵ q^-1_*E_Z) is dlt for ϵ>0 small enough.
Denote the composition by π p ∘ q Y P^n.
Then q^-1_*E_Z= Ex(π)∖ supp(B_Y),
so (Y, B_Y) → (X,B) is the desired dlt modification.
Now assume (X,B) admits a Q-factorial dlt modification ϕ (Y,B_Y)→ (X,B)
with a crepant birational contraction π (Y,B_Y) (^n,Σ^n).
Then (X,B) is of cluster type by Lemma <ref>.
Furthermore, if E = Ex(π)∖ supp(B_Y), then by Lemma <ref> we may replace (Y,B_Y) with a higher -factorial dlt modification
such that (Y,B_Y+ϵ E) is dlt for ϵ>0 small enough.
The next lemma will be used to ensure that certain linear combinations of pairs are terminal on higher dlt modifications.
Let (X,B) be a dlt pair of index one.
Let (X,Δ) be a terminal pair, and assume that the prime components of Δ are Q-Cartier.
Let (Z,B_Z)→ (X,B) be any dlt modification.
Then for any sufficiently small ϵ > 0, there exists a Q-factorial dlt modification (Y,B_Y)→ (Z,B_Z)
such that the log pullback of
(X,(1-ϵ)B+ϵΔ) to Y
is a terminal pair.
Let D be the (reduced) sum of the prime components of Δ
that are not components of B.
By Lemma <ref>, there is a Q-factorial dlt modification ϕ (W,B_W)→ (Z,B_Z)
such that
ϕ^-1_* D
contains no log canonical centers of (W,B_W).
For sufficiently small ϵ > 0,
every divisor contracted by W→ X
is a non-terminal place of (X,(1-ϵ)B+ϵΔ),
so the log pullback (W,Γ_W) of (X,(1-ϵ)B+ϵΔ) to W is a pair; furthermore, the pair (W,Γ_W) is klt.
By Lemma <ref>,
every exceptional non-terminal place of (X,(1-ϵ)B+ϵΔ)) is a log canonical place of (X,B).
In particular, every exceptional non-terminal place of (W,Γ_W) is a log canonical place of (W,B_W),
and the log canonical centers of (W,B_W) are precisely its proper strata.
By construction, the components of Γ_W that are not contained in B_W are precisely the components of ϕ^-1_* D.
So, on a neighborhood of
the generic point of every proper stratum of (W,B_W),
the support of Γ_W is contained
in the support of B_W.
Since (W,B_W) is log smooth along the generic point of each stratum,
we conclude that (W,Γ_W) is log smooth around every non-terminal center.
Therefore, we may obtain a terminal model of (W,Γ_W)
by successively blowing up strata of (W,B_W) (see <cit.>).
Hence, the composition (Y,B_Y)→ (W,B_W)→ (Z,B_Z) is a Q-factorial dlt modification
for which the log pullback of
(X,(1-ϵ)B+ϵΔ) is terminal.
Finally, we will apply the following two technical lemmas to certain fibrations in Section <ref> to show that, after shrinking the base, the outcomes of certain MMPs will have nice fibers.
Let (Y,Γ) be a terminal pair and f (Y,Γ)→ T a log Calabi-Yau fibration.
Let U := U_1 ∩ U_2 ∩ U_3 be the intersection of the following dense open subsets of T:
* the smooth locus U_1 ⊂ T,
* an open set U_2 over which all log fibers (Y_t,Γ_t) are terminal and Γ_t = Γ|_Y_t, and
* the complement U_3 of the boundary divisor and the moduli divisor induced by the canonical bundle formula
on T by (Y,Γ).
Let E be an effective -Cartier Q-divisor on Y_U
that contains no fiber of f.
Then for δ > 0 small enough and
for any sequence Y_U W of steps of the (K_Y_U+Γ_U+δ E)-MMP over U,
the following conditions hold:
* the fibers of Y_U→ U are normal,
* the fibers of W→ U are normal,
* for every t∈ U, the induced birational map Y_t W_t is a contraction, and
* if Γ_W is the pushforward of Γ to W, then the log fibers of (W,Γ_W)→ U are terminal.
Let d be the dimension of U, and let
t∈ U be a closed point.
Then (<ref>) holds by <ref>; furthermore, if H_1,…,H_d are general hyperplanes on U containing t, the pair
(U,H_1+…+H_d;t) is log smooth and has t as a log canonical center.
Consider the pair
(Y,Γ+f^*H_1+…+f^*H_d).
This pair is log canonical over a neighborhood of t by <ref> and the canonical bundle formula (see, e.g., <cit.>).
The pair (Y_t,Γ_t) is terminal by <ref>,
so Y_t is a minimal log canonical center of (<ref>). Further (Y_t,Γ_t) is the pair induced by adjunction to the minimal log canonical center:
indeed, if (Y_t,Γ_t) is the pair obtained by adjunction of (<ref>) to Y_t, then (Y_t,Γ_t) is log CY and Γ_t≥Γ_t, so Γ_t=Γ_t.
In what follows, we will show that there is an open neighborhood of t such that, for sufficiently small δ > 0 depending on t, the conclusion holds over this neighborhood of t. Since U is compact, we can find δ that works for all t∈ U.
Let E_t denote the restriction of E to Y_t.
By assumption, E contains no log canonical centers of (<ref>) over a neighborhood of t.
Therefore, for δ>0 small enough, the pair
(Y,Γ+f^*H_1+…+f^*H_d+δ E)
is also log canonical over a neighborhood of t,
and the pair (Y_t,Γ_t+δ E_t), which is obtained by adjunction
to the minimal log canonical center,
is terminal.
Let ϕ Y_U W be a sequence
of steps of the (K_Y_U+Γ_U+δ E)-MMP over U.
Then, we have a commutative diagram
(Y_U,Γ_U+f^*H_1+…+f^*H_d+δ E)@–>[r]^-ϕ[d]_-f
(W,Γ_W+f'^*H_1+…+f'^*H_d+δ E_W) [ld]^-f'
T
where Γ_W (resp. E_W) is the pushforward
of Γ_U (resp. E) to W.
The pair
(W,Γ_W+f'^*H_1+…+f'^*H_d+δ E_W)
is log canonical over a neighborhood of t,
and W_t is a minimal log canonical center.
By <cit.>, this implies that (<ref>) holds.
It remains to show (<ref>) and (<ref>). First, by adjunction, the induced birational map
ϕ_t (Y_t, Γ_t+δ E_t) (W_t,Γ_W_t+δ E_W_t)
between minimal log canonical centers of (<ref>) and (<ref>)
is (K_Y_t+Γ_t+δ E_t)-negative,
so the negativity lemma applied to a resolution of ϕ_t shows that
a_F_t(W_t,Γ_W_t+δ E_W_t) ≥
a_F_t(Y_t,Γ_t+δ E_t)
for any prime divisor F_t.
We now show (<ref>).
By contradiction, suppose ϕ_t is not a birational contraction, so it extracts some prime divisor F_t. As F_t is exceptional over Y_t, the inequality (<ref>) implies that
F_t appears in Γ_W_t+δ E_W_t
with negative coefficient,
contradicting the fact that (W_t,Γ_W_t+δ E_W_t)
is a pair.
This proves (<ref>).
Finally, the inequality (<ref>) and the terminality of (Y_t,Γ_t+δ E_t) imply that (W_t,Γ_W_t+δ E_t) is terminal and hence that (W_t,Γ_W_t) is terminal, so (<ref>) holds.
Let (Y,B) be a dlt pair and f (Y,B)→ T a log Calabi–Yau fibration.
Let U := U_1 ∩ U_2 ∩ U_3 be the intersection of the following dense open subsets of T:
* the smooth locus U_1 ⊂ T,
* an open set U_2 over which the log fibers of (Y,B)→ T are dlt and B_t = B|_Y_t, and
* the complement U_3 of the support of the boundary divisor and the moduli divisor induced by the canonical bundle formula on T by (Y,B).
Let E be an effective ℚ-Cartier Q-divisor E on Y_U that does not contain any fiber of f
and does not contain any irreducible component of the restriction of
any log canonical center of (Y,B) to a fiber of f.
Then for δ > 0 small enough and
for any sequence
Y_U W of steps of the (K_Y_U+B_U+δ E)-MMP over U,
the normalizations of the log fibers of (W,B_W)→ U are lc.
Let d be the dimension of U, and let t∈ U be a closed point.
As in the proof of Proposition <ref>, we may work on an open neighborhood of t.
Using <ref>,
let (U,H_1+…+H_d;t) be a log smooth pair
with t a log canonical center,
as in the proof of Lemma <ref>.
By the canonical bundle formula
and <ref>, the pair
(Y,B+f^*H_1+…+f^*H_d)
is log canonical over a neighborhood of t,
and Y_t is a log canonical center.
Since (Y,B) and (Y_t, B_t) are dlt, the lc centers of (Y_t, B_t) are irreducible components of
restrictions of lc centers of (Y,B).
So, by assumption, E contains no lc centers of (Y_t,B_t).
Then inversion of adjunction <cit.> and the assumption that Y_t⊄E imply that E contains no
log canonical centers of (Y,B+f^*H_1+…+f^*H_d) near t.
Thus, for sufficiently small δ > 0,
the pair
(Y,B+f^*H_1+…+f^*H_d+δ E)
is log canonical over a neighborhood of t.
Let Y_U ⇢ W be a sequence
of steps of the (K_Y_U+B_U+δ E)-MMP
over U,
and let f' W→ U be the induced fibration.
Let B_W (resp. E_W)
be the pushforward of B (resp. E) to W.
Then, over a neighborhood of t, the pair
(W,B_W+f'^*H_1+…+f'^*H_d+δ E_W)
is log canonical,
and hence so is
(W,B_W+f'^*H_1+…+f'^*H_d).
Since the log fiber (W_t,B_t) is
the pair obtained by adjunction
of (<ref>) to W_t,
we conclude by <cit.> that
its normalization is lc.
§ TORIC LOG CALABI–YAU PAIRS IN FAMILIES
In this section, we prove that in a family of log Calabi–Yau pairs,
the property of being toric is constructible. Later, in Section <ref>, we will use this result to prove Theorem <ref>.
Let f: (𝒳,ℬ) → T be a family of log
Calabi–Yau pairs of index one, and assume that 𝒳_t is of Fano type for every closed point t∈ T. Then, the set
T_ toric ≔ {closed t ∈ T | (𝒳_t,ℬ_t) is a toric log CY pair}
is a constructible subset of T.
First, we prove that for certain nice fibrations,
the existence of a toric log fiber implies that every log fiber is toric. We will use the characterization of toric pairs proven by <cit.>, which uses complexity (see Definition <ref>).
Let X→ T be a projective contraction of Q-factorial normal quasi-projective varieties,
(X,B) a pair, and t_0∈ T a closed point.
Assume the following conditions are satisfied:
* dim X_t_0 = dim X_t for t in T,
* B is reduced and contains no fibers of X → T,
* the log fibers of (X,B)→ T are log canonical pairs,
* restriction to X_t_0 induces a bijection between the prime components of B and those of B|_X_t_0,
* X → T has injective class group restrictions (Definition <ref>), and
* (X_t_0, B_t_0) is a toric log CY pair.
Then all log fibers of (X,B) → T are toric log CY pairs.
By assumption (<ref>), the restriction of K_X+B to X_t_0 is ℚ-linearly trivial.
By (<ref>), we conclude that K_X+B is ℚ-linearly trivial on every closed fiber X_t.
In particular, by (<ref>) and (<ref>), the log fiber (X_t,B_t) is a log CY pair for every t∈ T.
Now write b for the number of prime components B_t_0,1,…,B_t_0,b of B_t_0, and consider the decomposition Σ_t_0 ≔ ∑_i=1^b B_t_0, i of B_t_0. Since (X_t_0, B_t_0) is a toric pair, we have the equality b = dim X_t_0 + ρ(Σ_t_0) (see, e.g., <cit.>).
Now, for each 1≤ i ≤ b, let Γ_i be the prime component of B restricting to B_t_0, i (using (<ref>)). Let t∈ T be a closed point,
and consider the decomposition of B_t given by Σ_t ≔ ∑_i=1^b Γ_i|_X_t = B_t.
Then |Σ_t| = b, dim X_t = dim X_t_0 by (<ref>), and ρ(Σ_t) = ρ(Σ_t_0) by (<ref>), so
c(X_t, B_t; Σ_t) = c(X_t_0, B_t_0; Σ_t_0) = 0.
Hence (X_t, B_t) = (X_t, ⌊ B_t ⌋) is a toric log CY pair by Theorem <ref>.
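As a purely illustrative aside (not part of the original argument), the complexity equality used above can be checked in the simplest example: for the toric pair (ℙ^2, L_0+L_1+L_2) given by the sum of the three coordinate lines one has b = 3, dim ℙ^2 = 2, and ρ(Σ) = 1, so that
c(ℙ^2, L_0+L_1+L_2; Σ) = dim ℙ^2 + ρ(Σ) - b = 2 + 1 - 3 = 0,
in agreement with the toric criterion; dropping one of the lines raises the complexity to 1 and the criterion no longer applies.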
Let (X,B)→ T be a log Calabi–Yau fibration of index one over T, i.e., K_X+B∼_T 0. Assume X→ T is Fano type.
Then there is an open subset U of T such that if the log fiber (X_t,B_t) is a toric CY pair for some closed t∈ U, then every log fiber over
U is a toric log CY pair.
Let T'→ T be a finite Galois cover such that for the pullback (X_T', B_T'), the prime components of B_T' restrict to prime divisors on general fibers of X_T'→ T'. Let (X”, B”)→ (X_T', B_T') be a Q-factorial dlt modification, which exists by <cit.>. We get a commutative diagram
(X”,B”) → (X_T', B_T') → (X,B), together with the fibrations (X,B) → T and (X_T', B_T') → T', the morphism ψ: X” → T', and the finite Galois cover π: T' → T.
Let U'⊂ T' be a nonempty open subset over which (X”,B”) → U' satisfies conditions (<ref>), (<ref>), and (<ref>) of Proposition <ref>. By shrinking U' if necessary, we may further assume that (X”,B”) → (X_T', B_T') induces dlt modifications on log fibers over U'.
Lemma <ref> shows that X”|_U'→ U' is Fano type. We may assume by Lemma <ref>(<ref>) that Proposition <ref>(<ref>) holds over U'.
Let U⊂ T be an open subset whose preimage in T' is contained in U'. We may assume π is étale over U.
Then ψ (X”, B”) → U' satisfies conditions (<ref>)–(<ref>) of Proposition <ref>.
Now assume (X_t, B_t) is toric log CY for some closed t∈ U, and let t' ∈ T' be a preimage of t. Then (X'_t', B'_t') is toric log CY. A dlt modification of a toric log CY pair is also toric log CY by Lemma <ref>. Hence, the log CY pair (X”_t', B”_t') is toric.
All log fibers of ψ over U' are toric log CY pairs by Proposition <ref>, so all fibers of (X_U', B_U') → U' are toric by Lemma <ref>. Hence, all log fibers of (X,B) → T over U are toric.
We proceed by induction on dim T. If dim T = 0 there is nothing to show. Next, assume dim T ≥ 1.
As the geometric generic fiber is an index one log Calabi–Yau pair on a Fano type variety, we may take a dominant finite étale morphism T' → T such that the base change (𝒳_T', ℬ_T') → T' is a log CY fibration and 𝒳_T'→ T' is a Fano type morphism.
Let U' ⊂ T' be the open subset of Theorem <ref>, and let U ⊂ T be its image.
Then T'_toric∩ U' is a constructible subset of T' by Theorem <ref>, so its image T_toric∩ U in T is also constructible. Since T_toric∩ (T∖ U) is constructible by induction, this completes the proof.
§ CLUSTER TYPE LOG CALABI–YAU PAIRS IN FAMILIES
In this section, we prove that in a family of terminal log Calabi–Yau pairs,
the property
of being cluster type is constructible.
In Section <ref>, we will use this to prove Theorem <ref>.
Let f: (𝒳,ℬ)→ T
be a family of log Calabi–Yau pairs of index one,
and assume that 𝒳_t is a ℚ-factorial terminal Fano variety for every closed point t∈ T.
Then, the set
T_ cluster ≔ {closed t∈ T | (𝒳_t,ℬ_t)
is a log CY pair of cluster type}
is a constructible subset of T.
To prove Theorem <ref>, we will use the characterization of cluster type pairs in Lemma <ref>,
which shows that (X,B) is cluster type if and only if there is a Q-factorial dlt modification (Y, B_Y) that admits a crepant birational contraction ϕ: (Y, B_Y) ⇢ (ℙ^n, Σ^n).
Theorem <ref> will follow from Theorem <ref>. The idea of the proof of Theorem <ref> is as follows.
Assume the log fiber over t_0 is cluster type.
Then, after taking an appropriate étale cover of T
and suitable higher dlt modifications of (Y_t_0, B_Y,t_0), we may spread out ϕ_t_0 to a family of crepant birational contractions ϕ_t: (Y_t, B_t) ⇢ (W_t, B_W, t) over an open neighborhood of a preimage of t_0. Using Proposition <ref>, we show that the log CY pairs (W_t, B_W, t) are toric. Hence, the log fibers of (𝒳,ℬ) are cluster type over an open neighborhood of t_0.
In order to spread out the birational contraction ϕ_t_0, we will need to extend certain divisors over a family.
An important ingredient for this is the following lemma
on invariance of sections
for suitable dlt modifications of Fano type morphisms.
This lemma extends <cit.>.
Let X→ T be a Fano type morphism of Q-factorial varieties.
Let (X,B) be a
dlt pair that is log CY of index one over T.
Assume that for every closed point t∈ T, each proper stratum of (X,B) restricts to a proper stratum of (X_t,B_t).
Assume there is a boundary Δ for which (X,Δ) is terminal
and log CY over T.
Then, there exists a non-empty affine open subset U⊆ T
satisfying the following condition:
for any dlt modification π (Y,B_Y)→ (X,B),
any Cartier divisor D on Y, and every closed point t∈ U,
we have a surjective homomorphism
H^0(Y,𝒪_Y(mD)) ↠ H^0(Y_t,𝒪_Y_t(mD|_Y_t)),
for every m sufficiently divisible (independent of t).
Write Δ ∼_T,ℚ Δ'+A
where A is ample over T
and the pair (X,Δ') is terminal.
By shrinking T, we may assume that no fibers of f are contained in Δ, Δ', or A.
Let ψ X'→ X be a log resolution
of (X, supp(Δ)+ supp(B)). By the assumption on the strata of (X,B), after shrinking T, we may assume that T = 0 and that the following hold for every closed point t∈ T:
* X'_t→ X_t
is a log resolution of
(X_t, supp(Δ|_X_t)+ supp(B|_X_t)), and
* every dlt modification
(Y,B_Y)→ (X,B) restricts
to a dlt modification
(Y_t,B_Y_t)→ (X_t,B_t).
Now let π (Y, B_Y) → (X,B) be a dlt modification,
and let D be a Cartier divisor on Y. We may write D ∼_ℚ π^*(D_X)+E_Y
where D_X is a ℚ-Cartier divisor on X
and E_Y is π-exceptional.
The log canonical places of (X,B) are horizontal over T by (<ref>), so
after replacing X' by a further blow-up along log canonical
places of (X,B), we obtain a commutative diagram as follows, where
ϕ is a birational contraction and ψ still satisfies properties (<ref>) and (<ref>) (without further shrinking T):
π: (Y,B_Y) → (X,B), ψ: X' → X, and a birational contraction ϕ: X' ⇢ Y with π∘ϕ = ψ, all compatible with the morphisms to T.
Fix a rational number ϵ>0 small enough so that all π-exceptional divisors are non-canonical places of (X,Γ_ϵ ≔ (1-ϵ)B+ϵΔ').
Write
ψ^*(K_X+Γ_ϵ) + F_ϵ = K_X'+D'_ϵ
where D'_ϵ and F_ϵ are effective snc divisors
without common components, and F_ϵ is ψ-exceptional.
Since every π-exceptional divisor
is a non-canonical place of (X,Γ_ϵ),
we may assume that F_ϵ is ϕ-exceptional.
Note that (X', D'_ϵ) is a klt pair by construction.
By Lemma <ref>,
every exceptional non-terminal place of (X,Γ_ϵ) is a log canonical place of (X,B).
Therefore, every non-terminal center of (X,Γ_ϵ) is a stratum of
(X',⌈ D'_ϵ⌉).
By the assumption on the strata of (X,B), every proper stratum of (X',⌈ D'_ϵ⌉) is horizontal over T and restricts to a proper stratum on each closed fiber.
Therefore, by successively performing blow-ups
of the strata of (X',⌈ D'_ϵ⌉), we may assume that (X',D'_ϵ) is a terminal pair (see, e.g., <cit.>).
Let δ>0 be a rational number small enough so that
A_ϵ,δ ≔ ϵ A + δ D_X
is ample
and (X', D'_ϵ,δ ≔ D'_ϵ + δϕ^* E_Y) is terminal.
By <cit.>, for every sufficiently divisible m and for every closed point t∈ U, the homomorphism
H^0(X',𝒪_X'(m(K_X'+D'_ϵ,δ
+ψ^*A_ϵ,δ
)))
→
H^0(X'_t,
𝒪_X'_t(m(K_X'+D'_ϵ,δ
+ψ^* A_ϵ,δ)|_X'_t))
is surjective.
Since ϕ is a birational contraction, we have π^* = ϕ_* ∘ψ^*, so
ϕ_*(m(K_X'+D'_ϵ,δ+ψ^* A_ϵ,δ)) ∼_ℚ δ m D.
Now let m be divisible enough, and let
Γ_t ∈ H^0(Y_t,𝒪_Y_t(δ m D|_Y_t))
= H^0(Y_t, 𝒪_Y_t(m(π^*(K_X+Γ_ϵ + A_ϵ,δ) + δ E_Y)|_Y_t)) .
Pulling back Γ_t via ϕ_t: X'_t ⇢ Y_t, we obtain
ϕ_t^*Γ_t ∈ H^0(X'_t, 𝒪_X'_t( m (K_X'+D'_ϵ,δ+ψ^* A_ϵ,δ-F_ϵ)|_X'_t)),
so we get
ϕ_t^*Γ_t +mF_ϵ|_X'_t∈
H^0(X'_t, 𝒪_X'_t( m (K_X'+D'_ϵ,δ+ψ^* A_ϵ,δ)|_X'_t)).
By the surjectivity of (<ref>), the section
ϕ_t^*Γ_t+mF_ϵ|_X'_t is the restriction of a section
Γ_X'∈ H^0(X',𝒪_X'(m(K_X'+D'_ϵ,δ+ψ^* A_ϵ,δ))).
By (<ref>), we conclude
ϕ_*(Γ_X')∈ H^0(Y,𝒪_Y(δ mD))
restricts to Γ_t ∈ H^0(Y_t,𝒪_Y_t(δ mD|_Y_t)).
Let f X→ T be a fibration.
We say that f satisfies the extension
of prime divisors property if for every closed fiber X_t
and every effective prime divisor P_t⊂ X_t,
there is an effective divisor P⊂ X
for which supp(P)|_X_t= supp(P_t).
Next, we show that if a log CY fibration satisfies certain nice properties,
then after taking an étale cover of the base,
we can find a nice dlt modification with the property that any higher dlt modification has the extension of prime divisors property.
The proof of the following proposition uses a result of Totaro <cit.> on deformation invariance of the divisor class group for certain families of Fano varieties, after an étale cover of the base.
Let X be a terminal variety,
f: X→ T a Fano fibration,
and (X,B) an lc pair that is log CY of index one over T.
Assume that every fiber of f is ℚ-factorial.
After replacing T by a dense open subset,
there exists a commutative diagram:
(Z,B_Z) → (X_U,B_U) → (X,B), with f: (X,B) → T, f_U: (X_U,B_U) → U, f_Z: (Z,B_Z) → U, the base-change morphism ϕ: X_U → X, and U → T,
satisfying the following conditions:
* U→ T is a dominant finite étale morphism, the pair (X_U,B_U) is obtained by base change, and B_U contains no fibers of f_U,
* (Z,B_Z)→ (X_U,B_U) is a ℚ-factorial dlt modification that induces ℚ-factorial dlt modifications on log fibers over closed points, and
* the fibration f_Z is of Fano type,
* there is a boundary Δ_Z on Z such that
(Z,Δ_Z) is terminal and log CY over U, Δ_Z contains no fibers of f_Z, and all log fibers of (Z,Δ_Z)→ U are terminal,
* every stratum of (Z,B_Z) restricts to a stratum on each closed log fiber of f_Z,
* every ℚ-factorial dlt modification ϕ_Y: (Y,B_Y)→ (Z,B_Z)
induces a ℚ-factorial
dlt modification on every closed fiber of (Z,B_Z)→ U, and
* for any ℚ-factorial dlt modification ϕ_Y: (Y,B_Y)→ (Z,B_Z), the fibration f_Z∘ϕ_Y has isomorphic class group restrictions and satisfies the extension of prime divisors property.
Since X is terminal and Fano over T,
after possibly shrinking T we can find a boundary Γ such that (X,Γ) is terminal and log Calabi–Yau over T.
After further shrinking T, we may assume that neither Γ nor B contains any fiber of X→ T, and that the fibers of f are Fano varieties with terminal singularities.
We first construct U→ T and (X_U,B_U) as in (<ref>).
Since each fiber of X→ T is ℚ-factorial and terminal, it has rational singularities <cit.>
and is smooth in codimension 2.
Moreover, by Kawamata–Viehweg vanishing, every fiber has acyclic structure sheaf,
so we may apply <cit.> to find a dominant finite étale morphism T'→ T from a smooth variety
such that the base change f': X' ≔ X_T'→ T' has the property
that Cl(X'/T') maps surjectively to the class group of each closed fiber.
Let (X',B') (resp. (X',Γ')) be
the log pull-back of (X,B) (resp. (X,Γ)) to X'.
Note that f' is still a Fano type fibration,
(X',B') is log Calabi–Yau of index one over T',
and (X',Γ') is terminal and log Calabi–Yau over T'.
By <cit.> applied to any dlt modification of (X',B'), we can find a dominant finite étale morphism
U→ T' from a smooth variety
such that if f_U: (X_U,B_U)→ U
is the base change,
then the dual complex of (X_U,B_U)
restricts to the dual complex of the log fibers.
In particular, each prime component of B_U
remains prime after restriction to any closed fiber.
We let (X_U,Γ_U) be the log pullback of (X',Γ') to X_U.
The morphism f_U is Fano type,
(X_U,B_U) is log CY of index one over U,
and (X_U,Γ_U) is terminal and log CY over U.
Further, Cl(X_U/U) maps surjectively to the class group of each closed fiber of f_U.
Now we turn to constructing the dlt modification
(Z,B_Z)→ (X_U,B_U) in (<ref>).
By Lemma <ref>,
for sufficiently small ϵ>0
we can find a -factorial dlt modification
(Z,B_Z)→ (X_U,B_U)
such that
the log pullback (Z,Δ_Z)
of (X_U,(1-ϵ)B_U+ϵΓ_U)
is terminal.
We argue that the closed fibers of Z→ U remain ℚ-factorial.
For a closed point t∈ U, the prime exceptional divisors of Z_t→ X'_t
are restrictions of the prime exceptional divisors of Z→ X'.
Indeed, this holds as the prime exceptional divisors of Z→ X' are components of B_Z and hence correspond to vertices of the dual complex 𝒟(Z,B_Z).
The prime exceptional divisors
of Z→ X' are ℚ-Cartier.
Thus, the exceptional prime divisors
of Z_t→ X'_t are ℚ-Cartier, and so Z_t is ℚ-factorial.
We conclude that the ℚ-factorial
dlt modification (Z,B_Z)→ (X_U,B_U) induces ℚ-factorial dlt modifications on log fibers over closed points. This proves (<ref>).
Moreover, as the exceptional prime divisors over closed fibers are restrictions of the exceptional prime divisors of Z → X_U, the composition f_Z Z → U has surjective class group restrictions.
We have two pairs on the Q-factorial variety Z, namely:
* (Z,B_Z), which is dlt and log CY of index one over U, and
* (Z,Δ_Z), which is terminal and log CY over U.
The morphism f_Z is Fano type by Lemma <ref>.
After possibly shrinking U again, we may assume that U is smooth and affine,
all the log fibers of (Z, Δ_Z)→ U are terminal,
and f_Z has isomorphic class group restrictions (by Lemma <ref>(<ref>)).
The following conditions are satisfied:
* the pair (Z, B_Z) is ℚ-factorial,
dlt, and log CY of index one over U,
* the morphism f_Z is of Fano type and has isomorphic class group restrictions,
* the pair (Z,Δ_Z) is terminal
and log CY over U,
* each stratum of (Z,B_Z) restricts to a stratum of the log fiber over each closed point,
showing that the assumptions of Lemma <ref> hold, so after shrinking U again, we furthermore have
* for every t∈ U, every ℚ-factorial dlt modification ϕ_Y: (Y,B_Y) → (Z,B_Z),
and every divisor D on Y, we have a surjective homomorphism
H^0(Y,𝒪_Y(mD)) ↠ H^0(Y_t,𝒪_Y_t(mD|_Y_t)),
for every m sufficiently divisible.
Now we verify the conditions in the proposition statement. We have already seen that (<ref>) and (<ref>) hold.
Conditions <ref> and <ref> imply (<ref>) and (<ref>), respectively.
Conditions <ref> and <ref>
ensure that (<ref>) and (<ref>) hold.
Thus, it remains to show (<ref>).
First, we argue that f_Z∘ϕ_Y has isomorphic class group restrictions.
For surjectivity, the class group of every fiber Y_t is generated by the pullbacks of divisors from Z_t
and the prime exceptional divisors of Y_t→ Z_t.
By <ref> and <ref>,
the prime exceptional divisors of Y_t→ Z_t are restrictions of the prime exceptional divisors of Y→ Z.
Since Y is Q-factorial, this shows Y_t is also Q-factorial.
For injectivity,
if D_t is a ℚ-Cartier divisor on Y that is ℚ-linearly trivial on Y_t,
then ϕ_Y,t_*D_t is ℚ-linearly trivial on Z_t.
The pushforward ϕ_Y_*D is ℚ-linearly trivial over U by <ref>,
so D ∼_U,ℚ E for some ϕ_Y-exceptional divisor E.
Since E_t is ℚ-linearly trivial on Y_t and exceptional over Z_t, the negativity lemma implies that E_t is the trivial divisor.
Hence, since E is horizontal over the base by <ref>, we conclude that E is trivial, so D is ℚ-linearly trivial over U.
It remains to show that f_Z∘ϕ_Y satisfies the extension of prime divisors property.
Let E_t⊂ Y_t be an effective prime divisor.
Since f_Z∘ϕ_Y has isomorphic class group restrictions,
there is a divisor D on Y for which D|_Y_t∼_ Q E_t,
or, in other words, 0≤ mE_t ∈ H^0(Y_t,𝒪_Y_t(mD|_Y_t)) for some m.
By condition <ref>, after replacing m by a sufficiently divisible multiple, there exists
0≤ F ∈ H^0(Y,𝒪_Y(mD))
such that F|_Y_t=mE_t.
In particular, F is an effective divisor
for which supp(F)|_Y_t= supp(E_t).
Let X be a terminal variety,
f: X→ T a Fano fibration,
and (X,B) an lc pair
that is log CY of index one over T.
Assume that every fiber of f is ℚ-factorial.
There is a non-empty open subset V⊂ T such that if (X_t,B_t) is of cluster type for some t∈ V, then there exists an open neighborhood t∈ V_0 ⊆ V over which every fiber is of cluster type.
Consider the commutative diagram
provided by Proposition <ref>:
(Z,B_Z) → (X_U,B_U) → (X,B), with f: (X,B) → T, f_U: (X_U,B_U) → U, f_Z: (Z,B_Z) → U, the base-change morphism ϕ: X_U → X, and U → T.
By Proposition <ref>(<ref>) there exists a boundary Δ_Z such that (Z,Δ_Z) is terminal and log Calabi–Yau over U and Δ_Z contains no fibers of f_Z.
Furthermore, (Z,B_Z)→ (X_U,B_U) induces -factorial dlt modifications on closed fibers by Proposition <ref>(<ref>).
After possibly shrinking U, we may assume that the following conditions are satisfied:
* U is a smooth affine variety,
* f_Z has equidimensional fibers,
* for every ϵ∈ [0,1], the boundary divisor and the moduli divisor induced by the canonical bundle formula by (Z,(1-ϵ)B_Z+ϵΔ_Z) on U are trivial, and
* no fiber of Z → U is contained in B_Z or Δ_Z, every log fiber of (Z,B_Z)→ U is dlt,
and every log fiber of (Z,Δ_Z)→ U is terminal.
Let V ⊂ T be an open subset contained in the image of U;
we may further assume V is affine.
We will show that V is the desired open subset in the proposition statement.
Thus, assume that (X_t_0,B_t_0) is of cluster type
for some t_0∈ V,
and let s_0∈ U be a pre-image of t_0.
Then the dlt log CY pair (Z_s_0,B_Z,s_0)
is of cluster type by Lemma <ref> and Proposition <ref>.
By Lemma <ref>,
we have a commutative diagram
(Y_s_0,B_Y,s_0) → (Z_s_0,B_Z,s_0) ⇢ (ℙ^d,Σ^d), commuting with the diagonal dashed map π_s_0: (Y_s_0,B_Y,s_0) ⇢ (ℙ^d,Σ^d),
where ϕ_Y,s_0: (Y_s_0,B_Y,s_0)→ (Z_s_0,B_Z,s_0)
is a Q-factorial dlt modification
and
π_s_0: (Y_s_0,B_Y,s_0) ⇢ (ℙ^d,Σ^d)
is a crepant birational contraction.
Furthermore, if E_s_0 ≔ Ex(π_s_0)∖ B_Y,s_0,
then (Y_s_0,B_Y,s_0+ϵ E_s_0)
is dlt for ϵ>0 small enough.
By Lemma <ref>,
we may pass to a higher dlt modification to assume that ϕ_Y,s_0 is
obtained by
blow-ups of strata of (Z_s_0,B_Z,s_0).
The strata of (Z_s_0,B_Z,s_0)
are precisely the restrictions of the strata
of (Z,B_Z) to the log fiber over s_0
by condition (<ref>).
Hence, there exists a Q-factorial dlt modification (Y,B_Y)→ (Z,B_Z),
obtained by a sequence of
blow-ups of strata of (Z,B_Z),
such that the log fiber over s_0
of (Y,B_Y)→ U is precisely
(Y_s_0,B_Y,s_0).
We will show that, over a neighborhood U_0 of s_0, the log fibers of (Y, B_Y) are cluster type CY pairs. To do this, we will construct a birational model Y ⇢ W on which we may apply Proposition <ref>.
We show that Y ⇢ W induces
fiberwise birational contractions to toric log CY pairs. We construct W as follows.
By Proposition <ref>(<ref>), there exists an effective divisor E_Y on Y such that supp(E_Y|_Y_s_0)= supp(E_s_0). Note that E_Y has no common components with B_Y.
By Lemma <ref>, after possibly replacing (Y,B_Y) with a higher dlt modification,
we may assume that (Y,B_Y+δ E_Y)
is dlt for δ>0 small enough.
In particular, over an open neighborhood s_0 ∈ U_0 ⊂ U, the effective divisor E_Y contains no fiber of Y_U_0→ U_0 and no intersection of a log canonical center of (Y,B_Y) with a fiber of Y_U_0→ U_0.
By Lemma <ref>, Lemma <ref>, and Proposition <ref>(<ref>), after possibly replacing (Y,B_Y)
with a higher -factorial dlt modification, we may assume that for ϵ > 0 small enough, the log pull-back
(Y,Γ_Y) of (Z,(1-ϵ)B_Z+ϵΔ_Z) to Y
is terminal and the log fibers of (Y,Γ_Y) → U are terminal.
Then, for δ>0 small enough, the pair
(Y,Γ_Y+δ E_Y) is terminal
and its restriction to closed fibers of Y_U_0→ U_0 are terminal.
By Lemma <ref>, the morphism f_Z∘ϕ_Y is Fano type.
Hence, we may run a (K_Y+Γ_Y+δ E_Y)-MMP over U_0
that terminates with a good minimal model W→ U_0.
Let B_W (resp. E_W) be the pushforward of B_Y, U_0 (resp. E_Y, U_0) to W.
Next, we will show that (W, B_W) satisfies the hypotheses of Proposition <ref> over some open neighborhood of s_0.
First, W → U_0 has injective class group restrictions by Proposition <ref>(<ref>) and Lemma <ref>.
Next, we apply Lemma <ref> to (Y,Γ_Y)
and Lemma <ref> to (Y,B_Y) over U_0.
The assumptions on Lemma <ref> hold by <ref>, the choice of ϵ above, and <ref>.
The assumptions on Lemma <ref> hold by <ref>, Proposition <ref>(<ref>), and <ref>.
By Lemma <ref>(<ref>) and (<ref>), we conclude that W_s is normal for every closed s∈ U,
and the corresponding birational maps Y_s ⇢ W_s
are birational contractions. In particular Y_U_0 ⇢ W is a birational contraction; furthermore, by <ref>, W → U_0 has equidimensional fibers.
By Lemma <ref>, the pair (W_s,B_W, s) is log canonical for every s∈ U_0.
Next, since Y_U_0 ⇢ W is a birational contraction and contracts precisely E_Y, U_0, the negativity lemma implies that (W, B_W) is a log CY pair and (Y_U_0, B_Y, U_0) ⇢ (W, B_W) is crepant birational. Furthermore, over each closed point s∈ U_0, the same argument and conclusions hold for the log fibers.
Over the point s_0,
since E_s_0 is π_s_0-exceptional,
the composition (W_s_0,B_W,s_0) ⇢ (Y_s_0,B_Y,s_0) ⇢ (ℙ^d,Σ^d) is a crepant birational contraction that only contracts log canonical places of (ℙ^d,Σ^d).
So (W_s_0,B_W,s_0) is a toric log CY pair by Lemma <ref>.
Thus, we have a Q-factorial pair (W,B_W) that is log CY
of index one and satisfies the following conditions:
* dim W_s = dim W_s_0 for every s∈ U_0,
* the divisor B_W is reduced and contains no fibers of W→ U_0,
* the log fibers of (W,B_W)→ U_0 are log canonical pairs,
* restriction to W_s_0 induces a bijection between the prime components of B_W and those of B_W,s_0,
* the fibration W → U_0 has injective class group restrictions, and
* (W_s_0,B_W,s_0) is a toric log CY pair.
By Proposition <ref>, all fibers of (W,B_W)→ U_0 are toric log CY pairs.
Let t_0 ∈ V_0 ⊂ V be an open subset whose preimage in U is contained in U_0. We will show that the log fiber (X_t, B_t) is a cluster type log CY pair for any closed point t∈ V_0.
If s∈ U_0 is a preimage of t,
then we have a Q-factorial dlt modification
(Y_s,B_Y,s)→ (Z_s, B_Z,s) → (X_s, B_s)
by Proposition <ref>(<ref>) and (<ref>).
By construction of U_0, this admits a crepant birational contraction (Y_s,B_Y,s) ⇢ (W_s,B_W,s) to a toric log CY pair. Thus, Lemma <ref> shows that (X_s, B_s) ≅ (X_t, B_t) is of cluster type.
We proceed by Noetherian induction.
If dim T=0, then there is nothing to prove, so assume dim T≥ 1.
Take a dominant finite étale morphism T' → T such that the base change (𝒳_T', ℬ_T') is a log CY pair of index one over T'.
Let V'⊆ T' be the open subset
provided by Theorem <ref>,
and let V ⊆ T be its image.
Then T'_ cluster∩ V'
is a constructible subset by Theorem <ref>, and Noetherian induction,
so its image T_ cluster∩ V in T is also constructible.
By Noetherian induction, we conclude that T_ cluster is a constructible subset of T.
The terminal and Q-factorial assumptions on the fibers are necessary for our proof of Theorem <ref>, in particular for our method of spreading out the crepant birational contraction from (Y_s_0, B_Y, s_0) to a toric log CY pair in the proof of Theorem <ref>.
We use the terminality assumption to apply Lemma <ref>. We use the terminality and Q-factoriality assumptions to apply <cit.> in the proof of Proposition <ref>.
§ PROOFS OF MAIN THEOREMS
Now we turn to proving the two main theorems of this article, using Theorems <ref> and <ref>. We will need the following lemmas.
Let (𝒳,ℬ)→ T be a family of log CY pairs.
Then there is a locally closed stratification of T such that the log fibers over each stratum have the same index.
By Noetherian induction, it suffices to show that there exists a non-empty open set of T over which all log fibers have the same index.
As the geometric generic log fiber is log CY, we may find a dominant finite étale morphism
T'→ T such that
Pic(T')=0 and
the base change
(𝒳_T',ℬ_T') → T' is a log CY fibration.
Thus, over a non-empty open subset U' ⊂ T', we may find the smallest positive integer m for which m(K_𝒳_T'+ℬ_T')|_𝒳_U' = m(K_𝒳_U'+ℬ_U')∼ 0.
In particular, the index of each log fiber of (𝒳_U',ℬ_U')→U' divides m.
Let 𝒴_U'→𝒳_U' be the index one cover of K_𝒳_U'+ℬ_U'.
By construction, the log fibers of (𝒳_U',ℬ_U')→U' with index m are precisely those over which the fiber of 𝒴_U'→U' is connected.
Hence, there is an open subset of U' over which the index of every log fiber of (𝒳_U',ℬ_U')→U' is exactly m.
Let (𝒳,ℬ)→ T be a family of pairs.
Then the subset of T parametrizing log fibers with log canonical singularities is constructible.
Let T'→ T be a dominant finite étale morphism such that the base change (𝒳_T', ℬ_T') admits a log resolution 𝒳' →𝒳_T' over T' and such that ℬ_T' does not contain any fibers of 𝒳_T'→ T'.
By Noetherian induction, it suffices to show that the condition holds over some non-empty open subset of the base.
There is a non-empty open subset U⊂ T' over which all the exceptional divisors of 𝒳' are horizontal over U.
By possibly shrinking U, we may assume that 𝒳'→𝒳_T' induces a log resolution on log fibers of (𝒳_T',ℬ_T')→ T over closed points of U.
Therefore, over U, either all or none of the log fibers of (𝒳,ℬ)→ T are log canonical.
This finishes the proof.
By Noetherian induction,
it suffices to show that T_ toric∩ U is constructible for some open dense affine subset U⊆ T.
By <cit.>, there is a dense open subset U⊂ T such that for m divisible enough, the map
H^0(𝒳,-mK_𝒳)→
H^0(𝒳_t,-mK_𝒳_t)
is surjective (and nonzero) for every closed point t∈ U.
Fix such an m,
let
ℬ⊂𝒳_U× |-m K_𝒳_U|
be the universal divisor, and consider the family π (𝒳_U × |-m K_𝒳_U|, 1/mℬ) → U × |-m K_𝒳_U|.
The fiber of π over (t,p) is (𝒳_t, 1/m B_p|_𝒳_t), where ℬ_p ∈ |-m K_𝒳| is the effective divisor corresponding to p.
Let W⊆ U × |-m K_𝒳_U| be the open subset consisting of points (t,p) for which B_p does not contain the fiber 𝒳_t, and write (𝒴_W,ℬ_W) → W for the restriction of the family π to W.
Then (𝒴_W,ℬ_W) → W is a family of pairs.
Let W_ toric⊂ W be the locus parametrizing the log fibers that are toric log CY pairs.
Since a variety X is toric if and only if it admits a boundary B for which (X,B) is a toric log CY pair,
the intersection T_ toric∩ U is equal to the image of W_ toric under the projection W ⊂ U × |-mK_𝒳_U| U. Thus, it suffices to show that W_ toric⊂ W is a constructible subset.
By Lemma <ref>, W_ toric is contained in the set W_1 parametrizing log canonical log fibers of index one, and W_1 ⊂ W is constructible
by Lemmas <ref> and <ref>.
Over the normalization of each irreducible component of W_1, the pair
(𝒴_W,ℬ_W) induces a family of log CY pairs of index one for which every closed fiber 𝒴_t is Fano.
By Theorem <ref>, we conclude that the subset
W_ toric⊆ W_1 ⊆ W
parametrizing toric log CY fibers is constructible,
as it is a constructible subset of a constructible subset. This completes the proof.
The proof is identical to the proof of Theorem <ref>, using Theorem <ref> instead of Theorem <ref>.
The n-dimensional smooth Fano varieties form a bounded family by <cit.>; that is, there exist finitely many surjective projective morphisms g_i𝒵_i → S_i of quasi-projective varieties such that any n-dimensional smooth Fano variety is isomorphic to a closed fiber of g_i for some i (see, e.g., <cit.>).
Furthermore, by standard arguments, one may replace each S_i by a constructible subset to assume every closed fiber is an n-dimensional smooth Fano variety, i.e., that the g_i are parametrizing families for n-dimensional smooth Fano varieties. Then the corollary is immediate from Theorem <ref>.
§ EXAMPLES AND QUESTIONS
In this section, we collect some examples
of the behavior of toricity and cluster type
in families of Fano varieties.
We propose some questions for further research.
The first two examples show that the property of being a toric variety is neither open nor closed.
Many smooth Fano varieties admit singular toric degenerations.
For instance, the Mukai–Umemura threefold V_22 admits a degeneration to a Gorenstein terminal toric Fano 3-fold (see, e.g., <cit.>). In this degeneration, all the fibers except the central fiber are non-toric.
[<cit.>]
The projective plane ℙ^2 admits non-toric, quasismooth degenerations.
Namely, let (a,b,c) and (a,b,d) be two Markov triples for which
d=3ab-c, the so-called adjacent Markov triples.
Consider the weighted hypersurface
V(x_1x_2+x_3^c+x_4^d)⊂ ℙ(a^2,b^2,d,c) .
By <cit.>, this
is a non-toric, quasismooth, rational, Fano 𝔾_m-surface of
Picard rank one,
which is a degeneration of ℙ^2.
Furthermore, by <cit.>, all the non-toric,
normal, rational projective
𝔾_m-surfaces which are degenerations of ℙ^2 arise in this way.
Hence, the property of being a toric variety is not closed in families of Fano varieties.
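As an illustrative aside (not part of the original text), the adjacency relation d = 3ab - c used in the example above can be checked computationally; the short Python sketch below walks the Markov tree from (1,1,1) and verifies that each mutation again satisfies the Markov equation.
# Illustrative sketch: enumerate Markov triples (a, b, c) with
# a^2 + b^2 + c^2 = 3abc and, for each, the adjacent triple (a, b, d), d = 3ab - c.

def markov_triples(max_entry=200):
    seen, stack = set(), [(1, 1, 1)]
    while stack:
        triple = tuple(sorted(stack.pop()))
        if triple in seen or max(triple) > max_entry:
            continue
        seen.add(triple)
        a, b, c = triple
        # The three mutations, each replacing one entry x by 3yz - x.
        stack += [(3*b*c - a, b, c), (a, 3*a*c - b, c), (a, b, 3*a*b - c)]
    return sorted(seen)

for a, b, c in markov_triples():
    assert a*a + b*b + c*c == 3*a*b*c          # Markov equation
    d = 3*a*b - c                              # adjacent triple (a, b, d)
    assert a*a + b*b + d*d == 3*a*b*d          # adjacency preserves the equation
print(len(markov_triples()), "Markov triples with entries <= 200")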
The following example shows that the property of being cluster type
is not open in families of Fano varieties.
We give a family of canonical Fano threefolds such that the special fiber is of cluster type but not toric, and nearby fibers are irrational and hence not of cluster type.
Let X ⊂ ℙ^4 be the singular cubic threefold defined by
f(x_0,…,x_4) ≔ (x_0 + x_1 + x_2 + x_3 + x_4)^3 - x_0^3 - x_1^3 - x_2^3 - x_3^3 - x_4^3.
Let g(x_0,…,x_4) be the equation of a general cubic, and
consider the family of Fano threefolds
𝒳 ≔ { ([x_0:…:x_4],t) | (1-t)f(x_0,…,x_4)+tg(x_0,…,x_4)=0}⊂ ℙ^4 ×𝔸^1
over 𝔸^1.
The general fiber of 𝒳→𝔸^1 is a smooth cubic threefold, so it is irrational.
Now we show that the special fiber 𝒳_0 = X is of cluster type but not toric. Since X is isomorphic to the Segre cubic
{ [x_0:…:x_5] | x_0+…+x_5 = x_0^3+…+x_5^3=0}⊂ℙ^5,
it has 10 nodal singularities and
its automorphism group is S_6; in particular, X is not a toric variety.
It remains to construct a boundary B on X such that (X,B) is a log CY pair of cluster type.
There is a small resolution π: Y→ X
which is isomorphic to the blow-up ϕ: Y → ℙ^3
at 5 general points
{p,q,r,s,t}⊂ ℙ^3 <cit.>.
Let H_1 (resp. H_2) be the hyperplane in ℙ^3
spanned by {p,q,r} (resp. {p,s,t}).
Since {p,q,r,s,t} are in general position, the line ℓ_1 ≔ H_1 ∩ H_2 only contains one point of the set {p,q,r,s,t}, namely p.
Next, let u_1 and u_2 be two general points in ℓ_1,
and let H_3 (resp. H_4) be the hyperplane in ℙ^3 spanned by {u_1,q,s}
(resp. {u_2,r,t}).
By construction, the pair
(ℙ^3,H_1+H_2+H_3+H_4) is log Calabi–Yau,
and each point of the set {p,q,r,s,t} is contained in precisely two of the hyperplanes
{H_1,H_2,H_3,H_4}.
In particular, if we write
ϕ^*(K_ℙ^3+H_1+H_2+H_3+H_4)=K_Y+B_Y,
then the divisor B_Y is effective.
Then (X,B ≔ π_* B_Y) is a log CY pair,
and the exceptional set of
the crepant birational map
π∘ϕ^-1: (ℙ^3,H_1+H_2+H_3+H_4) ⇢ (X,B)
is precisely the 10 lines passing through pairs of points in the set {p,q,r,s,t}.
In particular, the intersection 𝔾_m^3∩ Ex(π∘ϕ^-1) contains no divisors,
so the pair (X,B) is of cluster type.
This shows that the Fano variety X is of cluster type.
The following example shows that the property of being cluster type is not closed in families of Gorenstein canonical Fano surfaces.
Let X_0 be the Gorenstein del Pezzo surface of Picard rank one with a single D_5 singularity.
By <cit.>, we know that X_0 admits a ℚ-Gorenstein smoothing 𝒳→𝔸^1.
Since the anticanonical volume of X_0 is 4 (see, e.g., <cit.>),
the general fiber of 𝒳→𝔸^1 is a smooth del Pezzo surface of degree 5.
Every smooth del Pezzo surface of degree 5
admits a 1-complement of coregularity zero <cit.>
and so it is of cluster type by <cit.>.
On the other hand, the central fiber 𝒳_0 is not of cluster type,
as cluster type surfaces only admit A-type singularities (see <cit.>).
The following example shows that the property of being a (possibly non-normal) toric variety is not locally closed in families of slc varieties.
Consider the trivial family π_2: 𝒴 ≔ ℙ^2×𝔸^1_t→𝔸^1_t.
Now, let 𝒟 be a divisor on 𝒴 for which 𝒟_0 is the reduced sum of the coordinate hyperplanes
and 𝒟_t is a smooth cubic curve for every t≠ 0.
Let 𝒳 be the deformation of 𝒴 to the normal cone of 𝒟.
For each t∈𝔸^1, we let ψ_t: 𝒳_t →𝔸^1_s
be the induced deformation of 𝒴_t to the normal cone over 𝒟_t.
Then, the threefold 𝒳 admits a morphism
ψ: 𝒳→𝔸^2 whose fiber over (s,t) is ψ_t^-1(s).
By construction, all the fibers of ψ over 𝔾_m×𝔸^1_t are isomorphic to ℙ^2 and hence toric varieties.
The fibers over {0}×𝔾_m are cones over smooth cubic curves and so they are not toric.
The fiber over (0,0) is the non-normal cone over the reduced sum of the coordinate hyperplanes of ℙ^2, which is a toric variety.
We conclude that the subset of 𝔸^2
parametrizing toric fibers of ψ: 𝒳→𝔸^2 is
(𝔾_m×𝔸^1) ∪ {(0,0)},
which is not a locally closed subset. However, it is a constructible subset.
The previous examples motivate the following question.
Is log rationality a constructible condition in families of Fano varieties?
Rationality specializes in smooth families <cit.>.
Thus, the following question seems to be a reasonable starting point
for studying log rationality in families.
Let (𝒳,ℬ)→ C be a log smooth family for which the generic fiber is log rational. Are closed fibers log rational?
The previous question is closely related to the log rationality conjecture proposed
by Ducat and Enwright, Figueroa, and the second author (see <cit.> and <cit.>).
Finally, our proof of Theorem <ref> requires that the fibers of the family are Q-factorial and terminal (see Remark <ref>). However, we are not aware of any counterexamples to Theorem <ref> when these assumptions on the singularities are removed.
Is the property of being cluster type a constructible condition in families of Fano varieties with worse than terminal Q-factorial singularities?
|
http://arxiv.org/abs/2409.02160v1 | 20240903180000 | Light-Ray Wave Functions and Integrability | [
"Alexandre Homrich",
"David Simmons-Duffin",
"Pedro Vieira"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2409.03111v1 | 20240904223432 | What is Normal? A Big Data Observational Science Model of Anonymized Internet Traffic | [
"Jeremy Kepner",
"Hayden Jananthan",
"Michael Jones",
"William Arcand",
"David Bestor",
"William Bergeron",
"Daniel Burrill",
"Aydin Buluc",
"Chansup Byun",
"Timothy Davis",
"Vijay Gadepally",
"Daniel Grant",
"Michael Houle",
"Matthew Hubbell",
"Piotr Luszczek",
"Lauren Milechin",
"Chasen Milner",
"Guillermo Morales",
"Andrew Morris",
"Julie Mullen",
"Ritesh Patel",
"Alex Pentland",
"Sandeep Pisharody",
"Andrew Prout",
"Albert Reuther",
"Antonio Rosa",
"Gabriel Wachman",
"Charles Yee",
"Peter Michaleas"
] | cs.NI | [
"cs.NI",
"cs.CR",
"cs.CY",
"cs.SI"
] |
What is Normal? A Big Data Observational Science Model of Anonymized Internet Traffic
Research was sponsored by the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. Use of this work is controlled by the human-to-human license listed in Exhibit 3 of https://doi.org/10.48550/arXiv.2306.09267
Jeremy Kepner^1, Hayden Jananthan^1, Michael Jones^1, William Arcand^1, David Bestor^1, William Bergeron^1,
Daniel Burrill^1, Aydin Buluc^2, Chansup Byun^1, Timothy Davis^3, Vijay Gadepally^1, Daniel Grant^4, Michael Houle^1,
Matthew Hubbell^1, Piotr Luszczek^1,5, Lauren Milechin^1, Chasen Milner^1, Guillermo Morales^1, Andrew Morris^4,
Julie Mullen^1, Ritesh Patel^1, Alex Pentland^1, Sandeep Pisharody^1, Andrew Prout^1, Albert Reuther^1, Antonio Rosa^1,
Gabriel Wachman^1, Charles Yee^1, Peter Michaleas^1
^1MIT, ^2LBNL, ^3Texas A&M, ^4GreyNoise, ^5University of Tennessee
September 2024
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Understanding what is normal is a key aspect of protecting a domain. Other domains invest heavily in observational science to develop models of normal behavior to better detect anomalies.
Recent advances in high performance graph libraries, such as the GraphBLAS, coupled with supercomputers enables processing of the trillions of observations required.
We leverage this approach to synthesize low-parameter observational models of anonymized Internet traffic with a high regard for privacy.
Internet traffic, anonymized analysis, streaming graphs, traffic matrices, network models
§ INTRODUCTION
Anomaly detection and signature detection both play important roles in detecting adversarial activities on the Internet and both approaches are increasingly being enabled by big data and machine learning techniques <cit.>.
A core challenge to creating effective anomaly detection systems is the development of adequate models
<cit.>
of typical activity
The concept of normality is one of the main steps to build a solution to detect network anomalies. The question “how to create a precise idea of normality?” is what has driven most researchers into creating different solutions through the years. This can be considered as the main challenge related to anomaly detection and has not been entirely solved yet.
<cit.>
Other domains (land, sea, undersea, air, and space) rely on detailed observational science models of their environment to understand what is normal
<cit.>.
Accordingly, reproducible observations of cyberspace
<cit.>
have been recommended as a core foundation for the science of cyber-security
The highest priority should be assigned to establishing research protocols to enable reproducible [observations].
<cit.>
Significant early results from analyzing the Internet helped establish the emerging field of Network Science
<cit.>.
Improving these results requires ever larger data sets. A priority for expanded observation of the Internet is the need to maintain a high regard for privacy.
The Center for Applied Internet Data Analysis (CAIDA) based at the UC San Diego Supercomputer Center operates the largest Internet telescope in the world and has pioneered trusted data sharing best practices that combine anonymization <cit.>
with data sharing agreements. These data sharing best practices include the following principles <cit.>
* Data is made available in curated repositories
* Standard anonymization methods are used where needed
* Recipients register with the repository and demonstrate a legitimate research need
* Recipients legally agree to neither repost a corpus nor deanonymize data
* Recipients can publish analysis and data examples necessary to review research
* Recipients agree to cite the repository and provide publications back to the repository
* Repositories can curate enriched products developed by researchers
In the broader networking community (commercial, federal, and academia) anonymized source-to-destination traffic matrices with standard data sharing agreements have emerged as a data product that can meet many of these requirements (see Figure <ref>)
<cit.>.
Focusing on anonymized source and destination addresses has helped alleviate privacy concerns because the non-anonymized addresses of Internet packets are already handled by many entities as part of the normal functioning of the Internet.
While an anonymized traffic matrix provides very little information about individual communications on a network, the ability collect trillions of observations over years across the Internet provides a unique opportunity for developing high-precision, low-parameter observational models of anonymized Internet traffic while maintaining a high regard for privacy. These data are particularly useful for addressing the fundamental observational science question for determining what is normal in a domain: Given two observers at different locations and/or times what can they both expect to see?
The organization of the rest this paper is as follows. First, some fundamental network quantities are presented along with how these quantities can be readily computed from anonymized traffic matrices. Second, the statistical properties of these network quantities are examined from of largest available network data sets in the world (CAIDA, MAWI, GreyNoise, ...). Next, these statistical results are synthesized into a low-parameter model of anonymized Internet traffic. Finally, the papers concludes with a summary and a discussion of potential future directions.
§ TRAFFIC MATRICES AND NETWORK QUANTITIES
Network traffic data can be viewed as a traffic matrix where each row is a source and each column is a destination (see Figure <ref>). A primary benefit of constructing anonymized traffic matrices with high performance math libraries, such as, the GraphBLAS<cit.>, is the efficient computation of a wide range of network quantities via matrix mathematics that enable trillions of events to be readily processed with supercomputers
<cit.>.
Figure <ref> illustrates essential quantities found in all streaming dynamic networks. These quantities are all computable from anonymized traffic matrices created from the source and destinations found in Internet packet headers.
The network quantities depicted in Figure <ref> are computable from anonymized origin-destination traffic matrices. It is common to filter network packets down to a valid subset of packets for any particular analysis. Such filters may limit particular sources, destinations, protocols, and time windows. At a given time t, N_V consecutive valid packets are aggregated from the traffic into a hypersparse matrix A_t, where A_t(i,j) is the number of valid packets between the source i and destination j. The sum of all the entries in A_t is equal to N_V
∑_i,j A_t(i,j) = N_V
All the network quantities depicted in Figure <ref> can be readily computed from A_t using the formulas listed in Table <ref>. Because matrix operations are generally invariant to permutation (reordering of the rows and columns), these quantities can readily be computed from anonymized data. Furthermore, the anonymized data can be analyzed by source and destination subranges (subsets when anonymized) using simple matrix multiplication. For a given subrange represented by an anonymized hypersparse diagonal matrix A_r, where A_r(i,i) = 1 implies source/destination i is in the range, the traffic within the subrange can be computed via: A_r A_t A_r. Likewise, for additional privacy guarantees that can be implemented at the edge, the same method can be used to exclude a range of data from the traffic matrix
A_t - A_r A_t A_r
One of the important capabilities of the award-winning SuiteSparse GraphBLAS<cit.> library is direct support of hypersparse matrices where the number of nonzero entries is significantly less than either dimensions of the matrix <cit.>.
If the packet source and destination identifiers are drawn from a large numeric range, such as those used in the Internet protocol, then a hypersparse representation of A_t eliminates the need to keep track of additional indices and can significantly accelerate the computations <cit.>.
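To make the preceding construction concrete, the following minimal sketch (not from the paper) builds A_t from a window of anonymized source/destination pairs, computes a few of the aggregate quantities of Table <ref>, and applies the subrange filtering A_r A_t A_r. It uses ordinary SciPy sparse matrices for readability, whereas the paper relies on hypersparse GraphBLAS matrices; the specific quantity names are assumptions based on Figure <ref>.
# Illustrative sketch only (assumed quantity names, SciPy instead of GraphBLAS).
import numpy as np
from scipy import sparse

def traffic_matrix(src, dst, n_ids):
    # A_t(i, j) = number of valid packets sent from source i to destination j.
    data = np.ones(len(src), dtype=np.int64)
    return sparse.coo_matrix((data, (src, dst)), shape=(n_ids, n_ids)).tocsr()

def network_quantities(A):
    # A few aggregate quantities computable from A_t.
    out_packets = np.asarray(A.sum(axis=1)).ravel()   # packets sent per source
    in_packets = np.asarray(A.sum(axis=0)).ravel()    # packets received per destination
    return {
        "valid packets N_V": int(A.sum()),
        "unique links": int(A.nnz),
        "unique sources": int(np.count_nonzero(out_packets)),
        "unique destinations": int(np.count_nonzero(in_packets)),
        "max source packets": int(out_packets.max()),
        "max destination packets": int(in_packets.max()),
    }

def restrict_to_range(A, ids):
    # Subrange filtering A_r A_t A_r with a diagonal selector matrix A_r.
    d = np.zeros(A.shape[0])
    d[ids] = 1.0
    A_r = sparse.diags(d)
    return (A_r @ A @ A_r).tocsr()

# Toy window of N_V = 6 packets over an anonymized identifier space of size 8.
src = np.array([0, 0, 1, 2, 2, 2])
dst = np.array([3, 4, 3, 5, 5, 6])
A_t = traffic_matrix(src, dst, n_ids=8)
print(network_quantities(A_t))
print(restrict_to_range(A_t, ids=[2, 5, 6]).toarray())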
§ INTERNET STATISTICAL PROPERTIES
When considering what is normal in a particular domain from an observational science perspective a core question to address is
Q Given two observers at different locations and/or times what can they both expect to see?
Answering this question sets the foundation for observational reproducibility that is essential for scientific understanding. To simplify the investigation in the specific context of Internet traffic the above question can be decomposed into several narrower questions that can be explored individually
Q1 Given a sample of N_V Internet packets what are the expected values of various network quantities?
Q2 What is the probability of seeing a specific value of a network quantity?
Q3 Having seen a source of Internet packets what is the probability of seeing that source at a later time?
Q4 What is the probability that two observers will see the same source at a given time?
Q1 deals with the number of packets or the size of a packet window in a given sample of network data. Q2 focuses on the probability distributions obtained from the histograms of the network quantities. Q3 deals with the temporal self-correlations within a specific Internet traffic sensor while Q4 deals with the temporal cross-correlations of separate Internet traffic sensors.
Exploring these questions requires big data. The subsequent analysis draws from the following Internet traffic data sets which are among the largest available for scientific research
* CAIDA Telescope: over 40 trillion mostly malicious packets collected on an Internet darkspace over several years <cit.>
* MAWI: several billion mostly benign packets collected at multiple sites as part the day-in-the-life of the Internet project <cit.>
* GreyNoise: hundreds of millions of mostly malicious web interactions collected over several years from thousands of honeypot systems spread across the Internet <cit.>
* Enterprise gateway: over 100 billion mostly benign packets collected at a large organization <cit.>
§.§ Sample Window Size
One of the first questions encountered when analyzing Internet traffic is how many samples (packets) to collect and at what level of granularity. It is common to filter network packets down to a valid subset of packets for any particular analysis so that at a given time t, N_V consecutive valid packet have been collected for analysis in a traffic matrix. Statistical fluctuations between samples are significantly reduced if N_V is held fixed and the sample time window is allowed to vary.
As N_V increases, the network quantities in Figure <ref> and Table <ref> will all increase. How will the network quantities increase as a function of N_V? For small values of N_V starting at 1 the network quantities may increase linearly. For sufficiently large values of N_V the packets may fill the entire allowed range of sources and/or destinations of the network sensor and the network quantities may level off. Exploring this question with the various large data sets indicates that for intermediate values of N_V the network quantities are often proportional to
N_V^γ
where 0 ≤γ≤ 1. Figure <ref> illustrates a specific example derived from 100 billion packets collected at a large enterprise gateway <cit.>. These scaling relationships are broadly observed with the specific values of the parameters being site specific but stable over time <cit.>.
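As a hedged illustration (not from the paper), the exponent γ in the scaling relation above can be estimated from measurements of any one network quantity at several window sizes by an ordinary least-squares fit in log-log space; the data below are synthetic.
# Illustrative sketch: estimate gamma in  quantity ~ N_V**gamma  from sampled windows.
import numpy as np

def fit_window_scaling(N_V, quantity):
    # Fit quantity ≈ c * N_V**gamma; returns (gamma, c).
    gamma, log_c = np.polyfit(np.log(N_V), np.log(quantity), 1)
    return gamma, np.exp(log_c)

# Synthetic example: a quantity growing roughly like N_V**0.8.
N_V = np.array([2.0**k for k in range(10, 21)])
rng = np.random.default_rng(0)
quantity = 3.0 * N_V**0.8 * rng.lognormal(0.0, 0.05, N_V.size)
gamma, c = fit_window_scaling(N_V, quantity)
print(f"estimated gamma ~ {gamma:.2f}")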
§.§ Probability Distributions
Perhaps one of the most significant early results from analyzing the Internet, which helped establish the emerging field of Network Science, was the observation that many network quantities follow a power-law or heavy-tail distribution <cit.>. In terms of Internet traffic, an example would be that a few destinations on the Internet receive packets from many sources while most destinations receive packets from a few sources. This question is readily explored by the looking at the histograms or probability distributions of network quantities computed from anonymized traffic matrices. The availability of larger data sets have allowed the observations of these probability distributions to become more precise <cit.>. Specifically, the probability of a particular network quantity having a value or degree d is often well-described by the Zipf-Mandelbrot distribution
1/(d + δ)^λ
where typically -1 ≲δ≲ 3 and 1 ≲λ≲ 3. Given sufficient observations, δ and λ can be determined with high-precision. Figure <ref> illustrates the Zipf-Mandelbrot behavior observed from billions of packets from the MAWI data set <cit.>. Similar to the window size scaling relationships, the Zipf-Mandelbrot distribution is broadly observed with the specific values of the parameters being site specific but stable over time.
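As an illustrative sketch (not from the paper), the Zipf-Mandelbrot parameters δ and λ can be recovered from an observed degree histogram by a nonlinear fit of the logarithm of the model; synthetic counts are used here in place of real telescope data.
# Illustrative sketch: fit p(d) ∝ 1/(d + delta)**lam to a degree histogram.
import numpy as np
from scipy.optimize import curve_fit

def log_zipf_mandelbrot(d, log_c, delta, lam):
    # Log of the Zipf-Mandelbrot model, up to a normalization constant c.
    return log_c - lam * np.log(d + delta)

d = np.arange(1, 10_000, dtype=float)
rng = np.random.default_rng(1)
counts = 1.0e6 / (d + 2.0)**1.8 * rng.lognormal(0.0, 0.1, d.size)   # synthetic histogram

(log_c, delta, lam), _ = curve_fit(
    log_zipf_mandelbrot, d, np.log(counts),
    p0=(np.log(counts[0]), 1.0, 1.5),
    bounds=([-np.inf, -0.9, 0.5], [np.inf, 10.0, 5.0]),
)
print(f"fitted delta ~ {delta:.2f}, lambda ~ {lam:.2f}")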
Historically, it is worth noting that initial interest in these distributions focused on the power-law parameter λ, as this parameter described the behavior of the largest and most popular sources on the Internet <cit.>. More recently δ has emerged as a way of describing large numbers of less popular sources that may be collectively involved in adversarial network activity.
§.§ Temporal Self-Correlations
If an observer sees a source on the Internet what is the probability that the source will be seen again at a later time? This is the essential question that self-correlations seek to answer. The network traffic data sets can be used to address this questions by measuring the probability of seeing a source again at time t. Figure <ref> illustrates these probabilities for the CAIDA and GreyNoise data sets over months and years <cit.>. Intriguingly the source self-correlations are well approximated by the modified Cauchy distribution
β/β + t^α
where typically 0 < α≲ 1 and β > 0. These parameters are site specific and differ significantly between benign and malicious data. The modified Cauchy distribution can be characterized by the time it takes for the probability to drop to one half
t_ half = β^1/α
In the case of the GreyNoise benign data this timescale is years while the malicious data has much shorter times scales of days, hours, and minutes.
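A hedged sketch (not from the paper) of how the modified Cauchy parameters and the resulting half-life could be estimated from measured recurrence probabilities; the inputs here are synthetic.
# Illustrative sketch: fit p(t) = beta / (beta + t**alpha) and report t_half.
import numpy as np
from scipy.optimize import curve_fit

def modified_cauchy(t, alpha, beta):
    # Probability of seeing a source again after a time lag t.
    return beta / (beta + t**alpha)

t = np.linspace(1.0, 10_000.0, 500)                      # time lag (arbitrary units)
rng = np.random.default_rng(2)
p_obs = modified_cauchy(t, 0.7, 50.0) + rng.normal(0.0, 0.005, t.size)

(alpha, beta), _ = curve_fit(modified_cauchy, t, p_obs, p0=(0.5, 10.0))
t_half = beta**(1.0 / alpha)                             # time for the probability to halve
print(f"alpha ~ {alpha:.2f}, beta ~ {beta:.1f}, t_half ~ {t_half:.0f}")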
§.§ Temporal Cross-Correlations
Similar to self-correlations, it is likewise possible to explore the probability that a second observer will see a source at the same or a different time. If the self-correlations of two network sensors are observed to follow a modified Cauchy distribution it is not surprising that their cross-correlations in time are also observed to follow a modified Cauchy distribution <cit.>. Perhaps more fundamental is the probability that a source seen by one observer will even be seen by another observer. Figure <ref> plots the probability of a source seen by the CAIDA telescope also being seen in the same month by the GreyNoise honeyfarm. This probability is strongly dependent upon the number of packets the source has sent to the CAIDA telescope and is well approximated by the formula
log_2(d)/log_2(N_V^1/2)
for d < N_V^1/2. Simply put, if a source emits a lot of packets it is more likely to be seen.
§ MODEL SYNTHESIS
The empirically motivated models from the previous section allow the core question to be refined around the variables that directly impact the observability of network traffic. These variables include the window size N_V, the number of packets d observed from a source, and the time t between observations. The more precise question then becomes
Q Given a window with N_V incoming packets, what is the probability of a source sending d packets being observed by a second observer at time t
Based on prior observations, the empirical formula for this probability can be hypothesized to be proportional to
N_V^γ 1/(d + δ)^λ β/β + t^α log_2(d)/log_2(N_V^1/2)
where γ, δ, λ, β, and α are site specific parameters that tend to be stable over time.
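The combined model can be packaged as a single function; the sketch below (not from the paper) evaluates the unnormalized product of the four terms for a hypothetical source, with placeholder parameter values standing in for the site-specific ones.
# Illustrative sketch: evaluate the unnormalized empirical observability model.
import numpy as np

def observability(N_V, d, t, gamma, delta, lam, alpha, beta):
    # Proportional to the probability that a source sending d packets, seen in a
    # window of N_V packets, is also seen by a second observer at time lag t.
    window_term = N_V**gamma
    popularity_term = 1.0 / (d + delta)**lam
    temporal_term = beta / (beta + t**alpha)
    coverage_term = np.log2(d) / np.log2(np.sqrt(N_V))   # stated for d < N_V**0.5
    return window_term * popularity_term * temporal_term * coverage_term

params = dict(gamma=0.8, delta=2.0, lam=1.8, alpha=0.7, beta=50.0)   # placeholders
for d in (2, 16, 256):
    print(d, observability(N_V=2.0**30, d=d, t=60.0, **params))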
The above expression is an empirically motivated formula. Ideally, theoretical models derived from underlying first-principles will be found which are then approximated by the above formula under appropriate conditions. While such a theoretical model does not yet exist, certain logical deductions can be made about the reasonableness of the terms in the above formula.
By definition network quantities grow monotonically with window size and the term N_V^γ is one the simplest formulas satisfying this condition. The power-law dependence of network quantities on their observed value d has been well-observed and the successful preferential attachment model remains a reasonable underlying theoretical framework for these observations <cit.>. Temporal self-correlation and cross-correlation measurements require continuous long-duration coeval observations from multiple observers and are subsequently rarer. The correlation function is defined to have a peak value of 1 at t=0 and it seems intuitive that the probability of seeing a source again would slowly drop-off over time. The modified Cauchy distribution satisfies these conditions. Finally, it is also intuitive that the probability of an observer seeing a source is related to the ratio of the number of packets from the source and the size of the observation window.
§ CONCLUSIONS AND FUTURE WORK
Modern Internet telescopes and high-performance sensors are capable of collecting trillions of observations. Supercomputers and high performance graph analysis libraries, such as the GraphBLAS, allow these big data observations to be analyzed to develop high-precision, low-parameter observational models of Internet traffic. These models provide detailed predictions on the visibility of Internet sources of a given intensity over time and the likelihood such sources will be seen by an observer at a different location. For a given location the parameters of the model tend to be stable over time. Using these models, it is possible to predict in detail many statistical properties of Internet traffic seen at a given location and time. These predictions can assist in correctly placing network sensors by comparing what is expected with what is observed, ensuring zero trust configurations are maintained by revealing when networks have changed, and detecting anomalies due to malicious activity.
Going forward there should be an expansion of Internet observatories like CAIDA and MAWI. The globe currently depends upon a small dedicated community to operate and maintain current network observatories. These lookouts are our only means for obtaining consensus empirical answers to critical questions. These capabilities should be significantly expanded. Furthermore, the underlying network science at scale needs enhancement. Understanding of the underlying processes in any field is discovered by painstaking science. Early efforts on small data sets revealed significant new discoveries and established the field of Network Science <cit.>. Current observations are much larger and are calling out for scientific exploration.
§ ACKNOWLEDGMENTS
The authors wish to acknowledge the following individuals for their contributions and support:
Daniel Andersen, LaToya Anderson, Sean Atkins, Chris Birardi, Bob Bond, Alex Bonn, Koley Borchard, Stephen Buckley, Aydin Buluc, K Claffy, Cary Conrad, Chris Demchak, Phil Dykstra, Alan Edelman, Garry Floyd, Jeff Gottschalk, Dhruv Gupta, Thomas Hardjono, Chris Hill, Charles Leiserson, Kirsten Malvey, Chad Meiners, Adam Michaleas, Sanjeev Mohindra, Heidi Perry, Christian Prothmann, Steve Rejto, Josh Rountree, Daniela Rus, Mark Sherman, Scott Weed, Adam Wierman, Marc Zissman.
|
http://arxiv.org/abs/2409.03183v1 | 20240905021934 | Bypassing DARCY Defense: Indistinguishable Universal Adversarial Triggers | [
"Zuquan Peng",
"Yuanyuan He",
"Jianbing Ni",
"Ben Niu"
] | cs.CL | [
"cs.CL",
"cs.AI",
"I.2.7"
] |
§ ABSTRACT
Neural network (NN) classification models for Natural Language Processing (NLP) are vulnerable to the Universal Adversarial Triggers (UAT) attack, which triggers a model to produce a specific prediction for any input.
DARCY borrows the "honeypot" concept and baits multiple trapdoors to effectively detect the adversarial examples generated by UAT.
Unfortunately, we find a new UAT generation method, called IndisUAT, which produces triggers (i.e., tokens) and uses them to craft adversarial examples whose feature distribution is indistinguishable from that of the benign examples of a randomly chosen category at the detection layer of DARCY.
The produced adversarial examples incur the maximal loss of predicting results in the DARCY-protected models.
Meanwhile, the produced triggers are effective in black-box models for text generation, text inference, and reading comprehension.
Finally, the evaluation results under NN models for NLP tasks indicate that the IndisUAT method can effectively circumvent DARCY and penetrate other defenses.
For example, IndisUAT can reduce the true positive rate of DARCY's detection by at least 40.8% and 90.6%, and drop the accuracy by at least 33.3% and 51.6%, in the RNN and CNN models, respectively.
IndisUAT reduces the accuracy of BERT's adversarial defense model by at least 34.0%, and makes the GPT-2 language model spew racist outputs even when conditioned on non-racial context.
§ INTRODUCTION
Textual neural network (NN) classification models used in Natural Language Processing (NLP) can be fooled and forced to output specific results for any input by attackers using adversarial examples carefully crafted by perturbing original texts <cit.>.
Adversarial examples have successfully fooled NN classification models in a large number of applications, such as fake news detection <cit.>, sentiment analysis <cit.>, and spam detection <cit.>.
The early methods of adversarial example generation are instance-based search methods, which search adversarial examples for specific inputs, but they can be easily identified by spelling detection and semantic analysis.
The current methods mainly rely on learning models that learn and generate adversarial examples for various unknown discrete textual inputs, e.g.,
HotFlip <cit.>, Universal Adversarial Triggers (UAT) <cit.>, and MALCOM <cit.>.
The learning-based methods are attractive, since (1) they have high attack success rates and low computational overhead; (2) they are highly transferable from white-box models to black-box models, even if they have different tokenizations and architectures; and (3) they are usually effective in fooling other models, e.g., reading comprehension and conditional text generation models.
UAT <cit.>, as a powerful learning-based attack, can drop the accuracy of a text inference model from 89.94% to near zero by simply adding short trigger sequences (i.e., a token or a sequence of tokens) chosen from a vocabulary into the original examples. Besides, the adversarial examples generated by UAT for a Char-based reading comprehension model are also effective in fooling an ELMO-based model.
To defend against UAT attacks, DARCY <cit.> was the first method proposed.
It artfully uses the "honeypot" concept and searches for and injects multiple trapdoors (i.e., words) into a textual NN while minimizing the Negative Log-Likelihood (NLL) loss. A binary detector is then trained with the binary NLL loss to distinguish UAT adversarial examples from benign ones.
Therefore, adversarial examples can be detected when the features of the adversarial examples match the signatures of the detection layer where the trapdoors are located.
The literature <cit.> introduced two methods to attack DARCY.
The first one sorts triggers and uses the (l+1)-th trigger instead of the top-l (l=20) triggers to construct an adversarial example, which evades DARCY's detection when only a couple of trapdoors are injected.
The second method uses the trapdoor information estimated by a reverse engineering approach to construct an alternative detection model, and carefully generates triggers that can circumvent the detection.
However, both methods activate the detection layer of DARCY and fail to circumvent it when a typical number of trapdoors, e.g., more than 5, is injected.
In this paper, we design a novel UAT generation method, named Indistinguishable UAT (IndisUAT).
The IndisUAT attack is a black-box, un-targeted attack
that can effectively circumvent DARCY's detection.
The tokens (i.e., words, sub-words, or characters) in the trigger sequences are updated iteratively to search for trigger sequences whose signatures are mismatched with the trapdoors' signatures, so that the trigger sequences do not activate the detection layer of DARCY where the trapdoors are located. Meanwhile, the searched trigger sequences increase the probability that the prediction results stay away from the ground truth.
Fig. <ref> shows an example of IndisUAT.
IndisUAT has the following distinguished features:
* IndisUAT effectively circumvents DARCY, since IndisUAT estimates the feature distribution of benign examples in the view of DARCY's detection layer, and produces adversarial examples to match the feature distribution estimates.
* IndisUAT generates adversarial examples that incur the maximal loss of prediction results in the DARCY-protected models, so that the success rate of the IndisUAT attack is high.
* Extensive experiments show that IndisUAT drops the true positive rate of DARCY's detection by at least 40.8% and 90.6%, and drops the accuracy by at least 33.3% and 51.6%, in RNN and CNN models, respectively;
IndisUAT works for both CNN and BERT models defended by adversarial methods, as it decreases their accuracy by at least 27.5% and 34.0%, respectively;
IndisUAT can be migrated from classification to other NLP tasks (e.g., text generation and question answering).
The IndisUAT code will be available after this paper is published.
§ BACKGROUND
§.§ Related work
Adversarial Attacks in NLP.
The concept of adversarial examples was first introduced by <cit.>. Later, <cit.> found that even minor perturbations of target answers can have negative impacts on reading comprehension tasks. Thus, many generation methods of adversarial examples were proposed for different attack levels (i.e., character-level, word-level, sentence-level, and multi-level) and in different models (e.g., DNN models and pre-trained models).
For example, Textfooler <cit.> in BERT <cit.> and TextBugger <cit.> for multi-level attacks can significantly change the outputs of DNN models.
However, these methods are not universal (input-agnostic), which means that they have poor transferability.
To improve the transferability, <cit.> propose the UAT attack, a universal attack method for many NLP tasks such as text classification, reading comprehension, text generation, and text inference.
The UAT attack is independent of the victim classification models and the position of triggers, and it only needs original data and a model that has similar effects on a victim classification model to generate word-level and character-level triggers.
Thus, the UAT attack is highly transferable and resource-efficient. Subsequently, <cit.> added a semantic information processing step during the UAT generation to make UAT more consistent with the natural English phrases.
However, the UAT attacks can be effectively detected by DARCY.
Defenses Against Adversarial Attacks in NLP.
Many defense methods <cit.> have been proposed to prevent adversarial attacks by adding noisy words into the inputs of NLP models. The amount of added noisy data determines the robustness of the trained models. However, if too much noisy data is injected into the inputs, the output of the model degrades. Subsequently, adversarial training
methods <cit.> add noise to the embedding layer of a model instead of the inputs and do not need the injection of extra adversarial examples. They maximize the disruption to the embedding layer while minimizing the corresponding loss during the training process.
Thus, the adversarial training methods can avoid the over-fitting issue and improve the generalization performance of the model.
Unfortunately, they usually fail to protect the models against pervasive UAT attacks.
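As a concrete illustration of embedding-level adversarial training, a minimal PyTorch-style sketch of one training step is given below; the model interface (`embed`, `classify`) and all hyper-parameter values are illustrative assumptions and do not reproduce the exact PGD, FreeAt, or FreeLb implementations.

```python
import torch
import torch.nn.functional as F

def embedding_adv_training_step(embed, classify, input_ids, labels, eps=1.0, alpha=0.3, steps=3):
    """One adversarial training step that perturbs the embedding layer instead of the input tokens."""
    emb = embed(input_ids).detach()
    delta = torch.zeros_like(emb, requires_grad=True)
    for _ in range(steps):                       # inner loop: maximize the disruption to the embeddings
        loss = F.cross_entropy(classify(emb + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad / (grad.norm() + 1e-12)).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
    # outer step: minimize the loss on the perturbed embeddings (no extra adversarial examples needed)
    return F.cross_entropy(classify(emb + delta), labels)
```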
<cit.> recently proposed DARCY, a defense method that first traps UAT and protects text classification models against UAT attacks.
DARCY artfully introduces the honeypot concept and uses a backdoor poisoning method to generate trapdoors. The trapdoors are mixed with original data and trained together to get a detector model that can capture adversarial examples. DARCY is currently the most effective defense method against UAT attacks.
§.§ Analysis of DARCY's detection
The detection performance of DARCY is outstanding due to the following reasons:
(1) the pertinent adversarial examples drop into trapdoors and activate a trapdoor when the feature of the adversarial example matches the signature of the trapdoor, so that the adversarial examples can be captured;
(2) the signature of each trapdoor is different from that of benign examples in the target category, and the signatures are also different between trapdoors to guarantee a low false-positive rate and the effectiveness of trapdoors; and
(3) the detector is built from a single network, and its detection rate increases with the number of trapdoors.
In IndisUAT, the features of the trigger-crafted adversarial examples are similar to those of the benign examples. Therefore, these adversarial examples do not activate the trapdoors located on the DARCY's detection layer. At the same time, the adversarial examples for a randomly-chosen target class are far away from the original ground truth and close to the target class, so as to achieve the purpose of the attack.
§ INDISTINGUISHABLE UAT
§.§ Detection Layer Estimation
The IndisUAT attacker can perform the following steps to estimate the distribution of outputs corresponding to benign examples on the detection layer of DARCY.
(1) Randomly select the candidate examples from the benign examples detected by DARCY to form a set, i.e., D_f^L, where L is the randomly-chosen target class. For each example-label pair (x_i, y_i) ∈ D_f^L, the example x_i ∉ D^L and the label y_i ≠ L, where |D^L_f|=N and D^L is the subset of examples belonging to class L.
(2) Feed the chosen data D^L_f into ℱ_g, where ℱ_g is the binary detector trained in Sec. <ref>.
(3) Estimate the feature distribution of the outputs on the detection layer for benign examples that do not belong to the class L, i.e., ℱ_g^tgt∼ [E[ℱ_g(x_1)], ⋯, E[ℱ_g(x_N)]], where E[ℱ_g(x_i)] is the expected output of ℱ_g with an input x_i ∈ D_f^L.
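A minimal sketch of this estimation step is given below; the detector interface `F_g` (returning its detection-layer output for one example) and the simple averaging used to approximate E[ℱ_g(x_i)] are assumptions about how the expectation is computed in practice.

```python
import numpy as np

def estimate_benign_signature(F_g, D_f_L, n_repeats=1):
    """Estimate the detection-layer response of benign examples whose labels differ from the target class L.

    F_g   : callable mapping one example to its detection-layer output (1-D array)
    D_f_L : list of (x_i, y_i) pairs sampled from the benign examples detected by DARCY, with y_i != L
    """
    signatures = []
    for x_i, _ in D_f_L:
        outs = np.stack([np.asarray(F_g(x_i)) for _ in range(n_repeats)])
        signatures.append(outs.mean(axis=0))      # approximates E[F_g(x_i)]
    return np.stack(signatures)                   # the benign profile F_g^tgt, shape (N, detector_dim)
```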
§.§ Generation of Candidate Triggers
The IndisUAT attacker can perform the following steps to generate candidate triggers.
(1) Set the vocabulary set 𝒱 as described in Sec. <ref>. Set the length of a trigger (a sequence of words) N, an initial token t_init∈𝒱, the number of candidate triggers k, and the threshold of the cosine similarity τ.
A trigger T_L^* is initialized on line 1, Alg. <ref>.
(2) For each batch in D^L_f, run the HotFlip method <cit.> on line 3 of Alg. <ref> to generate the candidate tokens that are as close as possible to the class L in the feature space. The technical details are presented in Sec. <ref>.
(3) For each candidate token, replace T_L^*[0] with the candidate token on line 4 of Alg. <ref> by executing Alg. <ref>, and obtain an initial set of k candidate triggers.
For each i∈[1, N-1] and each initial candidate trigger, run Alg. <ref> to return a set of tuples and finally get a set T_cand. Each tuple contains a candidate trigger T_L^*, the loss for the target prediction ℒ, and the cosine similarity between detecting results before and after adding candidate trigger c^tgt.
The key steps in Alg. <ref> are as follows: (1) replace the id-th word of the trigger with a token to obtain a trigger T_L^* on line 4;
(2) run the model F_θ with inputs T_L^* and the original text examples in batch on line 5, and get ℒ= ℒ(ℱ_θ(x', L), ℱ_θ^tgt(x, L))=ℒ(ℱ_θ(x ⊕ T_L^*, L), ℱ_θ^tgt(x, L)) for each x∈ batch, where x' is a candidate adversarial example created by T_L^*; and
(3) calculate the cosine similarity between the detection results before and after adding T_L^* to get c^tgt on lines 6-7.
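A condensed sketch of this inner loop is shown below; the helper callables `model_loss` (loss of predicting the target class L for a trigger-prepended example) and `detector_out` (DARCY's detection-layer output), together with the use of a single averaged benign signature vector, are illustrative assumptions rather than the exact algorithm implementation.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float).ravel(), np.asarray(b, float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score_candidates(batch, trigger, idx, candidate_tokens, model_loss, detector_out, benign_sig):
    """Replace position idx of the trigger with each candidate token and record (T_L^*, loss, c^tgt)."""
    scored = []
    for tok in candidate_tokens:
        cand = trigger[:idx] + [tok] + trigger[idx + 1:]
        loss = float(np.mean([model_loss(x, cand) for x in batch]))
        c_tgt = float(np.mean([cosine(detector_out(x, cand), benign_sig) for x in batch]))
        scored.append((cand, loss, c_tgt))
    return scored
```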
§.§ Triggers Selection and Update
The IndisUAT attacker can perform the following steps to use a two-objective optimization and select triggers that can bypass DARCY's defense and successfully attack the class L.
(1) Keep the candidate triggers satisfying c^tgt≥τ in each iteration on line 11, Alg. <ref>, and obtain the set of final remaining candidate triggers T_cand.
This indicates that the detection results of adversarial examples generated by adding triggers in T_cand are similar to those of benign examples in D^L_f for the class L, so the adversarial examples can circumvent DARCY's trapdoors.
(2) Build Eq. (<ref>) to select the desired triggers and adversarial examples as:
min_x'∈ D'{cos(ℱ_g^tgt(x'),ℱ_g^tgt(x))},
max_x'∈ D'{ℒ(ℱ_θ(x, x'))},
s.t., x'=x ⊕ T_L^*∈ D', x ∈ D^L_f
T_L^*∈ T_cand.
In the first objective function, the cosine similarity is calculated as c^tgt on line 7, Alg. <ref>.
Since x' can be an adversarial example only if it is misclassified into L, a low similarity between the detection results of x' and those of x outside the class L indicates a higher attack success probability but also a higher probability of being detected.
Thus, the threshold τ strikes a balance between the likelihood of being detected by DARCY and the effectiveness of the IndisUAT attack. τ can be adaptively adjusted in each iteration.
In the second objective function, the loss of predicting results is calculated as ℒ on line 5, Alg. 2.
The maximal loss indicates that ℱ_θ misclassifies the selected x' to the class L with a high probability, thus the selected trigger T_L^* shows a strong attack.
(3) At each iteration in solving Eq. (<ref>), firstly update the embedding for every token in the trigger as shown in Eq. (<ref>), Sec. <ref>. Then, convert the updated embedding back to the corresponding tokens, and obtain a set of the tokens in triggers and a set of corresponding tuples to refresh T_cand.
Finally, find the tuple having maximal ℒ_j in T_cand to obtain the updated trigger T_L^*=cand_j^*, where j^* =max_(cand_j, ℒ_j, c_j^tgt)∈ T_cand (ℒ_j).
An example of IndisUAT is shown in Fig. <ref>.
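A minimal sketch of this two-objective selection is given below, operating on the (trigger, loss, c^tgt) tuples produced during the search; the data layout is an assumption for illustration.

```python
def select_trigger(scored, tau=0.8):
    """Keep candidates whose detector response matches benign examples, then maximize the attack loss.

    scored : list of (candidate_trigger, loss, c_tgt) tuples
    tau    : cosine-similarity threshold for staying indistinguishable at DARCY's detection layer
    """
    T_cand = [t for t in scored if t[2] >= tau]   # first objective: circumvent the trapdoors
    if not T_cand:
        return None
    return max(T_cand, key=lambda t: t[1])[0]     # second objective: maximal loss of the target prediction
```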
§ PRINCIPLE ANALYSIS
IndisUAT searches for and selects adversarial examples indistinguishable from benign examples in the feature space without sacrificing their attack effects, so that IndisUAT deviates from the convergence direction of adversarial examples in the original UAT method and keeps the adversarial examples away from DARCY's trapdoors.
Thus, the detection layer of DARCY is inactive to the adversarial examples generated by IndisUAT.
Fig. <ref> compares the downscaled feature distributions of original examples and adversarial examples before and after UAT
attack and IndisUAT attack.
The triggers generated by the UAT method result in an obvious difference between the benign examples and the adversarial examples for the detector.
Adversarial examples can be detected by DARCY due to the difference in Fig. <ref>.
The adversarial examples generated by IndisUAT deviate from those produced by UAT in the feature space, and merge with original examples.
Since there is no obvious dividing lines between the adversarial examples and the original examples as shown in Fig. <ref> and Fig. <ref>, the original model and DARCY have difficulty in distinguishing the adversarial examples from others.
Thus, the probability of detecting IndisUAT-crafted adversarial examples for DARCY is low.
T-SNE <cit.> is used to generate the distribution results of the examples. A more detailed analysis is provided in Sec. <ref>.
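For reference, a sketch of how such a 2-D comparison can be produced with scikit-learn's T-SNE implementation is shown below; how the example features are extracted from the model is left as an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_distributions(benign_feats, adv_feats, perplexity=30):
    """Project benign and adversarial example features to 2-D and overlay them."""
    X = np.vstack([benign_feats, adv_feats])
    emb = TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(X)
    n = len(benign_feats)
    plt.scatter(emb[:n, 0], emb[:n, 1], s=8, label="benign")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=8, label="adversarial")
    plt.legend()
    plt.show()
```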
§ EXPERIMENTAL EVALUATION
§.§ Settings
Datasets and Threshold setting.
We use the same datasets as DARCY did, including Movie Reviews (MR) <cit.>, Binary Sentiment Treebank (SST) <cit.>, Subjectivity (SJ) <cit.>, and AG News (AG) <cit.>.
Their detailed information is shown in Table <ref>, Sec. <ref>.
We split each dataset into D_train, D_attack, and D_test at the ratio of 8:1:1.
All datasets are relatively class-balanced.
We set the threshold τ=0.8.
Victim Models. We attack the most widely-used models including RNN, CNN <cit.>, ELMO <cit.>, and BERT <cit.>.
Besides DARCY, adversarial training methods are used to defend adversarial attacks, including PGD <cit.>, FreeAt <cit.>, and FreeLb <cit.>.
We report the average results on D_test over at least 5 iterations.
Attack Methods. We compared IndisUAT's performance with three adversarial attack algorithms: (1) Textfooler <cit.> that preferentially replaces the important words for victim models;
(2) PWWS <cit.> that crafts adversarial examples using the word saliency and the corresponding classification probability;
and (3) TextBugger <cit.> that finds the important words or sentences and chooses an optimal one from the generated five kinds of perturbations to craft adversarial examples.
Baselines. For text classification tasks, we use the results from the original model and the DARCY's detector with 5 trapdoors as the benchmarks for the attacks on the original model and the detector model, respectively.
For other tasks, we use the results from the original model as benchmarks. For the original task, benchmark is the result improved by using a pre-training model.
Evaluation Metrics. We use the same metrics as DARCY <cit.> did, including Area Under the Curve (detection AUC), True Positive Rate (TPR), False Positive Rate (FPR), and Classification Accuracy (ACC).
The attacker expects a lower AUC, TPR, ACC, and a higher FPR.
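For clarity, a small sketch of how these detection metrics can be computed with scikit-learn is given below; the fixed decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def detection_metrics(y_true, scores, threshold=0.5):
    """Detection AUC, TPR, FPR, and ACC for a binary adversarial-example detector."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(scores) >= threshold).astype(int)
    auc = roc_auc_score(y_true, scores)
    tpr = ((y_pred == 1) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    fpr = ((y_pred == 1) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    return auc, tpr, fpr, accuracy_score(y_true, y_pred)
```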
§.§ Effect of IndisUAT on DARCY Defense
We choose the clean model as a baseline.
Table <ref> shows that IndisUAT circumvents the detection of DARCY with a high probability.
For the RNN and CNN models, IndisUAT has lower ACC than other attack methods.
IndisUAT drives the ACC of the RNN model at least 33.3% below the baseline on all datasets, and meanwhile reduces the TPR of DARCY's detector by at least 40.8% on all datasets.
For the BERT model, the ACC drops by at least 27.3%, and the detection TPR drops by at least 27.4% on all datasets after the IndisUAT attack.
The IndisUAT attack performs better for the CNN model, since it reduces the ACC of the CNN model by at least 51.6% compared with the baseline, and the TPR of DARCY's detection is reduced by at least 90.6%.
Therefore, DARCY is more vulnerable when it protects the CNN model under the IndisUAT attack.
DARCY can strengthen its detection ability by increasing the number of injected trapdoors.
However, the ACC of the models falls sharply as the number of trapdoors increases as shown in Fig. <ref>.
When 50 trapdoors are added into the CNN model, the ACC drops by 34.64%.
For models with low ACC, DARCY's detector is not able to distinguish the adversarial examples with high accuracy.
Thus, it is technically infeasible for DARCY to defend against the IndisUAT attack by adding an unlimited number of trapdoors.
We discuss the effect of the number of injected trapdoors k on IndisUAT in Fig. <ref>.
We observe that k has an obviously milder impact on the BERT model than that on the RNN and CNN models.
Besides, the AUC, and the TPR are significantly lower than those of baseline in all cases. When k=20, the ACC of the BERT model decreases by 38.2% and 37.8% with DARCY on MR and SJ datasets, respectively.
The corresponding TPR decreases by 53.1% and 31.9%, respectively.
§.§ Effect of IndisUAT on Adversarial Defense
Table <ref> shows that the IndisUAT attack is at work for the adversarial defenses based on PGD, FreeAt, and FreeLb.
The ACC drops by 6.8% to 68.1% after adding the triggers generated by IndisUAT in all cases.
IndisUAT has the least impact on the result from the RNN model over the AG dataset, and its ACC only drops by 8.9% at most.
For the BERT model on the AG dataset, IndisUAT has the most impact on the ACC and incurs a drop of 44.2%-68.1% in the ACC.
The IndisUAT-crafted adversarial examples are semantically more similar to the original examples than the trapdoors are, as measured by the Universal Sentence Encoder (USE) <cit.> and shown in Table <ref> and Table <ref>.
Thus, IndisUAT is difficult to identify with semantic detection methods and is well concealed.
§.§ Effect of IndisUAT on Other Tasks
IndisUAT can be used to attack the models for text generation, text inference, and reading comprehension in addition to the text classification task.
A custom attack dictionary is used, which makes the models even more exposed and vulnerable to unknown attacks.
We target pre-trained models, adversarially trained models, and conventionally trained models to illustrate that IndisUAT is still highly transferable.
Text Generation. IndisUAT is used to generate triggers for racist, malicious speech on the GPT-2 <cit.> model with 117M parameters.
Applying the triggers to the GPT-2 model with 345M parameters also generates malicious or racially charged text, as shown in
Table <ref>. The detailed results refer to Sec. <ref>.
Reading Comprehension.
The SQuAD dataset is used for the questions about why, who, where, when.
The F1 score of the result from BiDAF <cit.> is set as a metric, and only a complete mismatch indicates a successful attack <cit.>.
Table <ref> shows the results, where the triggers generated under BiDAF (white box) migrated to the BiDAF model with ELMO embeddings (BiDAF-ELMO, black box).
Text Inference.
The top-5 triggers are searched and used to attack the ESIM <cit.> (white-box) model for inference tasks.
IndisUAT is highly transferable, since the triggers directly attack black-box models (DA <cit.>, DA model with ELMO <cit.> embeddings (DA-ELMO)) and incur a remarkable decrease in the ACC in Table <ref>.
§ CONCLUSION
We propose IndisUAT, a novel UAT attack that can bypass the DARCY defense.
IndisUAT estimates the feature distribution of benign examples and produces adversarial examples that are similar enough to the distribution estimates at DARCY's detection layer.
Meanwhile, the adversarial examples with the maximal loss of predicted results of the original model are selected to attack the model with a high success rate.
Extensive experiments show that IndisUAT circumvents the DARCY defense even with dozens of injected trapdoors, while reducing the accuracy of the original model, adversarially trained models, and pre-trained models.
Besides text classification tasks, IndisUAT also works for other tasks, e.g., text generation, text inference, and reading comprehension.
Therefore, IndisUAT is powerful and raises a warning to model builders and defenders.
Proposing approaches to protect textual NN models against IndisUAT remains a challenge for future work.
§ LIMITATIONS
IndisUAT generally outperforms other attack methods for many reasons.
First, IndisUAT, as a universal attack method, requires neither white-box (gradient) access nor access to the target model at the inference stage.
The widespread existence of trigger sequences lowers the barrier for attackers to enter into the model.
Second, the trigger search is batch-oriented in the IndisUAT method, while other attacks rely on the results of a single example, so the overall attack effect of IndisUAT is stronger than that of others.
Third, the trigger search can be extended to find more powerful trigger sequences in an extended vocabulary. The time complexity of searching triggers increases linearly with the size of the vocabulary.
However, this increased complexity is negligible, since Top-K, beam search, and KDTree methods can be used to speed up the search process by discarding trigger sequences with low impact on the results.
If the information of the detector is fully obtained, IndisUAT is highly transferable to attack even the black-box defense models with different tokenizations and architectures.
§ BROADER IMPACT STATEMENT
IndisUAT, inspired by FIA <cit.>, uses cosine similarity to build adversarial examples against honeypot-injected defense models.
Although the IndisUAT attack is specifically designed to bypass the DARCY defense, it also provides effective ideas for adversarial example generation to circumvent similar detection and defense mechanisms.
The vulnerability of the learning model can be found using adversarial attack methods, and its robustness can be improved using adversarial defense methods.
Meanwhile, it is necessary for researchers to design novel methods that can filter out potential adversarial examples to improve the robustness of learning models.
§ APPENDIX
§.§ Preliminaries
§.§.§ UAT Attack
Given a textual DNN model ℱ parameterized by θ, an attacker adds a perturbation δ to the original data x, and obtains a perturbed example x'≡ x + δ. x' is an adversarial example, if the addition of x' results in a different classification output, i.e., ℱ_θ(x') ≠ℱ_θ(x).
UAT attack <cit.> consists of two steps.
(1) Trigger Search.
The task loss ℒ for the target class L is minimized to search the best trigger S, i.e., min_Sℒ = -∑_x logℱ_θ(x⊕ S, L).
Trigger S is a fixed phrase consisting of k tokens (original example tokens). ⊕ is token-wise concatenation.
(2) Trigger Update.
The UAT method replaces each trigger token embedding e_adv_i with the vocabulary embedding e'_i that minimizes a first-order Taylor approximation of the task loss, using the gradient ∇_e_adv_iℒ averaged over a batch, i.e.,
argmin_e'_i∈𝒱 [ e'_i - e_adv_i ]^T ∇_e_adv_iℒ,
where 𝒱 is the set of all token embeddings in the model's vocabulary and ^T denotes the transpose.
The embeddings are converted back to their associated tokens, and the tokens that alter the corresponding classification results are selected as the updated triggers.
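A PyTorch-style sketch of this update is shown below; the tensor names and shapes are illustrative assumptions.

```python
import torch

def uat_token_update(vocab_emb, trig_emb, grad):
    """First-order Taylor update of UAT trigger tokens.

    vocab_emb : (|V|, d) embedding matrix of the vocabulary
    trig_emb  : (k, d) current trigger embeddings e_adv_i
    grad      : (k, d) gradient of the task loss w.r.t. the trigger embeddings, averaged over a batch
    """
    # score[v, i] = (e'_v - e_adv_i)^T grad_i for every vocabulary token v and trigger position i
    scores = vocab_emb @ grad.T - (trig_emb * grad).sum(dim=1)
    return scores.argmin(dim=0)   # index of the best replacement token for each trigger position
```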
§.§.§ DARCY
DARCY <cit.> consists of the following three steps.
(1) Trapdoor Search. To defend attacks on a target label L of model ℱ, DARCY performs a multiple-greedy-trapdoor search algorithm H with the inputs of (K, D_train, L) to select K trapdoors S_L^*={w_1,w_2,⋯, w_K}. H has the properties of fidelity, robustness, and class-awareness.
(2) Trapdoor Injection. DARCY injects S_L^* into ℱ by populating a set of trapdoor-embedded examples, and obtains a new dataset D_trap^L = { (S_L^*⊕x, L) : (x,y) ∈ D_y ≠ L}, where D_y ≠ L = {(x,y) ∈ D_train : y ≠ L }.
DARCY baits S_L^* into F by training ℱ to minimize the NLL loss on both original examples and trapdoor-embedded examples.
(3) Trapdoor Detection. DARCY trains a binary classifier ℱ_g using the binary NLL loss, i.e., min_θ_ℱ_gℒ_ℱ_g=∑_x∈ D_train -log(ℱ_g(x)) - log(1-ℱ_g(x')),
where θ_ℱ_g denotes the parameters of ℱ_g, and x' ≡ x ⊕ S_L^*.
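A minimal sketch of this detector objective is given below; the detector `F_g` is assumed to output the probability that an input is benign, and the batching details are illustrative.

```python
import torch

def darcy_detector_loss(F_g, x_batch, x_trap_batch, eps=1e-8):
    """Binary NLL loss for DARCY's detector.

    x_batch      : original training examples
    x_trap_batch : the same examples with the trapdoors S_L^* attached (x' = x ⊕ S_L^*)
    """
    p_benign = F_g(x_batch)        # should approach 1 for original examples
    p_trap = F_g(x_trap_batch)     # should approach 0 for trapdoor-embedded examples
    return -(torch.log(p_benign + eps) + torch.log(1.0 - p_trap + eps)).sum()
```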
§.§.§ HotFlip
In the HotFlip method <cit.>, the attacker feeds the adversarial examples into the original model and then uses back-propagation to obtain the gradients of the loss with respect to the trigger embeddings.
The attacker calculates, at the embedding layer, the products between these gradient vectors and the embeddings of the vocabulary tokens.
The row of the resulting product matrix corresponding to a trigger position can be viewed as a vector over the vocabulary.
All components of this vector are sorted to select the k highest components, and the attacker takes the words in 𝒱 corresponding to these k components as the k candidate tokens.
§.§ More Detailed Analysis
§.§.§ Threshold Analysis
The threshold τ is critical to adaptively circumvent the DARCY defense with k trapdoors.
When k is small, e.g., k < 5,
τ can ensure that the features of the adversarial examples are as similar as possible to the target class and they are not matched with the signature of the detection layer.
When k is large, e.g., k > 10,
the detector is extremely sensitive.
Thus, τ should be set to a large value, e.g., close to 1, when selecting T_cand for Eq. (<ref>).
Then, the first objective of the IndisUAT attack in Eq. (<ref>) is to find the adversarial examples whose output under DARCY is very similar to the detection output of original data under DARCY.
§.§.§ Trigger Analysis
In the process of generating triggers, shorter triggers provide higher concealment.
The default trigger length in IndisUAT is 3.
IndisUAT uses the beam search and pruning method to accelerate searches and achieve a low time complexity O(|𝒱|), where 𝒱 is the vocabulary set.
Thus, the speed of searching triggers in the IndisUAT method is fast.
The searched triggers are effective, because of the constraints on the similarity part of Eq. (<ref>) and the HotFlip method.
For example, even if the length of a trigger is small, e.g., 3, it can successfully compromise the DARCY's detector with 20 trapdoors.
Thus, the IndisUAT method produces effective and imperceptible triggers.
§.§ Further Details of Experiments
* Table <ref> shows the detailed statistics of four datasets used in the experiments as mentioned in Sec. <ref>.
* Table <ref> shows the details of the malicious output of the text generation model in Sec. <ref>.
§.§ Reproducibility
§.§.§ Source Code
We release the source code of IndisUAT at: xxxxxxsource code
§.§.§ Computing Infrastructure
We run all experiments on the machines with Ubuntu OS (v22.04), Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 93GB of RAM, and an RTX 3090. All implementations are written in Python (v3.7.5) with Pytorch (v1.11.0+cu113), Numpy (v1.19.5), Scikit-learn (v0.21.3), allennlp (v0.9.0), Textattack (v0.3.7)[https://github.com/QData/TextAttack]. We use the Transformers (v3.3.0)[https://huggingface.co./] library for training transformers-based BERT. Note that, the version of python can also be 3.6.9.
§.§.§ Model’s Architecture and # of Parameters
The structure of the CNN model with 6M parameters consists of three 2D convolutional layers, a max-pooling layer, a dropout layer with probability 0.5, and a Fully Connected Network (FCN) with softmax activation for prediction.
The pre-trained GloVe <cit.> is used to transform the original discrete texts into continuous features and feed them into the models.
The RNN model with 6.1M parameters uses a GRU layer to replace the convolution layers of CNN, and its other layers remain the same.
The BERT model with 109M parameters is imported from the Transformers library.
The ELMO[https://allenai.org/allennlp/software/elmo] model with 13.6M parameters has an LSTM network, and the size of the input layer and that of the hidden layer of the LSTM are 128 and 1024, respectively.
We construct a vocabulary set, called 𝒱, for the trigger search in IndisUAT.
𝒱 contains 330K words; 126K of them are extracted from the datasets shown in Table <ref>, and the other words are randomly generated.
The features of all words in 𝒱 are taken from the GloVe pre-trained features.
In our experiments, DARCY is run with the vocabulary set 𝒱.
§.§.§ Implementation of Other Attacks
We use the Textattack toolkit <cit.> to generate the adversarial examples of PWWS, TextBugger, and Textfooler.
The parameter settings are shown in Table <ref>. The bert-base-uncased version of the BERT model is used, and the structures of CNN and RNN are the same as those presented in Sec. <ref>.
These adversarial attacks and the IndisUAT attack use the same test datasets, which are extracted from the four datasets shown in Table <ref>.
|
http://arxiv.org/abs/2409.02590v1 | 20240904101449 | Magnon spin transport in the van der Waals antiferromagnet CrPS4 for non-collinear and collinear magnetization | [
"Dennis K. de Wal",
"Muhammad Zohaib",
"Bart J. van Wees"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
[email protected]
Zernike Institute for Advanced Materials, University of Groningen,
Groningen, the Netherlands
Zernike Institute for Advanced Materials, University of Groningen,
Groningen, the Netherlands
§ ABSTRACT
We investigate the injection, transport and detection of magnon spins in the van der Waals antiferromagnet chromium thiophosphate (CrPS4). We electrically and thermally inject magnon spins via platinum contacts and examine the non-local resistance as a function of in-plane magnetic field up to 12 Tesla. We observe a large non-local resistance from both the electrically and thermally excited magnon modes above the spin-flip field, where CrPS4 is in the collinear state. At 25 K, for in-plane fields ranging from 5 to 12 T, we extract a magnon relaxation length λ_m ranging from 200 to 800 nm and a typical magnon conductivity of σ_m≈1×10^4 Sm^-1, which is one order of magnitude smaller than in yttrium iron garnet (YIG) films at room temperature. Moreover, we find that σ_m is almost zero for CrPS4 in the non-collinear state. Our results open the way to understanding the role of the antiferromagnetic magnon modes in spin injection into antiferromagnets and the implementation of two-dimensional magnets for scalable magnonic circuits.
Magnon spin transport in the van der Waals antiferromagnet CrPS4 for non-collinear and collinear magnetization
Bart J. van Wees
September 9, 2024
==============================================================================================================
§ INTRODUCTION
Magnon spin transport has been extensively studied in insulating ferro- and ferrimagnets, by spin pumping<cit.>, the spin Seebeck effect (SSE)<cit.> and electrical injection and detection<cit.>. Antiferromagnets possess several advantages over ferromagnets for spintronic applications, such as stability to external fields<cit.> and operation frequencies up to terahertz scale<cit.>. Long distance magnon transport has been demonstrated in YFeO3<cit.> and antiferromagnetic hematite<cit.>, as well as coherent control of magnon spin dynamics<cit.>. The discovery of insulating ferro- and antiferromagnetic van der Waals materials such as the chromium trihalides (CrX3, X = Cl, Br, I) and transition metal phosphates (MPS3, M = Fe, Mn, Ni, Co) allows for the study of magnon spin transport in the quasi two-dimensional (2D) limit. These materials can be isolated into monolayer or few-layer thicknesses and show diverse inter- and intralayer exchange couplings and magnetic anisotropies. The resulting rich spin textures make 2D antiferromagnetic van der Waals materials a promising platform for the study of spin wave properties and transport.
In these materials the investigation of antiferromagnetic resonance (AFMR) reveals the existence of acoustic and optical magnon modes <cit.>. However, AFMR studies are typically limited to sub-40 GHz excitation frequencies at external fields below 1.5 T, whereas the critical fields and precession frequencies of antiferromagnets often exceed these values. Moreover, AFMR does not resolve the role of the magnon modes in spin transport. The study of the propagation of magnons by employing a non-local geometry unveils information about the transport properties such as the magnon relaxation and magnon conductivity as well as the role of the antiferromagnetic magnon modes on the transport. Magnon transport driven by a thermal gradient (SSE) has been reported in both ferro- and antiferromagnetic van der Waals materials<cit.>. Yet these thermally driven magnons provide only convoluted information about the magnon transport properties.
On the other hand, “all electrical” magnon transport does not suffer these problems, as the exact locations of magnon exitation and detection are clear, allowing for direct electrical control over the magnon currents. This enables the study of the spins carried by the different AFM magnon modes and their effect on transport. Yet, so far in antiferromagnetic van der Waals materials, field tunable all electrical long distance magnon transport has only been shown in CrPS4<cit.>.
Here, we expand this work and perform a detailed study of the injection, transport and detection of spins by exciting the antiferromagnetic magnon modes for magnetic field perpendicular to the anisotropy axis (c-axis) of CrPS4 through investigating the non-local resistance as a function of field and temperature.
§ EXPERIMENTAL DETAILS
In this work, we employ a non-local geometry of 7 nm thick platinum strips on top of a ∼160 nm thick exfoliated CrPS4 flake (Fig. <ref>). A low frequency (<20 Hz) AC charge current I through an injector Pt strip generates an in-plane spin accumulation perpendicular to I via the spin Hall effect (SHE)<cit.>. Moreover, in the same Pt strip the current I, via Joule heating, creates both vertical and lateral thermal gradients in the CrPS4 flake, which drive a spin-Seebeck-generated magnon current. Lastly, in an adjacent detector Pt strip, the two effects both generate non-local voltages V_el and V_th via the inverse SHE. The first comes from the injected spin current and the second from the spin Seebeck effect.
The time-dependent voltage response for I(t) = I_0 sin(ω t) can be expanded as V(t)=R_1I(t) + R_2I^2(t) + ... where R_1 and R_2 are the first and second order resistances. The non-local resistance R_nl=Vdetector/Iinjector can therefore be separated as R_nl^1ω=V_el/I and R_nl^2ω=V_th/I^2. Lebrun et al. <cit.> demonstrate that for the easy axis antiferromagnet hematite (α-Fe2O3), at fields below the spin-flop transition, the magnonic spin currents carry spin polarized along the Néel vector 𝐍 = (𝐦_1 -𝐦_2)/2, where 𝐦_1 and 𝐦_2 are the sublattice magnetizations. Yet, they show that thermal spin currents scale with the net (field-induced) magnetic moment 𝐌 = (𝐦_1 + 𝐦_2)/2. However, the effect of the magnon modes themselves on the transport is not discussed.
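As an illustration of this harmonic separation, the sketch below demodulates a sampled voltage trace into its first- and second-harmonic resistances; it assumes the trace covers an integer number of periods and ignores lock-in phase offsets.

```python
import numpy as np

def harmonic_resistances(t, V, I0, omega):
    """Extract R_nl^1w and R_nl^2w from V(t) measured while driving I(t) = I0*sin(omega*t)."""
    # I^2(t) = I0^2 * (1 - cos(2*omega*t)) / 2, so the 2nd-harmonic part of V is -(R2*I0^2/2)*cos(2wt)
    V1 = 2.0 * np.mean(V * np.sin(omega * t))         # first-harmonic amplitude, R1 * I0
    V2 = -2.0 * np.mean(V * np.cos(2.0 * omega * t))  # second-harmonic amplitude, R2 * I0^2 / 2
    return V1 / I0, V2 / (0.5 * I0**2)                # (R_nl^1w, R_nl^2w)
```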
In figure <ref>, the (k=0) magnon modes for CrPS4 are depicted for non-zero out-of-plane (oop) and in-plane (ip) external field. Figure <ref>a and <ref>b show the modes before and after the spin-flop transition (oop), respectively.
For the ω_∥1 mode (acoustic) the Néel vector is in the y-direction (Fig. <ref>b). Under the assumption that the ip anisotropies H_a=H_b (where the subscript indicated the crystal axis), the frequency of ω_∥2 (optical) vanishes at the spin-flop. We expect that ω_∥2 becomes a soft magnon mode, with zero frequency. In this case only the acoustic mode ω_∥1 has a non-zero frequency<cit.>. Figure <ref>d and <ref>e show the non-collinear (canted AFM) and the collinear (FM) state (for ip), before and after the spin-flip field above which the two sublattices align with the field, respectively. For ip fields, the ip projection (y-direction) of the net magnetization 𝐌 increases linearly with increasing field till the spin-flip field (H_E⊥)<cit.>. The Néel vector 𝐍 is along the z-axis and decreases with increasing field.
The canting angle sinθ_⊥ = H/H_E⊥ is the angle between the static sublattice magnetization and the anisotropy axis (z-axis).
At H>H_E⊥ the system is in the collinear state. The acoustic and optical modes persist; the first, ω_⊥1, is an FM mode, which is the same as for a uniaxial ferromagnet (also known as the Kittel mode).
The latter, ω_⊥2, is the AFM-mode,
where 𝐌 is static and 𝐍 still oscillates in the xz-plane. The dispersion relation at the band edge (k=0) as a function of the external field (at T=2 K) is shown in figure <ref>c for both the in-plane and out-of-plane magnon modes.
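A small numerical sketch of the non-collinear (canted) regime described above is given below; the value of the spin-flip field H_E⊥ is an assumption used purely for illustration.

```python
import numpy as np

def canted_state(H, H_E_perp=8.0):
    """Canting angle and order-parameter projections for an in-plane field below the spin-flip field.

    Uses sin(theta) = H / H_E_perp; above H_E_perp both sublattices are aligned with the field.
    """
    s = np.clip(np.asarray(H, float) / H_E_perp, 0.0, 1.0)
    theta = np.arcsin(s)        # angle between the static sublattice magnetization and the z (easy) axis
    M_y = s                     # in-plane net-magnetization projection, in units of its saturation value
    N_z = np.cos(theta)         # Neel-vector projection on the anisotropy axis, decreasing with field
    return theta, M_y, N_z
```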
A recent study on spin pumping in CrCl3 reveals that, in the spin-flopped state, the spin current of the acoustic mode (ω_∥1) is driven by 𝐌, yet for the optical mode (ω_∥2) it is driven by 𝐍<cit.>. Further, a recent theoretical study by P. Tang et al.<cit.> on spin pumping in non-collinear AFMs shows that when the two modes contribute equally, the spin current follows 𝐌, whereas the spin current component along 𝐍 only plays a role when the modes do not contribute equally (uncompensated Pt/AFM interface).
Despite this understanding, the effect of the antiferromagnetic magnon modes on the transport of magnons, especially in the non-collinear regime with field perpendicular to the anisotropy axis, remains unclear.
Therefore, we explore here which antiferromagnetic magnon modes can be excited and detected by a spin accumulation μ. In the Pt strips, at the Pt/CrPS4 interface, produced by the SHE, the spin accumulation μ can only be generated in y-direction.
Hence, when a magnon mode can carry spin in the y-direction, it can absorb μ. For CrPS4 with an applied field H, we can distinguish 5 phases. For oop fields: 1. H<H_sf, where spin injection is only possible for H∥μ and μ∥ M, N; 2. H_sf<H<H_E∥; and 3. H>H_E∥. For ip fields: 4. 0<H<H_E⊥; and 5. H>H_E⊥, where spin injection is only possible for μ∥ H with H∥ M and H⊥ N.
Secondly, only the magnon modes in which 𝐌 or 𝐍 precess can pump spins into the Pt. Since μ in the Pt is only collinear with H in phase 4. and 5. and only for the acoustic magnon mode ω_⊥1 𝐌 precesses, only ω_⊥1 contributes to spin pumping. By reciprocity, electrical injection of spin can therefore only excite ω_⊥1.
Nonetheless, the above depiction only holds under the assumption that both magnetic sublattices contribute equally to the spin pumping (and injection). With CrPS4 being an A-type antiferromagnet, the second sublattice has a greater distance to the Pt/CrPS4 interface than the first. This could lead to a magnetically uncompensated interface, for which precession of 𝐌 or 𝐍 orthogonal to the y-axis will contribute to the spin pumping as well<cit.>. The two spin currents from CrPS4 into the Pt that result from spin pumping by the two individual sublattices carry spin collinear to their static sublattice magnetizations (dashed line in figure <ref>d). The spin polarizations of these spin currents across the interface have equal projections on the y-axis, but opposite projections on the z-axis. Hence, when both sublattices do not couple equally, the optical magnon mode ω_⊥2 can contribute to the spin pumping (and injection) in proportion to the oscillating 𝐍.
§ RESULTS AND DISCUSSION
In figure <ref>a, R_nl^1ω is plotted at 25 K as a function of magnetic field along the y-axis (ip). We find that at H< 6 T, the system is in the non-collinear state where R_nl^1ω is zero. At 6 T <H< 8 T, R_nl^1ω increases till it saturates at H> 8 T. This behavior agrees with the magnetization of the CrPS4 at 25 K, given in the Supporting Information (SI)<cit.>, where 𝐌 for an in-plane field gradually saturates for 6 T <H< 8 T<cit.>. R_nl^1ω in figure <ref> is given for three injector detector spacings d, which is the edge-edge distance between the Pt strips. These results agree well with the R_nl^1ω obtained from ADMR measurements in our earlier work<cit.>.
At 25 K, k_BT ≫ħω, with ω being the frequency of the magnon modes, hence thermal equilibrium magnons populate both magnon modes (see figure <ref>c) at all field strengths shown in figure <ref>a. Therefore, both modes (ω_⊥1 and ω_⊥2) could contribute to transport. Regardless of which magnon mode contributes, the absence of R_nl^1ω below H_E⊥ is surprising. Possibly, the spin is not conserved when CrPS4 is in the canted AFM state, due to the axial symmetry breaking<cit.>. We discuss other possible reasons later.
The thermally generated magnon transport signal shows a very different trend as a function of field. In figure <ref>b the local (right axis) and non-local (left axis) R^2ω are shown for different d. The advantage of the local over the non-local SSE signal is that the first contains direct information on effect of the magnon modes on the thermal spin pumping. Whereas the latter contains convoluted information on the magnon transport as well. For the local SSE (R^2ω_l), in the non-collinear state, the number of thermally excited magnons increases with increasing sinθ_⊥, i.e. follows the net magnetization (ferromagnetic SSE)<cit.>. However, at fields close to H_E⊥ R^2ω_l increases non-linearly w.r.t sinθ_⊥. Upon saturation of the sublattice magnetization, the signal starts to saturate, yet for H>H_E⊥ the saturation continues up to higher fields than for R_nl^1ω (Fig. <ref>a).
For the non-local SSE signal R_nl^2ω, at H<H_E⊥, (Fig. <ref>b), the dependence on the field is similar to that of the local SSE signal. However, the sign of R_nl^2ω is opposite to R_l^2ω and the amplitude is two orders of magnitude smaller. The former indicates that, driven by the SSE, the magnon chemical potential is opposite. The behavior of R_nl^2ω, can be understood as follows: A thermal magnon current (proportional to ∇ T) is driven away from the injector by the SSE, effectively creating a depletion of magnons at the injector and a magnon accumulation away from the injector. This accumulation drives diffusive magnon currents, proportional to ∇μ_m, towards the injector and detector<cit.>. For H<H_E⊥, the absence of a R_nl^1ω is this state suggests that diffusive transport lengths are very small and the similarity of R_nl^2ω to R_l^2ω and in this state points out that the SSE drives the magnons till just under the detector, where the magnon accumulation is detected.
The increase at H_E⊥, which is very similar to that in R_nl^1ω (figure <ref>a), shows the onset of diffusive magnon transport. Yet, where R_nl^1ω clearly saturates at larger fields, R_nl^2ω decreases and even changes sign at fields far above H_E⊥ for the shortest d. This can be understood by the competing thermal (SSE-driven) and diffusive magnon current in the system, i.e. both the SSE generated magnon accumulation and diffusive magnon currents contribute to R_nl^2ω. The decrease in R_nl^2ω at largest field strengths could indicate that the diffusion lengths increase with increasing field. To our surprise, for larger d, R_nl^2ω actually becomes larger. In YIG a sign change of R_nl^2ω is observed as a function of d. At larger d (where d≪λ_m still holds), R_nl^2ω increases with increasing d<cit.>. Similar behavior could be possible in CrPS4.
In figure <ref>c, R_nl^1ω as a function of field at different oop angles (β) is shown. Note that β=90^∘-θ_⊥. Within the non-collinear regime R_nl^1ω remains zero, whereas in the collinear regime the R_nl^1ω scales with the projection of 𝐌 on the y-axis (cos^2β, as being dependent on the electron spin accumulation at both the injector and detector). For R_nl^2ω, (figure <ref>d), the signal also scales with the projection of 𝐌 (cosβ). For β=90^∘, where the spin-flop is predicted around 0.8 T, neither for R_nl^1ω, nor for R_nl^2ω this transition is observed. As we cannot see any contribution of oop spins and since μ∥𝐍, following the spin pumping in CrCl3<cit.>, only the optical mode (ω_∥2) could contribute to spin pumping. However, this mode is a soft mode in our system.
In figure <ref>, R_nl^1ω and R_nl^2ω are given as a function of temperature. In the non-collinear regime R_nl^1ω is zero for all temperatures above and below the Néel temperature. In the collinear regime a non-monotonous dependence is observed. For temperatures <10 K (at H>7 T) the acoustic magnon mode (ω_⊥1) is possibly not occupied, but R_nl^1ω is non-zero. This suggests that the optical magnon mode (ω_⊥2) does contribute to spin pumping, indicating an unequal coupling of the sublattices. Around 25 K, R_nl^1ω is maximum, and at higher temperature R_nl^1ω decreases, diminishing just above the Néel temperature, TN = 38 K (at larger field strengths, the magnetic ordering is maintained up to slightly higher temperatures; see the magnetization behavior in the SI).
However, the number of magnons and the magnon transport properties are highly temperature dependent, therefore, information on the contributions by the different magnon modes cannot be directly extracted.
In addition, R_nl^2ω shows a very different behavior as a function of temperature. The measured R_nl^2ω changes sign for low temperature (<15 K), where the effect is strongest in the collinear regime. The local SSE R_l^2ω, given in the SI, does not show this behavior. The occupation of the acoustic magnon mode (ω_⊥1) is affected by the temperature, whereas the optical mode (ω_⊥2) will remain populated for all temperatures in figure <ref>b. For fields strength of ≤7 T, the eigen energy ħω_⊥1<k_BT, the acoustic mode could contribute spin pumping. However, the effect of the temperature on the non-local SSE in CrPS4 and the effect of the magnon modes on transport are not fully understood.
Altogether, the injection (excitation) of magnon spins and detection (incoherent spin pumping), and the transport of magnon spins by the magnon modes are entirely different processes. Spin is not a conserved quantity. This holds for injection and detection, and also for magnon spin transport. Thus, magnons excited by a spin current at the injector can diffuse towards the detector where they again create a spin current via spin pumping, but the spin is not necessarily conserved during transport (or injection and detection). Therefore, in our transport measurements we cannot determine if the magnon modes `carry' the spins. Furthermore, the magnon relaxation might be affected by the non-collinearity of the AFM. When the two sublattices are non-collinear, the strong exchange interaction possibly suppresses the magnon conductivity σ_m and relaxation length λ_m. At the spin flip field, the exchange energy is overcome and the sublattices become collinear, possibly allowing a strong increase in both σ_m and λ_m<cit.>.
In figure <ref>a, the R_nl^1ω in the collinear state (at 7 T) is given as a function of edge-edge distance, d, between the injector and detector Pt strip, measured for multiple devices. The model for diffusive magnon transport leads to a decay in R_nl^1ω with increasing d as a function of the λ_m. Under the assumption of a large enough effective interface spin mixing conductance, R_nl^1ω is given by:
R_nl^1ω = (C/λ_m) exp(d/λ_m) / [1 - exp(2d/λ_m)],
where C is a constant capturing all distance-independent pre-factors, such as the magnon conductivity σ_m. For d<λ_m the transport is purely diffusive and R_nl^1ω∼ 1/d, with the transport being Ohmic<cit.>, whereas for d>λ_m, R_nl^1ω decays exponentially as the magnon relaxation sets in. From equation <ref> we extract λ^1ω_m= 490±30 nm with σ_m≈1×10^4 Sm^-1 , at 7 T, under the assumptions elaborated in SI.
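A sketch of how λ_m and the prefactor can be extracted from a set of distance-dependent measurements is shown below; the numerical values are illustrative placeholders, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_nl(d, C, lam):
    """Diffusion-relaxation model for the non-local resistance, Eq. above."""
    return (C / lam) * np.exp(d / lam) / (1.0 - np.exp(2.0 * d / lam))

# Edge-to-edge spacings (m) and non-local resistances (arb. units); placeholder values for illustration
d_data = np.array([0.3, 0.5, 1.0, 1.5, 2.0]) * 1e-6
R_data = np.array([-1.6, -0.9, -0.28, -0.10, -0.04])
(C_fit, lam_fit), _ = curve_fit(r_nl, d_data, R_data, p0=(1e-6, 0.5e-6))
print(f"lambda_m = {lam_fit * 1e9:.0f} nm")
```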
In figure <ref>b, λ_m and σ_m are extracted using equation <ref>, in similar fashion as in Fig. <ref>a, as functions of the ip field around the spin-flip transition. At 5 T, λ_m is smaller than the smallest d on our devices and σ_m is (close to) zero. Here, the magnon transport is heavily suppressed. Increasing in field, between 6-8 T, both transport parameters increase. σ_m increases sharply and saturates at field >8 T, whereas the increase in λ_m is less abrupt and, in our measurements, does not seem to saturate. In fact, >8 T, the decay of R_nl^1ω as a function of d seems to be fully Ohmic, yet due to insufficient data we cannot extract λ_m at these fields (see SI). For these fields we extract σ_m by assuming λ_m ≫ d, see SI.
The effect of the different magnon modes on magnon spin transport is still not entirely disclosed. For systems using the non-local geometries the measured non-local resistance depends on several factors, the SHE and ISHE in the Pt contacts, the transparency of the Pt/CrPS4 interface and the magnon conductivity and relaxation.
The large spin Hall magnetoresistance measured on these samples (see SI) and in previous work<cit.> indicates a transparent interface.
§ CONCLUSION
The exact effect of the antiferromagnetic magnon modes on magnon spin transport in the uniaxial antiferromagnet CrPS4, with the field orthogonal to the anisotropy axis, has so far remained unclear. In the non-collinear regime the magnon relaxation length λ_m and the magnon conductivity σ_m are almost zero. We find that both λ_m and σ_m strongly increase at the spin-flip transition. At 7 T, we find λ_m= 490±30 nm and σ_m≈1×10^4 Sm^-1, the latter being one order of magnitude smaller than the typical values found in 210 nm thick YIG at room temperature<cit.>. The thermally generated magnons via the SSE indicate that both magnon modes contribute to the non-local resistance, yet their individual contributions to transport remain unresolved. Obviously, we do not yet have a full comprehension of the role of the various modes in magnon spin transport. Nevertheless, these results pave the way to understanding and using the antiferromagnetic magnon modes for long-distance magnon spin transport in 3D and 2D van der Waals antiferromagnets.
We want to express our special gratitude towards G.E.W. Bauer, P. Tang and J. Barker for insightful discussions and suggestions. We acknowledge the technical support from J. G. Holstein, H. Adema, H. H. de Vries, and F. H. van der Velde. We acknowledge the financial support of the Zernike Institute for Advanced Materials and the European Union’s Horizon 2020 research and innovation program under Grant Agreements No. 785219 and No. 881603 (Graphene Flagship Core 2 and Core 3). This project is also financed by the NWO Spinoza prize awarded to B.J.W. by the NWO and has received funding from the European Research Council (ERC)
under the European Union’s 2DMAGSPIN (Grant Agreement No. 101053054).
|
http://arxiv.org/abs/2409.03701v1 | 20240905165739 | LAST: Language Model Aware Speech Tokenization | [
"Arnon Turetzky",
"Yossi Adi"
] | cs.CL | [
"cs.CL",
"cs.SD",
"eess.AS"
] |
§ ABSTRACT
Speech tokenization serves as the foundation of speech LM, enabling them to perform various tasks such as spoken language modeling, text-to-speech, speech-to-text, etc. Most speech tokenizers are trained independently of the LM training process, relying on separate acoustic models and quantization methods. Following such an approach may create a mismatch between the tokenization process and its usage afterward. In this study, we propose a novel approach to training a speech tokenizer by leveraging objectives from pre-trained textual LMs. We advocate for the integration of this objective into the process of learning discrete speech representations. Our aim is to transform features from a pre-trained speech model into a new feature space that enables better clustering for speech LMs. We empirically investigate the impact of various model design choices, including speech vocabulary size and text LM size. Our results demonstrate the proposed tokenization method outperforms the evaluated baselines considering both spoken language modeling and speech-to-text. More importantly, unlike prior work, the proposed method allows the utilization of a single pre-trained LM for processing both speech and text inputs, setting it apart from conventional tokenization approaches.
Speech Tokenization, Speech Language Models
§ INTRODUCTION
The development of Speech Language Models (speech LMs) was recently raised as a new research direction within the spoken language processing community <cit.>. Speech LMs are usually composed of two or three main components: (i) a speech tokenizer which converts raw speech signals into discrete tokens; (ii) a uLM, operating over this discrete representation to learn the underlying distribution of speech utterances; and (iii) in the generative setup, a unit-based vocoder which converts speech tokens into a waveform signal <cit.>. The ability to operate directly over raw speech recordings without accessing any textual supervision holds great potential: (i) This can be beneficial for languages that do not have large textual resources or standardized orthography (Swiss German, dialectal Arabic, Igbo, etc.); (ii) It may also be useful for "high-resource" languages, in which the oral and written forms often mismatch; and (iii) It can also model non-verbal cues such as laughter, coughing, etc. Recent studies in the field have shown that following this modeling paradigm can be beneficial for spoken dialogue modeling <cit.>, speaking style conversion and speech emotion conversion <cit.>, direct speech-to-speech translation <cit.>, silent video-to-speech generation <cit.>, etc. In this work, we study the speech tokenization part.
The most common approach nowadays to speech tokenization is applying the k-means algorithm over representations obtained from a pre-trained SSL model, which results in semantic speech tokens <cit.>. While being simple and effective, this approach has several drawbacks. First, it requires a separate training stage on top of the SSL model and cannot be done jointly. Second, it is not clear which layer should be used to obtain representations for training the k-means model. Prior work proposed different layers <cit.> and shows varying results when different layers are chosen, even when considering the same model. Lastly, the k-means model is sensitive to the data used to learn the cluster centroids. Hence, it remains an open question what data should be used to train the clustering model. Prior work found that different dataset splits result in different model performance <cit.>. Another line of speech and audio tokenization was proposed by <cit.> using a Residual Vector Quantization (RVQ) method under the auto-encoding setup. Such a tokenization method is highly general and can capture any type of audio (not only speech); however, it was found to be challenging to optimize on top of it without conditioning on text or semantic tokens <cit.>. Such representations are often denoted as acoustic tokens.
Considering the fact that the above-mentioned tokens will later be used to train SLMs raises the following question: can we construct a speech tokenizer which will be guided by a language model? Moreover, <cit.> recently found that warm-initializing SLM parameters from a pre-trained text LM, such as OPT <cit.> or LLaMA 2 <cit.>, results in superior performance compared to cold, random initialization.
Equipped with the above findings, we propose LAST, which stands for Language Model Aware Speech Tokenization. Specifically, we propose to involve a pre-trained text LM during the tokenization process to obtain speech tokens that are better suited for sequential modeling. Generally, LAST comprises three main components: (i) a pre-trained, frozen speech SSL model which extracts contextualized speech representations from speech signals (we experimented with HuBERT <cit.>, but LAST is general and can be applied to any SSL method); (ii) an adapter-quantization module which converts this contextualized representation into discrete tokens; and (iii) a pre-trained, frozen LM which guides the tokenization process towards better sequential modeling. A visual description of the proposed method can be seen in <Ref>. Notice, in contrast to <cit.>, which fine-tunes a text LM using speech tokens and as a result removes the learned textual information, the proposed approach keeps the text LM frozen and hence does not affect performance on text benchmarks.
We evaluate LAST on a set of zero-resource speech modeling tasks <cit.> and find the proposed method to be superior to the traditional k-means method across all setups. We also evaluate the ability of the newly proposed speech tokens on the task of ASR and find them to produce superior performance to the k-means alternative. We additionally provide an extensive ablation study, which sheds light on the importance of each of the components constructing LAST.
§ BACKGROUND
As mentioned before, the general pipeline for constructing SLMs is composed of two main modules: (i) a speech tokenizer and (ii) a uLM (see Figure <ref> for a visual description). In the following paragraphs, we give background on each of the components, including the standard evaluation methods.
Speech Tokenizer module encodes the raw speech signal into a discrete representation. The common approach is first to encode the speech into a continuous representation and then quantize the representation to achieve a sequence of discrete units <cit.>.
Formally, denote the domain of audio samples by 𝒳⊂ℝ. The representation for a raw signal is therefore a sequence of samples x = (x_1,…, x_T), where x_t∈𝒳 for all 1≤ t ≤ T.
Consider an encoder network, f, that gets as input the speech utterance and outputs a sequence of spectral representations sampled at a low frequency as follows f(x) = (v_1, …, v_T'). Note that we do not assume anything about the structure of the encoder network f. <cit.>, evaluated several speech encoders, namely, Mel-spectrogram, Contrastive Predictive Coding <cit.>, wav2vec2 <cit.>, and HuBERT <cit.>.
Since the representations learned by such models are usually continuous, a tokenization or quantization algorithm is applied over the models' outputs to generate discrete units, denoted as z = (z_1,…,z_T'). Each element z_i in z is a positive integer, z_i∈{1,...,K} for 1≤ i ≤ T', where K is the number of discrete units. We denote the quantization model with Q. The common approach in prior work is to use the k-means algorithm as the quantization method.
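For concreteness, a minimal sketch of this common k-means baseline could look as follows; the dummy features stand in for whichever frozen SSL representations (e.g., a chosen HuBERT layer) a specific system uses, and the code is illustrative rather than any particular published implementation.

# Minimal sketch of the standard k-means tokenization baseline: cluster frozen
# SSL features, then map each frame to the index of its nearest centroid.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def train_kmeans_tokenizer(feature_list, num_units=500, seed=0):
    """feature_list: list of (T_i, d) arrays of frozen SSL features."""
    feats = np.concatenate(feature_list, axis=0)
    km = MiniBatchKMeans(n_clusters=num_units, random_state=seed)
    km.fit(feats)
    return km

def tokenize(km, features):
    """features: (T, d) array -> (T,) array of discrete unit ids."""
    return km.predict(features)

# Usage with dummy features standing in for SSL-model outputs:
rng = np.random.default_rng(0)
dummy = [rng.normal(size=(200, 768)).astype(np.float32) for _ in range(10)]
km = train_kmeans_tokenizer(dummy, num_units=100)
units = tokenize(km, dummy[0])
print(units[:20])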
Unit Language Model is trained on the extracted discrete units, z. Such a language model learns a probability distribution of the learned unit sequences, which enables direct modeling of speech data without textual supervision.
The language model can be used to sequentially model speech utterances, and generate speech conditionally or unconditionally. Moreover, such a modeling framework allows for capturing and modeling prosodic features <cit.>, as well as speaker identity <cit.>, or even natural dialogues <cit.>. This is in contrast to using textual features, as they do not encode such information. In this work, we do not focus on speech generation but rather demonstrate that the proposed speech tokenizer provides better performance for modeling sequential data via the uLM.
Zero-Resource Speech Evaluation. <cit.> proposed a set of zero-shot evaluation tasks specifically targeting speech modeling (i.e., sWUGGY, and sBLIMP). The sWUGGY metric requires detecting the real word from a pair of short utterances such as 'brick' vs. 'blick.' Similarly, sBLIMP requires detecting the syntactically correct sentence from a pair of sentences. In both metrics, detection is done by comparing the probabilities of both sequences. These metrics are desirable under our setup as, unlike perplexity, these allow us to compare models with different tokenizers. We mainly report this method throughout the paper.
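For illustration, a minimal scoring routine for such pairs might look as follows, assuming per-token log-probabilities have already been obtained from the uLM for each tokenized utterance; comparing mean log-probabilities is equivalent to comparing geometric-mean probabilities, and the toy values are made up.

def sequence_score(token_logprobs):
    """Geometric-mean probability of a token sequence,
    computed as the mean log-probability for numerical stability."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def pair_accuracy(pairs):
    """pairs: iterable of (logprobs_real, logprobs_fake) lists,
    e.g. obtained by running the uLM over the two tokenized utterances."""
    correct, total = 0, 0
    for lp_real, lp_fake in pairs:
        correct += sequence_score(lp_real) > sequence_score(lp_fake)
        total += 1
    return correct / total

# Toy usage with made-up log-probabilities:
pairs = [([-2.0, -1.5, -1.8], [-2.5, -2.2, -2.4]),
         ([-1.0, -3.0], [-1.2, -2.0])]
print(pair_accuracy(pairs))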
§ LANGUAGE AWARE SPEECH TOKENIZER
In this section we present LAST. We start by describing the speech tokenization approach guided by a frozen LM, followed by the exact implementation details and the fine-tuning configuration.
§.§ Model
Given a speech utterance x, we first feed it into a frozen and pre-trained speech encoder network f(x) = v = (v_1, …, v_T'), where each v_i∈ℝ^d. Next, v is being fed into a learnable encoder module E(v) = u = (u_1, …, u_T'), where each u_i∈ℝ^d'. Then, a quantization process is performed, via a Vector Quantization (VQ) module, Q, to quantize each u_i. Formally, Q(u) = (z_1,…,z_T'), where each z_i is a positive integer in the range of {1, …, K} and K is the number of codes in Q.
Next, to guide the quantization process toward sequential modeling, we feed z into a pre-trained textual LM to perform next token prediction. To extend the text LM for speech tokens, we add randomly initialized adaptation layers before and after the LM which we freeze for the entire training, hence keeping the existing text LM capabilities unchanged. We analyze the effect of the adapter size in <Ref>.
Similarly to the common practice in training LMs we minimize the negative log-likelihood loss between the predicted and true probability distributions over the learned tokens. Formally, we minimize the following,
ℒ_LM = -∑_i=1^nlog p_θ(z_i | z_i-1, …, z_1),
where we consider θ as the parameters of both the adaptor and new speech tokens look-up table. We do not update the LM parameters, we only backpropagate through it.
Notice, that we learn to tokenize speech via next token prediction. Such an optimization process may easily collapse to a single token or a sequence of tokens. To prevent collapse we introduce a reconstruction loss function to stabilize the optimization process. For that, we introduce a decoder module, D, which gets as input u and is optimized to reconstruct v. Specifically, we minimize the L2 loss between D(u) and v. Overall, the objective function of the system is as follows,
ℒ = ℒ_LM + λ‖D(u) - v‖_2,
where λ is a hyperparameter balancing between both loss functions. A visual description of the proposed method can be seen in <Ref>.
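To make the overall objective concrete, the following PyTorch-style sketch combines the pieces described above (frozen SSL features, learnable encoder, vector quantization with a straight-through estimator, adaptation layers around a frozen LM, and the reconstruction decoder). It is a simplified illustration under stated assumptions, not the authors' implementation: the frozen LM is abstracted as a module mapping input embeddings to hidden states, standard VQ commitment/codebook terms are added for stability, dimensions are placeholders, and details such as repetition removal and the separate text/speech lookup tables are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LASTSketch(nn.Module):
    def __init__(self, frozen_lm, d_feat=768, d_model=512, num_units=500):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_feat, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)   # learnable E
        self.to_code = nn.Linear(d_feat, d_model)
        self.codebook = nn.Embedding(num_units, d_model)            # VQ codebook Q
        self.adapter_in = nn.Linear(d_model, d_model)               # adaptation before the LM
        self.adapter_out = nn.Linear(d_model, num_units)            # adaptation after the LM
        self.decoder = nn.Linear(d_model, d_feat)                   # D, used for the L2 term
        self.frozen_lm = frozen_lm                                   # assumed: embeddings -> hidden states
        for p in self.frozen_lm.parameters():
            p.requires_grad_(False)

    def forward(self, v, lam=1.0):
        # v: (B, T, d_feat) frozen SSL features, e.g. a chosen HuBERT layer.
        u = self.to_code(self.encoder(v))                            # (B, T, d_model)
        w = self.codebook.weight                                     # (K, d_model)
        dist = u.pow(2).sum(-1, keepdim=True) - 2 * u @ w.t() + w.pow(2).sum(-1)
        z = dist.argmin(-1)                                          # (B, T) discrete tokens
        q = self.codebook(z)
        q_st = u + (q - u).detach()                                  # straight-through estimator
        # Standard VQ codebook/commitment terms keep codes and encoder consistent.
        loss_vq = F.mse_loss(q, u.detach()) + 0.25 * F.mse_loss(u, q.detach())
        # Next-token prediction through the frozen text LM (the L_LM term).
        h = self.frozen_lm(self.adapter_in(q_st[:, :-1]))            # (B, T-1, d_model)
        logits = self.adapter_out(h)
        loss_lm = F.cross_entropy(logits.reshape(-1, logits.size(-1)), z[:, 1:].reshape(-1))
        # Reconstruction of v stabilizes training (the lambda-weighted L2 term).
        loss_rec = F.mse_loss(self.decoder(q_st), v)
        return loss_lm + loss_vq + lam * loss_rec, z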
§.§ Details
We set f to be a pre-trained HuBERT-base model <cit.>. Similarly to <cit.>, representations are obtained from its 9th layer. For E, we utilize a transformer encoder followed by a projection layer, where the latent dimension d' is determined by the LM dimension.
We employ VQ <cit.> to discretize the output u, where we experimented with codebook sizes ∈{100, 200, 500, 1000}. We empirically analyze the results for the different codebook sizes in <Ref>.
For the LM part, we utilize the OPT model <cit.> with 125M and 350M parameters. These models are initialized from a pre-trained text LM following <cit.>. We employ a separate lookup table with a new, randomly initialized embedding and add two more projection layers to enable speech tokens in addition to text tokens. In order to feed tokens into the model as input, the first layer projects from the embedding dimension to the model dimension. The second layer maps the model output to the number of tokens, allowing for the calculation of next speech token probabilities. In our setup we use only the speech tokens, but using both text and speech tokens only requires an indicator specifying whether a token is text or speech; this indicator determines which group of layers is used for each token. Similarly to <cit.>, we remove sequential repetitions of speech tokens before feeding them into the LM. Notice, unlike <cit.>, which fine-tunes the text LM and as a result removes text modeling capabilities, the proposed approach keeps the original performance over text benchmarks unchanged while extending the model to speech. We update the LM lookup table with the vector quantization codebooks at each training step and feed z as input to the LM.
Finetune. Although the proposed method allows the utilization of a single pre-trained LM for processing both speech and text inputs, we also evaluate a setup similar to <cit.>, i.e., finetuning the LM. To avoid confusion, we denote the two cases as “Pretrain” and “Finetune”.
§ EXPERIMENTAL SETUP
We use different numbers and types of GPUs across our experiments due to diverse computational requirements, ranging from a single GPU to 4 GPUs out of the Nvidia A5000, A6000, and A40. Due to computational constraints, we use gradient accumulation and set a limit on the maximum number of optimization steps. Unless stated otherwise, except for the speech recognition experiments, we set a limit of 200K optimization steps with a batch size of 32. We sampled 10 seconds from each example and zero-padded from the right whenever needed. We used the AdamW optimizer and a linear-warmup-cosine scheduler that increases the learning rate from 0 to 1e-4/2.5e-5 within 1K/10K steps and then decreases it to 3e-5/1e-5 for the pretraining and finetuning experiments, respectively. Speech recognition experiments were limited to 150K steps with batch size 64, using the same optimizer and scheduler but with a warmup of 5K steps to a learning rate of 1e-4, decreased to 1e-5.
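As a concrete illustration, the pretraining schedule quoted above (linear warmup from 0 to 1e-4 over 1K steps, cosine decay towards 3e-5 over 200K steps) could be realized, for instance, with a LambdaLR wrapper; the snippet below is a generic sketch rather than the authors' training code, and the finetuning variant would simply swap in the 2.5e-5/10K/1e-5 values.

import math
import torch

def warmup_cosine(step, warmup=1_000, total=200_000, peak=1e-4, floor=3e-5):
    # Returns a multiplier applied to the optimizer's base lr (set to peak below).
    if step < warmup:
        return step / max(1, warmup)
    progress = min(1.0, (step - warmup) / max(1, total - warmup))
    cos = 0.5 * (1.0 + math.cos(math.pi * progress))
    return (floor + (peak - floor) * cos) / peak

trainable = torch.nn.Linear(10, 10)          # stand-in for the trainable modules
opt = torch.optim.AdamW(trainable.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=warmup_cosine)
# Call opt.step() followed by sched.step() once per optimization step.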
§ RESULTS
We report results for the zero-resource speech metrics <cit.> together with speech recognition metrics, comparing the proposed method to the k-means alternative.
§.§ Zero-Resource Speech Metrics
We start by evaluating the proposed method using the zero-resource speech metrics proposed by <cit.>, namely sWUGGY and sBLIMP, and compare it to the commonly used k-means tokenizer. Unless stated otherwise, for a fair comparison we follow the same approach as in <cit.> and report the results for the “in-vocabulary” split. For both measures, we compare the geometric mean of the models' sequence probabilities assigned to each utterance within a pair. We report results in two setups. In the first one, we keep the LM frozen and do not update its parameters. In the second setup, we follow TWIST <cit.> and fine-tune the LM initialized from a text model. For a fair comparison to the k-means baseline, we consider two setups of LAST: with and without the adaptation layers. Both LAST and the k-means baseline are built on top of the HuBERT-base model. Results are presented in <Ref> using LAST with the OPT-350M model and a codebook size of 500.
We observe a consistent improvement across all the evaluated setups when following the proposed approach compared to the evaluated baseline. When considering no TWIST fine-tuning of the LM, LAST significantly outperforms the k-means alternative. Interestingly, even without applying TWIST, the proposed method provides performance competitive with TWIST using the k-means tokenizer.
Another benefit of following LAST is that it requires significantly fewer tunable parameters compared to TWIST. For example, obtaining the results presented in <Ref> with LAST requires training ∼230M fewer parameters than TWIST.
§.§ Speech Recognition
Next, we evaluate how well the proposed tokenization method preserves linguistic content. For that, we train an acoustic model for the task of ASR and measure the WER. We train a T5 <cit.> seq2seq model on LS-960. Text tokenization is identical across all methods, using the pre-trained T5 tokenizer.
We additionally measure the ABX metric, as proposed by <cit.>. Unlike WER, which measures content preservation at the word level, ABX operates at the phoneme level. The ABX task measures the phonetic discriminative abilities of the representation. It involves a pair of words differing by a single phoneme and a reference test word sharing a phoneme with one of the pair. It assesses whether the test phoneme is closer in representation to the correct or incorrect phoneme, expecting a shorter distance to the correct one. The ABX task is conducted in two setups: 'within' and 'across'. 'Within' is evaluated on input data from the same speaker, while 'across' is evaluated on input data from different speakers. Notice, our goal is not to construct the best performing ASR model <cit.>; we are primarily interested in comparing the relative results of LAST and the k-means alternative.
Results are presented in <Ref>. When considering the WER metric, LAST shows superior performance to the k-means alternative; however, when considering the ABX metric, the k-means method outperforms LAST. Considering the way both methods were constructed, these findings imply that the k-means model better captures phonemic information.
To better analyze this, we follow <cit.> and visualize the token clusters as a function of phoneme families measured over the TIMIT corpus, which contains full alignments. For each token cluster, we select the majority phoneme, i.e., the one most present in the cluster. Results are depicted in <Ref> and suggest that both methods correlate with phonetic information.
§ ABLATION
We now turn to analyzing the different components composing LAST. We start by evaluating the effect of the vocabulary size, systematically comparing different codebook sizes. Next, we evaluate the effect of the LM used for tokenization on the final SLM. We consider LMs of different sizes from the OPT family of models. Lastly, we consider different numbers of layers for the encoding, decoding, and modality-adapting modules.
§.§ Vocabulary Size
Similarly to <cit.>, we experiment with different codebook sizes to better evaluate the effect of the number of tokens on modelling performance. We measure the zero-resource speech metrics (i.e., sWUGGY and sBLIMP) using LAST with the additional decoding layers. Results are reported in <Ref> and suggest that 500 tokens provide the best performance. Interestingly, using 1000 units provides inferior performance even to 100 units. We suspect this is due to high redundancy in codebook usage.
§.§ Tokenizer-Model Relation
Next, we evaluate the effect of the LM size on the tokenization method and on the resulting SLM. We consider different model sizes from the OPT model family (i.e., 125M and 350M parameters) as either the SLM or the model used for tokenization (denoted by Tokenizer LM). Results are reported in <Ref>.
We observe several interesting findings: (i) although the experiments are conducted within the same model family, the results indicate that our tokenizers can be used by different LMs; (ii) finetuning the 125M SLM with the tokenizer trained with the 350M LM achieves better results than finetuning it with the tokenizer trained with the 125M LM.
§.§ Modelling Choices
Model Architecture. We analyze the effect of different architectural choices of LAST. Specifically, we consider different numbers of layers in the encoder, decoder, and adaptation modules. For the adaptation module, we additionally consider a varying number of layers before and after the LM. In all setups, we report the sWUGGY metric using OPT-350M with 500 tokens. Results are presented in <Ref>.
Results suggest that, except for one setup (second row in <Ref>), the proposed method is not sensitive to the specific design choice. However, a few interesting insights emerge: (i) using 2 layers for all modules provides the best overall results; (ii) using more adaptation layers before the LM provides superior performance to using more adaptation layers after the LM; and (iii) using an equal number of adaptation layers before and after the LM provides the best performance and stabilizes the results, as long as there is enough capacity in the encoder module.
Reconstruction loss. Lastly, we discussed the need for the L2 loss in <Ref> to prevent collapse during model optimization. Specifically, we proposed to regularize the training process with an L2 loss between D(u) and v. To evaluate different regularization alternatives, we experimented with replacing v with a different layer from f (the HuBERT model), specifically layer 10, or with the acoustic features obtained from the first CNN layers of HuBERT. However, we found both options to provide inferior performance to the currently used one.
§ RELATED WORK
Speech Language Models. Speech language models were first demonstrated by <cit.>. The authors showed how raw and uncurated speech data can be leveraged into building a GSLM system. Next, <cit.> proposed a multi-stream SLM to jointly process “pseudo-text” tokens together with quantized prosodic features (i.e., duration and F0). Such a modeling framework opened up a new and promising research direction for processing and modeling spoken data. <cit.> evaluated the robustness and disentanglement properties of speech-to-tokens models and demonstrated the ability to perform voice conversion as well as a lightweight speech codec. <cit.> proposed to cast the task of speech emotion conversion as a translation task, hence translating from one emotion to another in the discrete space, while <cit.> proposed a similar approach for speaking style conversion. <cit.> proposed training two SLMs jointly to mimic natural spoken dialogues. Recently, <cit.> proposed cascading several LMs, in which one LM operates over semantic speech tokens while the others operate on acoustic tokens. Such a modeling framework allows generating natural speech while keeping the identity of the speaker and acoustic conditions unchanged <cit.>. <cit.> show that the semantic units obtained from such models correlate strongly with phonemes and phoneme states. <cit.> followed a similar modeling mechanism using a different speech tokenizer and proposed a textless approach for speech-to-speech translation. <cit.> and <cit.> considered speech as another language in multilingual setups, and showed that involving speech and text as part of the training data improves results for speech translation and multilingual text tasks. <cit.> and <cit.> used joint training to improve transcription tasks such as ASR and TTS. <cit.> proposed augmenting a text LM with continuous speech data to improve spoken question-answering tasks.
Speech and Audio Tokenizers. The most common approach to tokenize speech for spoken language modeling is the k-means method, which was first proposed by <cit.>. Later on, other studies followed a similar modeling paradigm <cit.>, which results in a discrete semantic representation of the speech signal. <cit.> pointed out the lack of robustness in current k-means tokenization methods and proposed to match representations from clean and augmented signals. An alternative is to follow the VQ-VAE approach using an RVQ <cit.>. Due to its objective function, such a representation captures acoustic rather than semantic information. Recent studies either use the semantic tokens <cit.>, the acoustic tokens conditioned on global textual descriptions <cit.>, or both semantic and acoustic tokens <cit.>. Recently, <cit.> proposed to jointly train speech tokenizers to capture both semantic and acoustic tokens. The authors proposed training a discrete auto-encoder using RVQ and optimizing the first RVQ codebook to be similar to the semantic tokens obtained from the k-means model. Notice that all of the above methods are orthogonal to the proposed method and can be applied jointly with it.
Other work relevant to ours is that of <cit.>, which leverages pre-trained text-to-image and text-to-video models (respectively) to build image and video generation models conditioned on audio inputs. The authors proposed to augment the textual inputs with a newly learned single audio token. Unlike these works, our method is focused on speech rather than general audio and, more importantly, aims at learning a full vocabulary over time rather than a single audio token.
§ DISCUSSION
In this study, we propose LAST: a language model aware speech tokenization method. Unlike prior work, which constructs the speech tokens independently of the SLM, LAST involves the SLM during the tokenization process. LAST leverages both a frozen pre-trained contextualized speech encoder and a frozen pre-trained text LM while introducing a lightweight modality adapter. Results suggest LAST provides superior performance to the k-means alternative considering both zero-resource metrics and transcription capabilities. Interestingly, as LAST augments a frozen pre-trained text LM, it is not only superior in terms of speech modeling but also retains the text capabilities of the original LM.
Limitations. While LAST is superior to the k-means alternative and allows joint optimization of the speech tokenizer and the SLM, it requires more computational resources than the standard k-means approach. Additionally, we only presented results for zero-resource speech modeling metrics; more evaluations are needed in the direction of unit-to-speech synthesis. We leave that for future research.
Future work. We intend to extend this work in two main directions: (i) as mentioned above, evaluating LAST also under the unit-to-speech synthesis framework, which will allow us to evaluate the system similarly to the GSLM setup of <cit.>; and (ii) as LAST demonstrated that a frozen text LM can be efficiently augmented with speech tokens, exploring the merger of the two modalities in the form of ASR and TTS.
|
http://arxiv.org/abs/2409.03194v1 | 20240905023425 | Free circle actions on $(n-1)$-connected $(2n+1)$-manifolds | [
"Yi Jiang",
"Yang Su"
] | math.GT | [
"math.GT",
"math.AT",
"57R19, 57S25"
] | |
http://arxiv.org/abs/2409.02542v1 | 20240904085946 | Compact, folded multi-pass cells for energy scaling of post-compression | [
"Arthur Schönberg",
"Supriya Rajhans",
"Esmerando Escoto",
"Nikita Khodakovskiy",
"Victor Hariton",
"Bonaventura Farace",
"Kristjan Põder",
"Ann-Kathrin Raab",
"Saga Westerberg",
"Mekan Merdanov",
"Anne-Lise Viotti",
"Cord L. Arnold",
"Wim P. Leemans",
"Ingmar Hartl",
"Christoph M. Heyl"
] | physics.optics | [
"physics.optics"
] |
1Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, 22607 Hamburg, Germany
2Friedrich-Schiller-Universitat Jena, Max-Wien-Platz 1, 07743 Jena, Germany
3Department of Physics, Lund University, P.O. Box 118, SE-221 00 Lund, Sweden
4GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt, Germany
5Helmholtz-Institute Jena, Fröbelstieg 3, 07743 Jena, Germany
*[email protected]
Combining high peak- and high average power has long been a key challenge of ultrafast laser technology, crucial for applications such as laser-plasma acceleration and strong-field physics.
A promising solution lies in post-compressed ytterbium lasers, but scaling these to high pulse energies presents a major bottleneck.
Post-compression techniques, particularly Herriott-type multi-pass cells (MPCs), have enabled large peak power boosts at high average powers but their pulse energy acceptance reaches practical limits defined by setup size and coating damage threshold.
In this work, we address this challenge and demonstrate a novel type of compact, energy-scalable MPC (CMPC).
By employing a novel MPC configuration and folding the beam path, the CMPC introduces a new degree of freedom for downsizing the setup length, enabling compact setups even for large pulse energies.
We experimentally and numerically verify the CMPC approach, demonstrating post-compression of 8 mJ pulses from 1 ps down to 51 fs in atmospheric air using a cell roughly 45 cm in length at low fluence values.
Additionally, we discuss the potential for energy scaling up to 200 mJ with a setup size reaching 2.5 m.
Our work presents a new approach to high-energy post-compression, with up-scaling potential far beyond the demonstrated parameters.
This opens new routes for achieving the high peak and average powers necessary for demanding applications of ultrafast lasers.
§ INTRODUCTION
Ultrafast laser technology has experienced immense progress within recent years.
Ultrashort, high-peak power lasers are used in a vast range of applications, including attosecond science and high-harmonic generation <cit.>, laser-plasma acceleration <cit.> or high-field science including laser-based nuclear fusion <cit.>. However, developing a laser source which is simultaneously average and peak power scalable remains a major challenge.
The invention of mode-locked solid-state laser technology, in particular Titanium-doped sapphire (Ti:Sa) lasers in combination with chirped-pulse amplification (CPA) enabled ultrashort, few-cycle pulses with unprecedented pulse energy <cit.>.
Nowadays, peak powers exceeding the Terawatt regime are routinely employed <cit.>.
While excelling in peak power performance, Ti:Sa amplifiers are commonly constrained in average power to a few tens of Watts, which can be attributed to their large quantum defect <cit.>.
As an alternative to laser amplification in active gain media, optical parametric processes can be employed.
In particular optical parametric chirped-pulse amplifiers (OPCPA) offer broad bandwidths supporting few-cycle pulses and simultaneously high average powers <cit.>.
However, OPCPA systems suffer from low pump-to-signal efficiencies typically around 10-20% for pulses in the range of 10s of femtoseconds (fs) <cit.>.
Ultrafast Ytterbium (Yb)-based laser architectures, on the other hand, provide excellent average power scalability exceeding 10 kW <cit.>, but their pulse durations are limited to 100s of femtoseconds up to about 1 picosecond (ps). Combining Yb lasers with efficient post-compression methods supporting large (>10) compression factors and high pulse energies can offer an excellent solution to the power scaling challenge.
In recent years, a number of post-compression techniques have been developed, mostly relying on self-phase modulation (SPM) as the nonlinear process for spectral broadening <cit.>.
In particular, gas-based technologies provide excellent tools for post-compression of high power lasers.
Example systems rely on gas-filled hollow-core fibers (HCF) <cit.>, cascaded focus and compression (CASCADE) <cit.>, white-light filaments <cit.>, as well as Herriott-type multi-pass cells (MPCs)<cit.>.
In HCFs, post-compression of 70 mJ, 220 fs pulses down to 30 fs has been demonstrated in a 3 m long fiber <cit.>.
Post-compression of very high pulse energies in the multiple Joule range has been achieved via thin-film spectral broadening techniques. However, typical compression factors lie in the range of only 2-5 <cit.>.
Similar to HCFs, MPCs enable large compression factors reaching 10-20 or more while supporting a wide range of pulse energies. In addition, MPCs support high average powers <cit.> and outperform HCFs in system footprint especially for large compression factors <cit.>.
The maximum attainable energy acceptance in a standard, two-mirror MPC is directly proportional to its size <cit.>. A record of 200 mJ has been achieved in a 10 m long MPC setup <cit.>.
Further energy up-scaling leads to MPC sizes that are impractical for standard laboratory settings.
The development of a highly efficient post-compression method supporting large compression factors and high pulse energies thus remains a key challenge.
We here introduce a new MPC type, the compact MPC (CMPC) which possesses a weakly focused fundamental mode as well as a linear beam pattern on the focusing mirrors.
This geometry allows us to fold the beams inside the MPC using two additional planar mirrors, thus introducing a new energy scaling parameter, the folding ratio Γ.
The CMPC in principle allows for an arbitrary amount of folding and thus, very compact setup sizes while sharing key properties of standard MPCs such as high average power support, excellent beam quality and efficiency.
We experimentally demonstrate spectral broadening of 1030 nm, 8 mJ, 1 ps pulses in a CMPC in atmospheric air using a setup with an effective length of around 45 cm.
We keep the maximum mirror fluence at a moderate level of around 170 mJ/cm^2 and demonstrate compressibility of the 1 ps input pulses down to 51 fs with an MPC throughput reaching 89%, while maintaining excellent spatio-temporal pulse characteristics.
§ CONCEPT
Most MPCs demonstrated for post-compression to date rely on two identical concave mirrors, resembling the most basic optical cavity arrangement. However, more complex designs employing a convex and a concave mirror <cit.> or even multiple additional mirrors can provide advantageous mode-forming capabilities.
MPCs with more than two mirrors have been proposed in previous works focusing on energy-scaling of MPCs <cit.>.
The concept of the CMPC is based on a weakly focused beam and folding of the beam path via multiple reflections on two additional, planar mirrors in each pass through the cell, as shown in Fig. <ref>.
This provides an additional tuning parameter, namely the folding ratio Γ, which reduces the length of the CMPC to L_eff≈ L/Γ.
Figures <ref>(A) and <ref>(B) depict the principle of beam folding and the effective size reduction of the cell together with the configuration regimes for standard MPCs and the CMPC.
In general, the geometry of a symmetric Herriott-type MPC - including a CMPC - is fully determined by three of the following four parameters: radius-of-curvature (ROC) of the mirrors R, the propagation length between the two focusing mirrors L, the number of round-trips N, and the configuration parameter k. These parameters are related by <cit.>:
L/R = 1 - cos(π k/N) .
Typically, k is chosen to be an integer that is relatively prime to the number of round-trips N in order to ensure the re-entrant condition of the MPC.
Moreover, the parameter k determines the angular advance of the beam on the MPC mirror, which is equivalent to the Gouy phase per pass ϕ_G^(1) = π k/N <cit.>.
The amount of Gouy phase per pass determines the caustic of the beam in the MPC. For a standard MPC, ϕ_G^(1) approaches π, which leads to a large mode-size w_m on the mirrors and a small waist w_0 in the focus.
In the CMPC, the beam propagates within the Rayleigh-range, where the accumulated Gouy phase per pass is ϕ_G^(1)<π/2.
This is the case when L/R<1, where the beam radius at the mirror w_m is comparable to the beam radius at focus w_0.
The peak fluence of the beam follows a similar behavior,
with F_0/F_m < 2 for L/R<1 [Fig. <ref>(C)].
This weak focusing geometry with small fluence variations enables the placement of additional optical components within the beam path of the CMPC and thus folding of the beam. In addition, a weakly focused mode eliminates pulse energy limitations arising due to ionization in standard gas-filled MPCs.
Figure <ref> illustrates the CMPC scheme. In Fig. <ref>(A) the setup schematic of a standard MPC with linear pattern alignment is shown.
Here, the strongly focused beams propagate directly between the two focusing mirrors FM1 and FM2, without any optical components in-between.
In the CMPC [Fig. <ref>(B)], two additional planar mirrors PM1 and PM2 are placed behind FM1 and FM2. As seen in Fig. <ref>(B)(ii), the beam propagates from FM1 to PM2 and PM1, where it is folded multiple times before reaching FM2.
Here, the depicted folding ratio is Γ=9.
The total path length between FM1 and FM2 is the length determined by Eq. (<ref>).
However, the effective length is now reduced to L_eff≈ L/Γ, which under the correct choice of R and Γ can become significantly shorter than that of a standard MPC with similar pulse energy acceptance.
The pulse energy acceptance of a gas-filled MPC can be estimated considering the mirror fluence and the focus intensity.
The beam waist radius in the focus and on the mirrors of an MPC can be calculated using the equations w_0^2 = (Rλ/2π) sin(π k/N) and w_m^2 = (Rλ/π) tan(π k/2N), respectively, where λ is the wavelength of the laser <cit.>. The parameters L/R can be directly mapped to k/N according to Eq. (<ref>).
Assuming a laser with Gaussian pulses and pulse energy E, the maximum peak fluence in the focus, F_0 = 2E/(π w_0^2), as well as the peak fluence on the mirror, F_m = 2E/(π w_m^2), can be calculated <cit.>.
For a standard MPC consisting of two identical focusing mirrors, the fluence on these mirrors should not exceed the threshold fluence F_th, yielding a maximum pulse energy
E_MPC ≤ (R λ F_th/2) tan(π k/2N) .
For simplicity, the second energy limitation which can arise at high focus intensity due to ionization is omitted in this discussion as it has little or no relevance for the CMPC.
In a CMPC, where beam folding is achieved by additional mirrors [Fig. <ref>] the beam can be reflected at almost any point between the focusing mirrors including the focus. Therefore, the condition for avoiding damage needs to be F_0 ≤ F_th and thus
E_CMPC ≤ (R λ F_th/4) sin(π k/N) .
In addition, replacing R with L using Eq. (<ref>) and writing L in terms of L_eff≈ L/Γ, we obtain:
E_CMPC ≤ λ F_th Γ L_eff / (4 tan(π k/2N)) .
Equations (<ref>)-(<ref>) illustrate the energy scaling possibilities of MPCs. For standard MPCs [Eq. (<ref>)], the energy scaling options are restricted to maximizing the ratio k/N → 1, increasing the ROC of the mirrors R (which increases L proportionally at constant k/N) or increasing the threshold fluence F_th.
In the case of CMPCs, energy scaling can be achieved differently. Since k/N is generally smaller, the energy acceptance for the same set of parameters R and F_th is typically reduced according to Eq. (<ref>).
Thus, the first step of scaling is achieved by matching R in Eq. (<ref>) such that E_CMPC = E_MPC.
This typically increases L, i.e. L_CMPC > L_MPC.
The second step is to fold the beam by the factor Γ and decrease the length of the system by exploiting L_eff = L/Γ.
To illustrate the discussion with an example, we use a typical set of parameters for a standard MPC with R = 1 m and a λ = 1030 nm, 1 mJ-level laser.
The threshold fluence for quarter-wave stack high-reflectance dielectric coatings can, e.g., be set to F_th = 0.2 J/cm^2, leaving headroom to damage.
With N=15 round-trips and k=14, we arrive at E_MPC ≤ 9.8 mJ and a length of close to 2 m.
To match the pulse energy acceptance of the CMPC to the MPC, we first set k=4, increase the ROC to R = 25 m and the length to L = 8.27 m.
Now folding the beam Γ=25 times, we arrive at E_CMPC ≤ 9.54 mJ with an effective length of L_eff = 33 cm.
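For readers who want to reproduce this example, the short Python script below evaluates the three relations above, assuming the SI unit assignments used in this text (energies in J, lengths in m); it is an illustrative calculation and reproduces the quoted numbers up to rounding.

import numpy as np

lam = 1030e-9          # wavelength [m]
F_th = 0.2 * 1e4       # threshold fluence: 0.2 J/cm^2 -> J/m^2

def cell_length(R, k, N):
    """Mirror separation L for a given configuration [m]."""
    return R * (1 - np.cos(np.pi * k / N))

def mpc_energy(R, k, N):
    """Mirror-fluence-limited energy of a standard two-mirror MPC [J]."""
    return R * lam * F_th / 2 * np.tan(np.pi * k / (2 * N))

def cmpc_energy(R, k, N):
    """Focus-fluence-limited energy of a CMPC [J]."""
    return R * lam * F_th / 4 * np.sin(np.pi * k / N)

# Standard MPC example: R = 1 m, N = 15, k = 14
print(mpc_energy(1.0, 14, 15) * 1e3, "mJ;", cell_length(1.0, 14, 15), "m")
# CMPC example: R = 25 m, N = 15, k = 4, folded by Gamma = 25
Gamma = 25
L = cell_length(25.0, 4, 15)
print(cmpc_energy(25.0, 4, 15) * 1e3, "mJ; L_eff =", L / Gamma, "m")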
A second example is illustrated in Fig. <ref>.
Here we refer to the work reported by Pfaff et al., where 200 mJ pulses have been compressed in a 10 m long MPC, with a fluence on the mirrors of approximately 0.5 J/cm^2 <cit.>.
Figure <ref> shows that the energy acceptance increases with larger folding ratio and effective system length.
At 10 m length, a folding ratio of Γ=25 supports an energy of 700 mJ, and at Γ=49, the accepted energy exceeds 1 Joule.
Conversely, for a pulse energy of 200 mJ, the CMPC size can be scaled down to about 3 m or 1.5 m considering Γ=25 and Γ=49, respectively.
As visible in Figure <ref>, for Γ=1, the fully "unfolded" CMPC is generally longer than the standard MPC considering an identical threshold fluence for both cases. This is due to the fact that the CMPC operates in the L/R<1 regime while the MPC operates at L/R ≈ 2.
§ EXPERIMENT
Our experimental setup, displayed in Fig. <ref>, uses a commercial innoslab Yb laser system delivering 1030 nm, 1 ps pulses at a repetition rate of 1 kHz with a pulse energy of > 8 mJ.
After suitable mode-matching, the beam is sent onto the pick-off mirror POM1 and into the CMPC [Fig. <ref>(B)].
The CMPC consists of two concave 4 inch mirrors with R = 25 m and two planar folding mirrors of 10×10 cm size.
The configuration is set to N=11 round-trips and k=3, which corresponds, according to Eq. (<ref>), to an unfolded MPC length of L = 8.63 m.
Following in-coupling at the pick-off mirror POM1, the beam is sent onto the planar folding mirror PM2 and is subsequently reflected by PM1, with both mirrors being aligned in a V-shaped configuration and separated by about 34 cm.
The beam continues its path as described in Fig. <ref>, forming the typical CMPC pattern.
A folding ratio of Γ=25 results in an effective CMPC length of L_eff = 34.5 cm.
However, due to spatial constraints, the two focusing mirrors FM1 and FM2 are placed a few centimeters behind PM1 and PM2, making the total CMPC length slightly larger, around 45 cm [see Fig. <ref>(B)].
After propagating N=11 round-trips (22 passes), the beam returns on PM1 and is coupled out with a different angle in the vertical direction.
The total number of mirror reflections amounts to 2NΓ = 550 and the total propagation length is roughly 190 m.
We use ambient air at atmospheric pressure (1006 mbar) as the nonlinear medium for spectral broadening.
Nevertheless, a chamber or housing is necessary to avoid beam fluctuations due to turbulences in air.
After out-coupling, the spectrally broadened beam is picked-off with another pick-off mirror POM2 [Fig. <ref>(A)].
Here, we measure an output power of around 7.1 W and thus a total transmission of roughly 89%.
This corresponds to a reflectance of the mirrors of at least 99.98% per reflection, disregarding clipping losses.
Subsequently, a wedge is used for beam sampling, reflecting around 8% of the power, corresponding to roughly 570 µJ per pulse.
The transmission of the wedge is sent onto a power meter.
The wedge-reflected pulse is compressed using a chirped-mirror compressor with 26 reflections, corresponding to about -5200 fs^2 of compensated second-order dispersion.
We analyze the compressed pulses using frequency-resolved optical gating (FROG) and measure the spectrum.
In addition, the beam quality parameter M^2 is analyzed using the same wedge reflection.
We furthermore record input and output near-field pointing using the input/output mirror (M_in) transmission of both input and output beams.
For direct comparison of input and output, both cameras are placed at exactly the same distance d = 42 cm from a common reference point z_0, located at the vacuum chamber input window.
The spectrum and FROG results are shown in Fig. <ref>. The input pulse has a pulse duration of about 1.1 ps with a Fourier-transform limit (FTL) of about 1 ps.
After spectral broadening in the CMPC, the output pulse FTL reaches 50 fs. Following compression, we reach 51 fs [Fig. <ref>(A)].
We simulate the spectral broadening process using the measured input pulse [Fig. <ref>(A)].
The simulations are conducted using our in-house developed (2+1)D radially symmetric simulation code based on Hankel-transforms in the spatial dimension, which is described in the supplemental document [Eqs. (<ref>)-(<ref>)].
The simulated output pulse agrees very well with the measured output pulse, exhibiting similar temporal characteristics after compression. In both measurement and simulation, we observe a temporal pedestal likely stemming from uncompressed spectral components in the longer wavelength range.
On the trailing edge, a post-pulse appears at around 800 fs, which we attribute to a pulse-breakup caused by the delayed response of the Raman-Kerr contribution in air.
The measured and simulated spectra [Fig. <ref>(B)] also show similar characteristics as well as similar broadening, except for the strength of the side lobes compared to the background close to 1030 nm.
We further conduct spectral broadening experiments in 1 bar Krypton.
Due to technical constraints originating from mechanical instabilities of the vacuum chamber, the chamber could only be operated at 1 bar pressure.
We measure the spectral broadening with 6.5 mJ pulse energy and the results are shown in Fig. <ref>, where we reach an FTL of approximately 120 fs.
The beam path inside the CMPC involves 550 reflections and a long propagation path of about 190 m.
The beam is particularly sensitive to displacements of the folding mirrors PM1 and PM2 [Fig. <ref>], where most of the reflections take place.
Thus, it is important to characterize the pointing stability of the beam.
Figures <ref>(A) and <ref>(B) show the measured near-field pointing stability for both input and output beams.
We conduct the measurements separately within a few minutes. The measurements show only a very slight decrease of near-field pointing stability of approximately 5%.
This demonstrates the spatial stability owing to the input-to-output imaging property of the CMPC, also known from standard MPCs.
In addition, we measure the spectral stability of the system over 5 minutes [Fig. <ref>(C)]. We measure RMS fluctuations of the FTL of 0.33 fs, with a mean value of 50.9 fs, resulting in a relative RMS deviation of 0.65%, indicating that the spectral broadening is stable.
We further measure the beam quality parameter M^2 of the input beam after the mode-matching telescope and the output beam. We observe a slight beam quality degradation from M^2=1.36 at the input to 1.76 at the output at full power.
As this observation may indicate spatio-temporal coupling (STC) effects, we decided to further investigate the spatio-spectral pulse characteristics using spatially resolved Fourier transform spectrometry <cit.>.
The STC measurements provide information on the spatially-dependent spectral homogeneity of the pulse V(x,y), which we calculate via the spectral overlap integral as defined in Eqs. (<ref>)-(<ref>) in the supplemental document.
An average spectral overlap V_avg is then computed by weighting the V(x,y) with the fluence F(x,y) obtained from the same dataset and averaging over an area defined by the measured 1/e^2 diameter of the beam.
In air [Fig. <ref>], we measure V_avg=90.6% at full power, confirming a slight degradation of the spatio-spectral homogeneity compared to the input beam, which exhibits an almost perfect homogeneity of V_avg≈ 99%.
We further investigate more generally how the spatio-spectral homogeneity behaves in atomic gases as a function of the input peak power P of the pulse, compared to the critical power P_c of the gas.
The measurement is performed using Krypton as the nonlinear medium at 1 bar pressure and the same CMPC configuration is used as described in the beginning of this section.
We carry out STC measurements at four different pulse energies and thus four different values for P/P_c and measure an STC trace for each point.
Figure <ref> summarizes the STC measurement results.
From the first measurement point [Fig. <ref>(A)] at P/P_c ≈ 0.4 to the third at P/P_c ≈ 0.6, we observe a linear decline from 98.3% to 90.5 %.
The fourth point at P/P_c ≈ 0.65 exhibits a stronger deterioration.
We compare our results with simulations considering again the measured input pulse as input for the simulation.
Here we observe an onset of homogeneity reduction [Fig. <ref>(A)] at approximately P/P_c = 0.55 and an overall weaker deterioration compared to the measurements.
However, the simulations reproduce the measured behavior qualitatively including the onset of homogeneity degradation at around P/P_c ≈ 0.6 well.
At P/P_c ≈ 0.8, the simulations predict a beam collapse causing a sharp decline of V_avg.
The faster degradation of the spatio-spectral homogeneity in the experimental data might be related to imperfect input beam characteristics in the experiment and/or possible spatial phase distortions arising due to many beam reflections on the CMPC mirrors.
For an example point at P/P_c ≈ 0.6, the spatio-spectral distribution obtained via an STC scan is shown in Fig. <ref>(B).
We compare the results with a spectrally broadened pulse from a standard gas-filled MPC with N=17, k=16 and R = 1 m at similar spectral broadening characteristics.
The corresponding spatio-spectral distribution is shown in Fig. <ref>(C). Both measurements indicate quite homogeneous spatio-spectral characteristics and thus a similar spectral broadening performance for both MPC types, with V_avg = 96.4% for the standard MPC and a slightly reduced V_avg = 90.5% for the CMPC.
§ DISCUSSION
The CMPC enables post-compression for high pulse energies at compact setup length, tunable via the folding ratio Γ [Eq. (<ref>)] while exhibiting excellent characteristics known from conventional MPCs. These include support for large compression factors, excellent beam quality and pointing properties as well as high transmission efficiency.
In principle, Γ can be increased to arbitrarily large values, enabling very compact systems.
There are, however, practical limitations.
The mirror size, in particular the mirror length (corresponding to the x-dimension in Fig. <ref>), sets a limit on Γ (a mirror dimension estimate is provided in Eqs. (<ref>)-(<ref>) in the supplemental document).
Increasing Γ also gives rise to a larger amount of total reflections.
This can lead to a decreased throughput and lower efficiency of the system.
In our experiment we measure a throughput of 89% at 550 reflections, thus we can derive an average mirror reflectivity of >99.98%.
Moreover, a large number of reflections can cause wavefront distortions, which we minimize by using mirrors with a high surface flatness of λ/20.
Another factor that can restrict energy scalability is the radius-of-curvature of the mirror substrate that can be manufactured.
As discussed in section <ref> the CMPC scheme requires a regime where the beam inside the cell is loosely focused in order to enable beam folding on additional mirrors.
This in turn fundamentally limits the amount of nonlinear phase ϕ_NL^(1) which can be accumulated per pass to smaller values compared to a strongly focused geometry.
In gas-filled MPCs, the accumulated Gouy phase per pass ϕ_G^(1) sets a theoretical limit on the nonlinear phase, ϕ_NL^(1) ≤ ϕ_G^(1) <cit.>, which is directly related to the MPC geometry via ϕ_G^(1) = π k/N.
In standard MPCs with k/N → 1 this limit approaches ϕ_NL^(1) = π, whereas in the CMPC typically k/N is lower.
In our experiment, the configuration is set to N=11 and k=3 and thus ϕ_NL^(1)≤ 0.27π.
Getting close to this limit, the peak power approaches the critical power of the medium P_c and spatio-temporal couplings can become more pronounced leading to a sudden degradation of the spatio-spectral homogeneity.
However, this effect is not necessarily a limiting factor for spectral broadening in a CMPC.
The generally smaller ϕ_NL^(1) in a CMPC can be compensated by simply employing more passes through the cell. For both MPC types, operation at P < P_c ensures that excellent spatio-spectral and thus spatio-temporal pulse characteritics can be reached.
§ OUTLOOK AND CONCLUSION
In our proof-of-principle experiment we demonstrate post-compression of 8 mJ pulses in a compact setup.
However, we can further scale up the energy by increasing the mirror sizes as well as the radius-of-curvature R of the mirrors.
As an outlook for high-energy CMPC operation, we simulate the post-compression of 200 mJ, 1 ps pulses at 1030 nm central wavelength in a CMPC using 50 mbar of Argon as the nonlinear medium.
We show that post-compression down to 80 fs can be achieved, with a transmission of almost 80%.
Figure <ref> summarizes the simulation results for this scenario, indicating excellent post-compression performance.
Here, we assume mirror reflectivity of 99.98% and consider the influence of the mirror on the spectral phase of the pulse.
We include the effects of all reflections on the mirrors in the simulations, which amount to 2NΓ = 1170.
Using focusing mirrors with R = 300 m and folding mirrors of 35×35 cm size, a folding ratio of Γ = 39 is achievable, assuming a clear aperture of 5 times the beam radius w_m (see supplemental document section (S2)).
In this case an effective setup length of 2.5 m can be reached.
The calculated fluence on the mirrors is kept at roughly 0.5 J/cm^2, including nonlinear self-focusing effects and nonlinear mode-matching taking into account Kerr lensing in the cell <cit.>.
In conclusion, we introduce a novel multi-pass cell scheme enabling post-compression of high-energy laser pulses in a compact setup.
The CMPC enables tuning and down-scaling of the setup size via beam folding using additional planar mirrors, using weakly focused cell modes.
Instead of increasing the length of the setup, the folding ratio Γ acts as the energy scaling parameter. We demonstrate post-compression in air from 1.1 down to 51 in a CMPC with an effective length of 45 and a folding ratio Γ = 25, while keeping the fluence comparable to a standard MPC supporting the same pulse energy but requiring around 2 cell length.
Further up-scaling options promise post-compression of pulses with an energy of 100 and beyond in a table-top setup.
§ ACKNOWLEDGEMENT
We acknowledge Deutsches Elektronen-Synchrotron DESY (Hamburg, Germany), the Helmholtz Institute Jena (Jena, Germany), members of the Helmholtz Association HGF for support and/or the provision of experimental facilities. We further acknowledge the Helmholtz-Lund International Graduate School (project no. HIRS-0018) for funding and support as well as Vetenskapsrådet/Swedish Research Council (grant no. 2022-03519).
§ DISCLOSURE
The authors declare no conflict of interest.
§ DATA AVAILABILITY
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
§ S1. NONLINEAR PULSE PROPAGATION MODEL
The pulse propagation model used for simulations in this work is presented here. We use single-atomic gases as well as ambient air, which mainly consists of molecular gases (N_2 and O_2), as the nonlinear media. In order to conduct simulations, we thus need to take into account time-dependent nonlinear 3^rd-order effects, which stem from coupling of the electric field to the rotational states of the gas molecules <cit.>. The full equation for the propagation model used in this work can be written in the frequency and spatial frequency domain as:
∂E(k_x,k_y,ω)/∂z = i k_z E(k_x,k_y,ω) + P^NL(k_x,k_y,ω) ,
where k_z = √(k^2(ω) - k_x^2 - k_y^2), k(ω)=n(ω)ω/c_0 with ω denoting the radial frequency, k_x and k_y the spatial wave-numbers, c_0 the speed of light in vacuum and n(ω) the refractive index. The second term P^NL in equation (<ref>) contains all the nonlinear effects used in the model. Here we include the Kerr-effect via the nonlinear refractive index n_2 up to its first order derivative, as well as the single damped-oscillator model for the molecular response as described in references <cit.>. The gas-specific single damped-oscillator model is described by the damping time Γ, the frequency Λ, the Raman-Kerr nonlinear refractive index n_2^R and the Raman-Kerr fraction f_R.
In space and time domain, the nonlinear polarization P_NL can be written as:
P^NL(x,y,t) = πε_0 c_0 n(λ)/λ [ (n_2 |E|^2 - i(λ n_2/(2π c_0) + ∂n_2/∂ω)( 2 E^* ∂E/∂t + E ∂E^*/∂t) ) (1-f_R)
+ f_R n_2^R (Γ^2/4 + Λ^2)/Λ Im{e^-(Γ/2 - iΛ) t ∫_-∞^∞ e^(Γ/2 - iΛ) t^' |E(t^')| dt^'}] ,
where E = E(x,y,t) is the electric field of the pulse, E^* the complex conjugate of E, ε_0 the dielectric constant, n(λ) the refractive index and n_2 the nonlinear refractive index. The first derivative dn_2/dω is determined using the scaling formula described in reference <cit.> (Eq. (12)).
Equation (<ref>) describes the delayed molecular response of the medium. For the case of 1 bar of air, we use Λ = 12, Γ = 10, f_R = 0.6, which we extract from reference <cit.>, as well as the Raman-Kerr nonlinear refractive index n_2^R = 42e-24 m^2/W <cit.>. We further use n_2 = 8e-24 m^2/W <cit.> for air and n_2 = 24e-24 m^2/W for krypton <cit.>.
We solve Eq. (<ref>) using a radially symmetric (2+1)D split-step approach with E=E(r,t) and the spatial coordinate r = √(()x^2+y^2), where in the spatial domain, the Fourier-transforms are replaced by Hankel-transforms[In the simulations we use the "pyhank" package by Github user etfrogers for Hankel-transforms. ].
§ S2. CALCULATION OF CMPC MIRROR DIMENSIONS
We here provide some useful equations describing the required minimum mirror dimensions for the CMPC.
We consider a pulse energy E, a threshold fluence F_th as well as the CMPC configuration with k and N.
We determine the focusing mirror radius-of-curvature R, the diameter of the focusing mirrors D (which is the same as the height of the planar mirrors) and the width of the planar folding mirrors W. The corresponding dimensions are shown in Fig. <ref>.
In order to calculate R, we use the equation for the fluence in the focus F_0 = 4E/(λ R sin(π k/N)) <cit.>, set F_0 ≤ F_th and re-arrange such that
R ≥ 4E/(λ F_th) · 1/sin(π k/N) .
To find out the width of the planar folding mirrors W, we need to ensure that the beam at any reflection has sufficient free aperture.
For this, we define a factor β, where β w_m is the distance between the spot on the focusing mirror to the first reflection on the folding mirror [Fig. <ref>], and w_m = (Rλ/π·tan(π k/2N))^1/2 <cit.> is the 1/e^2 beam radius on the focusing mirror.
With some basic geometric considerations, we arrive at:
W ≥ (Γ/4) β √( 4E/(π F_th) · tan(π k/2N)/sin(π k/N) ),
defining the minimum width of the planar folding mirrors.
Here, Γ is the folding ratio.
We typically choose a value β = 5 for our size estimations.
The height of the mirrors, or equivalently, the minimal diameter of the focusing mirrors D can be calculated via:
D ≥ ((2N+1)β/π^3/2) √(Rλ tan(π k/2N)) .
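As a quick consistency check of these three relations, the script below evaluates them for the experimental configuration reported in the main text (E = 8 mJ, F_th = 0.2 J/cm^2, k = 3, N = 11, Γ = 25, β = 5, R = 25 m), assuming SI units throughout; the numbers are illustrative and not part of the original analysis.

import numpy as np

lam, F_th = 1030e-9, 0.2 * 1e4      # wavelength [m], threshold fluence [J/m^2]
E, k, N = 8e-3, 3, 11               # pulse energy [J] and CMPC configuration
Gamma, beta = 25, 5                 # folding ratio and clear-aperture factor

# Minimum ROC from the focus-fluence condition:
R_min = 4 * E / (lam * F_th) / np.sin(np.pi * k / N)
# Minimum width of the planar folding mirrors:
W_min = Gamma / 4 * beta * np.sqrt(4 * E / (np.pi * F_th)
                                   * np.tan(np.pi * k / (2 * N))
                                   / np.sin(np.pi * k / N))
# Minimum diameter of the focusing mirrors, using the ROC actually employed:
R = 25.0
D_min = (2 * N + 1) * beta / np.pi**1.5 * np.sqrt(R * lam * np.tan(np.pi * k / (2 * N)))

print(f"R_min = {R_min:.1f} m, W_min = {W_min*100:.1f} cm, D_min = {D_min*100:.1f} cm")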
§ S3. SPECTRAL HOMOGENEITY CALCULATION AND EXPERIMENTAL DATA
The spatio-spectral homogeneity, expressed as the x and y-dependent spectral overlap V(x,y) as it is used in reference <cit.>, is calculated using the overlap integral
V(x,y) = [∫ I_0(λ) I(λ,x,y) dλ]^2 / ( ∫ I_0^2(λ) dλ · ∫ I^2(λ,x,y) dλ ) × 100 ,
where λ is the wavelength, I the spectral intensity and I_0 the spectral intensity at (x,y) = (0,0).
The averaged spectral homogeneity is then calculated using the average of V(x,y) weighted with the wavelength-integrated intensity F(x,y) = ∫I(λ,x,y) dλ, yielding
V_avg = ∫_-w^w V(x,y) F(x,y) dx dy / ∫_-w^w F(x,y) dx dy ,
where w is the 1/e^2 beam radius in x- and y-direction respectively.
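A direct numpy transcription of these two expressions could look as follows; the array layout (wavelength-first data cube) and the masking choice are assumptions made for illustration only.

import numpy as np

def spectral_homogeneity(I, x0=None, y0=None):
    """I: spectral intensity cube with shape (n_lambda, n_x, n_y).
    Returns V(x,y) in percent and the fluence map F(x,y)."""
    nx, ny = I.shape[1], I.shape[2]
    x0 = nx // 2 if x0 is None else x0
    y0 = ny // 2 if y0 is None else y0
    I0 = I[:, x0, y0]                                  # reference spectrum at (0,0)
    num = np.tensordot(I0, I, axes=(0, 0)) ** 2        # [integral of I0*I dlambda]^2
    den = (I0**2).sum() * (I**2).sum(axis=0)
    V = 100.0 * num / den
    F = I.sum(axis=0)                                  # wavelength-integrated intensity
    return V, F

def averaged_homogeneity(V, F, mask):
    """Fluence-weighted average of V(x,y) over a boolean mask,
    e.g. the measured 1/e^2 beam area."""
    return (V[mask] * F[mask]).sum() / F[mask].sum()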
In Figs. <ref>-<ref> we display V(x,y) for the measurements in krypton and air in the CMPC, as well as the comparison measurement conducted in a standard MPC.
Figure <ref> shows V(x,y) for each measurement point which is displayed in Figure (8)(A) in the main article.
In Fig. <ref>, the spectral homogeneity is shown for the air measurements, carried out at the same parameters as the main spectral broadening and post-compression measurements shown in Fig. 5 in the main article with 1 bar air and 8 pulse energy.
Finally, Fig. <ref> shows V(x,y) for the comparison measurement in a standard MPC.
|
http://arxiv.org/abs/2409.03236v1 | 20240905041313 | Unveiling Context-Related Anomalies: Knowledge Graph Empowered Decoupling of Scene and Action for Human-Related Video Anomaly Detection | [
"Chenglizhao Chen",
"Xinyu Liu",
"Mengke Song",
"Luming Li",
"Xu Yu",
"Shanchen Pang"
] | cs.CV | [
"cs.CV"
] |
Unveiling Context-Related Anomalies: Knowledge Graph Empowered Decoupling of Scene and Action for Human-Related Video Anomaly Detection
Chenglizhao Chen Xinyu Liu Mengke Song^† Luming Li Xu Yu Shanchen Pang
College of Computer Science and Technology, China University of Petroleum (East China)
† Corresponding author: Mengke Song ([email protected])
September 9, 2024
====================================================================================================================================================================================================================================================
§ ABSTRACT
Detecting anomalies in human-related videos is crucial for surveillance applications. Current methods primarily include appearance-based and action-based techniques.
Appearance-based methods rely on low-level visual features such as color, texture, and shape. They learn a large number of pixel patterns and features related to known scenes during training, making them effective in detecting anomalies within these familiar contexts. However, when encountering new or significantly changed scenes, i.e., unknown scenes, they often fail because existing SOTA methods do not effectively capture the relationship between actions and their surrounding scenes, resulting in low generalization.
In contrast, action-based methods focus on detecting anomalies in human actions but are usually less informative because they tend to overlook the relationship between actions and their scenes, leading to incorrect detection. For instance, the normal event of running on the beach and the abnormal event of running on the street might both be considered normal due to the lack of scene information.
In short, current methods struggle to integrate low-level visual and high-level action features, leading to poor anomaly detection in varied and complex scenes.
To address this challenge, we propose a novel decoupling-based architecture for human-related video anomaly detection (DecoAD). DecoAD significantly improves the integration of visual and action features through the decoupling and interweaving of scenes and actions, thereby enabling a more intuitive and accurate understanding of complex behaviors and scenes.
DecoAD supports fully supervised, weakly supervised, and unsupervised settings. In the UBnormal dataset, DecoAD increases the AUC by 1.1%, 3.1%, and 1.7% in fully supervised, weakly supervised, and unsupervised settings, respectively. In the NWPU Campus dataset, it increases the AUC by 0.2% in both weakly supervised and unsupervised settings. We make our source code and datasets publicly accessible at <https://github.com/liuxy3366/DecoAD>.
Human-Related Video Anomaly Detection, Knowledge Graph, Scene-Action Interweaving, Deep Learning.
§ INTRODUCTION
Video anomaly detection is a critical task that involves identifying unusual or abnormal events, behaviors, and activities within video sequences.
This task is essential in several domains, including security, surveillance, public safety, and abnormal behavior analysis <cit.>.
Human-related video anomaly detection refers to specifically detecting anomalies involving human subjects.
This branch of anomaly detection primarily focuses on identifying abnormal activities such as criminal behavior, accidents, or unusual behavior patterns displayed by individuals.
The traditional methods include appearance-based methods and action-based methods.
Most video anomaly detection methods rely on low-level visual features, namely appearance-based methods, to capture human behavior <cit.>. These methods learn to recognize extensive pixel patterns and features related to known scenes during training, thus enabling effective anomaly detection within these familiar contexts.
However, because these methods rely solely on low-level visual features such as color, texture, and shape, they fail to effectively capture the relationship between actions and their surrounding scenes. This results in low generalization and high sensitivity to factors that significantly alter the visual appearance of objects, such as changes in lighting conditions, camera viewpoints, and object occlusion <cit.>.
Consequently, their performance significantly degrades when encountering new or significantly changed scenes. For instance, as shown in Fig. <ref>-A, appearance-based methods can successfully detect a running person in a known road scene but may fail in an unknown scene. To overcome this limitation, many existing video anomaly detection methods consider using high-level action features.
Methods using high-level action features can be categorized as action-based methods. These methods utilize high-level features extracted from videos during training, such as skeletal data and pose estimation <cit.>. These features are compact, well-structured, and highly descriptive of human behaviors and actions, thereby significantly enhancing the model's generalizability. However, existing methods primarily focus on identifying anomalies in human actions, such as running or fighting <cit.>. These methods are often less informative because they tend to overlook the relationship between scenes and human actions. For example, as shown in Fig. <ref>-B, existing action-based methods cannot distinguish between riding a bicycle on the street and riding it in a square. This lack of contextual information leads to detection failures.
Whether appearance-based or action-based, the methods almost always use implicit associations through the model's internal learning mechanisms to capture and represent the relationships between data, as shown in Fig. <ref>-A, B. However, using implicit associations makes it challenging to effectively capture the relationships between features, leading to somewhat chaotic handling of these relationships.
Additionally, these methods tend to memorize training data, meaning the models can only detect anomalies or actions that appeared in the training set. When new scenes or anomaly events occur, the models need to be retrained, which lacks generalizability. In practical applications, companies often do not have sufficient computational resources to retrain models, so they can only use pre-trained models directly. Therefore, a method balancing performance and generalizability is urgently needed.
To further enhance the performance and generalizability of the model, this study introduces a novel decoupling-based architecture for human-related video anomaly detection (DecoAD).
DecoAD uses explicit associations by fusing visual and action features to compensate for the limitations of low-level visual features and address the issue of being less informative.
DecoAD introduces the concept of “Scene-Action Interweaving", which decouples scenes and human actions within video clips and interweaves them with elements from other clips. This approach aims to explore and understand the complex relationships between these scenes and actions.
Specifically, “Scene-Action Interweaving" consists of two main parts: “Relation Interweaving" and “Feature Interweaving". “Relation Interweaving" focuses on learning deep and complex relational patterns between scenes and human actions. “Feature Interweaving" aims to comprehensively understand complex, context-related, and interrelated patterns.
To achieve “Scene-Action Interweaving”, we have designed four main components, as illustrated in Fig. <ref>-C: Scene-Action Decoupling (Sec. <ref>), Relational Knowledge Mapper (Sec. <ref>), Scene-Action Integrator (Sec. <ref>), and Uncertainty Refinement (Sec. <ref>).
Firstly, we decouple scenes and associated human action elements within video clips.
Then, the Relational Knowledge Mapper performs “Relation Interweaving" to obtain scene-action relations.
This involves intricately interweaving the relations of scenes and human actions from different video clips, aiming to understand their complex interactions.
Next, the Scene-Action Integrator is used for “Feature Interweaving" to obtain initial anomaly scores, representing the likelihood of anomalies in the video clips. Finally, Uncertainty Refinement ensures that video clips predicted with uncertain anomaly scores are iteratively fed into the Scene-Action Integrator to obtain more accurate results.
DecoAD has been trained under fully/weakly-supervised and unsupervised conditions, outperforming existing human-related video anomaly detection methods on three widely-used benchmark datasets — NWPU Campus <cit.>, UBnormal <cit.>, and HR-ShanghaiTech <cit.>. The main contributions of this work are summarized as follows.
* In video anomaly detection tasks, the relationship between scenes and actions is often overlooked, leading to suboptimal detection performance. To address this, we propose a novel video anomaly detection framework, DecoAD, which emphasizes the relationship between scenes and actions, achieving finer-grained anomaly detection.
* Current approaches often mix action information with scene data, introducing noise and complexity. Our proposed Scene-Action Decoupling technique effectively separates scenes from actions and removes action information from scenes, minimizing noise and irrelevant features. This significantly boosts model generalization and ensures more reliable and precise anomaly detection.
* Existing methods primarily use implicit associations, which often overlook complex contextual information. We designed a Relational Knowledge Mapper that uses knowledge graphs to explicitly define the relationships between scenes and actions, improving anomaly detection accuracy and adapting to new data. We also developed a Scene-Action Integrator to combine scenes and actions for initial anomaly scores, and Uncertainty Refinement to iteratively refine scores for uncertain cases, enhancing detection reliability and accuracy across varied scenarios.
* We conduct detailed experiments on three widely used datasets, demonstrating that our method surpasses existing methods in both accuracy and robustness.
§ RELATED WORKS
§.§ Video Anomaly Detection
Video anomaly detection has long been a challenging task in the field of computer vision. Early research regarded it as an unsupervised learning task, more precisely, an out-of-distribution task, where the training process only involved normal samples <cit.>. However, these early methods mostly rely on manually crafted features and statistical models, often resulting in limited generalization and robustness. With the advancement of deep learning technology <cit.>, a wide array of new unsupervised learning methods have emerged in recent years <cit.>. These methods aim to more effectively learn normal behavior patterns in video content. Due to the difficulty in annotating abnormal video data, unsupervised video anomaly detection has received widespread research attention. However, it is challenging to cover all normal samples during the training phase, often leading to higher false positive rates. To address this challenge, researchers have proposed weakly supervised video anomaly detection methods <cit.>, primarily relying on the multiple instance learning framework to compensate for the absence of video-level labels. By striking a balance between annotation costs and detection performance, weakly supervised methods have shown considerable effectiveness. As research progresses, some datasets <cit.> have begun to provide frame-level annotations, opening up new possibilities for fully supervised video anomaly detection <cit.>, and allowing existing fully supervised models to achieve higher detection accuracy.
In response to the diverse application demands of video data, we propose a novel video anomaly detection method that is flexible and applicable to unsupervised, weakly supervised, and even fully supervised learning scenarios.
§.§ Human-Related Video Anomaly Detection
Detecting anomalies in human-related videos is particularly challenging due to the complexity and diversity of human actions. Most human-related video anomaly detection methods fall into the category of appearance-based approaches <cit.>. Although these representations are simple and straightforward, they rely solely on low-level visual features such as color, texture, and shape to identify anomalies. This results in low generalizability of the models, and they often fail to detect anomalies when encountering new or significantly changed scenes.
In recent years, innovative advancements have been made in video anomaly detection of human behavior using action-based methods <cit.>.
These methods leverage deep learning techniques to analyze the skeleton data extracted from videos to detect abnormal behavior. Using skeleton data as training data can mitigate or reduce the risk of privacy breaches. Additionally, human pose data can effectively reduce interference from noise and lighting factors. However, solely considering less informative skeletons without taking the scene into account can lead to critical issues. For example, the same action, such as a long jump, can be considered a normal event on a beach but an abnormal event on a road. This situation is common, where actions like running, dancing, or boxing can have different effects in different scenes.
§.§ Knowledge Graph
Knowledge graph is a complex graph-like data structure that organizes and represents knowledge to reveal relationships and connections between data <cit.>. It is widely applied in various fields, such as search engine optimization, recommendation systems, natural language processing, and social network analysis. Knowledge graphs effectively integrate and correlate vast amounts of information in these applications, providing users with more accurate and insightful results.
Our research work introduces a pioneering application of knowledge graphs in the field of video anomaly detection. In our approach, we decompose the video content into action and background elements and then utilize the knowledge graph to describe and understand the relationships between these elements. Within the knowledge graph, the relationships between scenes and actions are annotated as “normal" or “abnormal", offering an intuitive understanding and explanation of abnormal behaviors for the model.
§ PROPOSED METHOD
§.§ Method Overview
Our proposed method, DecoAD, as illustrated in Fig. <ref>, consists of four main components: Scene-Action Decoupling (Sec. <ref>), Relational Knowledge Mapper (Sec. <ref>), Scene-Action Integrator (Sec. <ref>), and Uncertainty Refinement (Sec. <ref>).
In Stage 1, we begin by decoupling a video clip into scenes and their associated skeleton-based human actions. Next, in Step1, we employ the Relational Knowledge Mapper to interweave these actions and scenes with those from different video clips. This involves constructing a detailed knowledge graph that captures the relationships between the scenes and skeleton-based actions, resulting in scene-action relations.
In Step2, the Scene-Action Integrator is utilized to generate initial anomaly scores. These scores indicate the likelihood of anomalies present in the video clips. Finally, in Stage 2, we incorporate Uncertainty Refinement (Step3) to ensure the Scene-Action Integrator iteratively processes video clips that are predicted with uncertain anomaly scores. This iterative process helps to obtain more accurate results.
It is worth noting that this paradigm is trained using both fully-supervised and weakly-supervised approaches, while unsupervised methods do not undergo iterative training.
§.§ Preliminaries
§.§.§ Scene-Action Interweaving
Building on existing human-related video anomaly detection methods <cit.>, it is essential to emphasize integrating scene context with human actions for more effective anomaly detection. Current approaches, whether appearance-based <cit.> or action-based <cit.>, can recognize abnormal human actions like running or fighting. However, they frequently fail to consider the context of the scenes and actions, which can be crucial for accurately identifying context-related anomalies.
Thus, as mentioned in Sec. <ref>, we propose the concept of “Scene-Action Interweaving" for the first time. By decoupling scenes and human actions in video clips and interweaving them with elements from other video clips, we explore and understand the complex relationships and interactions between these scenes and actions. By combining and analyzing diverse elements from different video clips, we form a comprehensive semantic network, thereby enhancing the detection of context-related anomalies.
§.§.§ Scene-Action Decoupling
The core concept of “Scene-Action Interweaving” involves exploring the complex relationships between scene contexts and human actions by integrating them with another video clip to capture comprehensive interactions.
To facilitate this, we first decouple scenes and their associated human actions within each video clip.
For the extraction of human actions, we employ a human skeleton extraction tool, similar to the methods used in existing human-related video anomaly detection research <cit.>. Specifically, we derive skeletal data a from the video clip V as a representation of actions[In this study, we treat skeletal data as equivalent to actions, as actions can be effectively represented by skeletons.], and simultaneously extract the positional information pos of each skeleton for subsequent operations, as shown in Fig. <ref>-❶:
⟨a, pos⟩ = SE( V),
where SE denotes the human skeleton extraction tool[AlphaPose <cit.> is used here; any state-of-the-art human skeleton extraction tool can be applied.].
If action information is not removed and scene data containing actions is used directly, the action information may be considered noise, increasing the complexity of the model's processing and making the detection results unstable[The performance of the model using scene data from which the action information has not been removed is shown in Table <ref> and Table <ref> in the “Ours^2" row.]. Additionally, since the scene data contains irrelevant action information, the model may learn unrelated features, affecting its generalization ability on new data.
To prevent action information from affecting detection results, we need to remove these elements from the scene. First, using the extracted positional information pos, we generate an action mask mask with an image segmentation tool, as shown in Fig. <ref>-❷. Then, utilizing this mask with an image inpainting tool <cit.>, we erase the actions from the video frames, thereby obtaining clear scene data s, as shown in Fig. <ref>-❸.
mask = ST( V, pos),
where ST denotes the image segmentation tool[Segment Anything Model (SAM) <cit.> is used here; any state-of-the-art image segmentation tool can be applied.].
s = IT( V, mask),
where IT denotes the image inpainting tool[Inpainting Anything Model (IAM) <cit.> is used here; any state-of-the-art image inpainting tool can be applied.].
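The three steps above can be summarized in a short schematic routine. The callables pose_estimator, segmenter and inpainter below are hypothetical stand-ins for AlphaPose, SAM and IAM; the actual interfaces of those tools differ, so this is only a sketch of the data flow.

```python
def decouple_clip(frames, pose_estimator, segmenter, inpainter):
    """Split a video clip into skeleton-based actions and action-free scenes."""
    actions, scenes = [], []
    for frame in frames:
        skeleton, pos = pose_estimator(frame)   # <a, pos> = SE(V)
        mask = segmenter(frame, pos)            # mask = ST(V, pos)
        scene = inpainter(frame, mask)          # s = IT(V, mask)
        actions.append(skeleton)
        scenes.append(scene)
    return actions, scenes
```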
Having successfully decoupled the video clips into scenes and associated human actions, we now proceed to examine the interrelationships between these elements.
§.§ Relational Knowledge Mapper
Existing methods mostly capture and represent relationships between data through implicit associations within the learning mechanisms of the model, rather than explicitly defining and representing these relationships. For example, deep learning models learn implicit relationships between input features during training through large amounts of data and labels. These implicit relationships are reflected in the model's weights and structure but are not explicitly represented. While this is effective for some simple detection tasks, it mainly relies on automatically learned data features during training, making it difficult to fully capture and utilize complex contextual information, especially when there is insufficient training data.
As shown in Figure <ref>-Stage 1, we propose an explicit association method, the Relational Knowledge Mapper (RKM), for “Relation Interweaving". This leverages the powerful representation capabilities of knowledge graphs to explicitly integrate high-level features, providing a deep understanding of the relationships between scenes and actions, which is crucial for improving the accuracy of anomaly detection. Additionally, this method has a flexible updating mechanism that can represent new relationships by adding new nodes and edges, thereby adapting to continuously changing data and environments.
Given the training sets, the construction of the RKM involves four processes — clustering, combining, constructing, and updating, as shown in Fig. <ref>.
§.§.§ Clustering
It is unrealistic to treat all data as independent information for constructing RKM. Clustering enables us to more effectively understand and categorize complex data structures. By grouping similar scenes and actions, clustering significantly enhances the manageability and accuracy of data analysis.
For static scenes, where only the people move and the scene remains unchanged (e.g., videos filmed with cameras at fixed angles), clustering is not strictly necessary: when the number of categories[Different scene and action types categorized based on video content.] for scenes and actions is already known, we can simply assign the scenes and actions to their categories and find the centers. In contrast, dynamic scenes, which feature a variable number of elements in motion, including both the scenes and the people (e.g., videos captured by handheld or moving cameras), require clustering (Fig. <ref>-❶) to unify similar scenes into the same scene category, thus simplifying scene complexity and reducing the number of scene categories. This process groups similar scenes and actions to ensure the data accurately reflect the situation, while also reducing the number of scene categories, making subsequent processing more efficient.
Given any decoupled scene and human action from the dataset, we first cluster these two elements using the K-means clustering algorithm to obtain the cluster centers of the human actions and scenes from normal and abnormal videos.
We set the number of cluster centers of human actions within normal and abnormal videos to θ_fn and θ_fa for each clip, based on the distribution statistics of the datasets[Ablation studies are shown in Table <ref>.]. The number of cluster centers of scenes is the same as the number of video scene categories.
By clustering actions and scenes, this method not only simplifies the complexity of the data but also significantly enhances processing efficiency and classification accuracy. Moreover, it strengthens the robustness and efficiency of the video analysis framework, enabling the model to perform anomaly detection more reliably when dealing with varied and complex video data.
§.§.§ Combining
Since clips from an abnormal video may also contain normal actions, we combine the cluster centers of these normal actions with the corresponding normal-action cluster centers from the normal videos (Fig. <ref>-❷).
This is achieved by calculating the cosine similarity (Sim) between these cluster centers, which is denoted by:
Sim(A^fn, A^fa) = (A^fn·A^fa) / (‖A^fn‖_2 ‖A^fa‖_2) ,
where A^fn and A^fa denote the cluster centers of the human actions from normal videos and abnormal videos, respectively, without considering if they are normal or abnormal actions. Here, · represents the dot product of the vectors, and ‖·‖_2 denotes the L2 norm of a vector.
Then, we combine the cluster centers of human actions from normal videos with those from abnormal videos: if the cosine similarity exceeds ρ[The ablation study is shown in Table <ref>-A.], the two cluster centers are merged.
These cluster centers serve as the template to guide the subsequent knowledge graph construction. Note that the cluster centers of the scenes do not need to be combined.
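A minimal sketch of this combining step is given below. The text does not specify how two matching centers are fused, so taking their element-wise mean is an assumption made purely for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def combine_action_centers(centers_normal, centers_abnormal, rho=0.95):
    combined = [c.copy() for c in centers_normal]
    for c_a in centers_abnormal:
        sims = [cosine_sim(c_n, c_a) for c_n in combined]
        j = int(np.argmax(sims))
        if sims[j] > rho:
            combined[j] = 0.5 * (combined[j] + c_a)  # merge with the closest normal-video center
        else:
            combined.append(c_a.copy())              # keep as a separate center
    return combined
```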
§.§.§ Constructing
In a normal video, the occurrence of an action is always considered normal, whereas in an abnormal video, the occurrence of an action is not necessarily abnormal; it could also be normal. Thus, as shown in Fig. <ref>-❸, to construct a detailed knowledge graph, we first use the scenes and human actions of normal videos and mark these relationships as “normal”. This serves as the initial knowledge graph.
Then, we incorporate abnormal videos' scenes and human actions into the initial knowledge graph. This is done by computing the cosine similarity between the human actions and the cluster centers in the initial knowledge graph, and based on this similarity, we assign a numerical identifier to the foreground. To achieve this process, we query the relationship between the scenes and human actions within the knowledge graph: if the relationship is “normal”, we maintain it as is; if there is no relevant relationship, we mark it as “abnormal".
Let G represent the initial knowledge graph consisting of a number of scene-action relationships, denoted by (S,A,R), where S and A are the cluster centers of the scenes and actions in normal videos, respectively, and R is the relation between scenes and actions of normal video clips:
G={(S,A,R)},
where R is defined as “normal” in the initial knowledge graph. We can update the knowledge graph based on the relationships between scenes and human actions from abnormal videos:
G^'={(S',A',R')},
where S' and A' denote the cluster centers of the scenes and actions contained within both normal and abnormal video clips. R' is the relationship between scenes and actions of normal and abnormal video clips. R' is defined as:
R' =
Normal,   if (S',A',R') ∈ G,
Abnormal, if (S',A',R') ∉ G.
By querying and adjusting the relationships between scenes and human actions in the knowledge graph, these relationships can be effectively maintained or labeled as “normal" or “abnormal", resulting in the final knowledge graph G^', providing support for Uncertainty Refinement (Sec. <ref>).
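Conceptually, the knowledge graph reduces to a lookup over scene-action pairs. The sketch below stores the pairs observed in normal videos (identified here by their nearest cluster-center indices) and returns "abnormal" for any pair not contained in the graph; it omits the graph-database machinery an actual implementation would use.

```python
class RelationalKnowledgeGraph:
    """Toy version of G': scene-action pairs from normal videos are 'normal'."""
    def __init__(self):
        self.normal_pairs = set()

    def add_normal(self, scene_id, action_id):
        self.normal_pairs.add((scene_id, action_id))

    def relation(self, scene_id, action_id):
        # R' = "normal" if the pair exists in G, "abnormal" otherwise
        return "normal" if (scene_id, action_id) in self.normal_pairs else "abnormal"
```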
§.§.§ Updating
If we want to add new video data that includes scenes and actions not previously included in the knowledge graph, we first need to construct a sub knowledge graph with the new data and then update the main knowledge graph, as illustrated in Fig. <ref>-❹. This updating process allows the knowledge graph to flexibly accommodate new data, providing the foundation for the system's continual learning and adaptation and enabling it to continuously adjust to evolving data and environments.
The updating process involves the dynamic generation of cluster centers based on the computation of cosine similarity between each newly added video data instance, e.g., scenes and actions, and all scenes and actions cluster centers in the previously constructed knowledge graph, then, determine the maximum cosine similarity obtained, as outlined below:
max_sim^a = Max( ⋃_i^n Sim(A^new_i,A')),
max_sim^s = Max( ⋃_i^n Sim(S^new_i,S')),
where A^new_i and S^new_i are the newly added i-th action and scene. Sim denotes the cosine similarity. Max is the maximization operation to obtain the maximal value of cosine similarity of actions (max_sim^a) and scenes (max_sim^s). ⋃_i^n is the union of the values of cosine similarity. n means the total number of newly-added actions or scenes.
Based on the maximum cosine similarity, we insert the newly added i-th action and scene as new cluster centers into A' and S', an operation denoted add:
A^new_i add→ A',  if max_sim^a ≤ μ_a,
S^new_i add→ S',  if max_sim^s ≤ μ_s,
where μ_a and μ_s are thresholds to determine the add operation. The ablation study of these two thresholds can be seen in Table <ref>.
It's important to note that this process makes no distinction between normal and abnormal video clips.
Conversely, when the maximal cosine similarity of actions (max_sim^a) or scenes (max_sim^s) exceeds the respective threshold, we combine the newly added i-th action or scene with the existing cluster centers in the constructed knowledge graph, an operation denoted combine:
A^new_i combine→ A',  if max_sim^a > μ_a,
S^new_i combine→ S',  if max_sim^s > μ_s.
Moreover, directly updating the main knowledge graph with all the relationships from the sub knowledge graph might lead to a decline or even failure in the model's detection capability, as there could be extreme or incorrect relationships in the sub knowledge graph. Therefore, we need to filter the relationships in the sub knowledge graph by calculating the cosine similarity between the nodes of the sub relationships and the nodes of the main relationships. If the sub relationship with the highest cosine similarity matches the main relationship, we proceed with the update; otherwise, we do not update the relationship. This ensures the safe updating of the main knowledge graph. It is important to note that all nodes in both the sub knowledge graph and the main knowledge graph come from S' and A'.
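The add/combine decision for newly arriving data can be sketched as follows. The thresholds correspond to μ_a = 0.45 for actions and μ_s = 0.90 for scenes reported in the ablation study, and the mean-combine rule is again an illustrative assumption rather than the method's specified update.

```python
import numpy as np

def update_centers(new_items, centers, mu):
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    for item in new_items:
        sims = [cos(c, item) for c in centers]
        j = int(np.argmax(sims))
        if sims[j] <= mu:
            centers.append(item.copy())              # add as a new cluster center
        else:
            centers[j] = 0.5 * (centers[j] + item)   # combine with the closest center
    return centers
```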
In this way, we complete the construction of the detailed knowledge graph for “Relation Interweaving” to obtain scene-action relations. Next, we will detail how to use “Feature Interweaving” to obtain initial anomaly scores.
§.§ Scene-Action Integrator
As shown in Fig. <ref>-Stage 1 (Step2), to enhance video anomaly detection involving human subjects, we introduce the Scene-Action Integrator (SAI) for “Feature Interweaving". This innovative approach scrutinizes individual motion and posture and comprehensively interprets the environmental context. SAI represents a multifaceted strategy that effectively bridges the gap between human actions and their surroundings, leveraging a deeper understanding of physical movements and environmental semantics.
To implement the SAI, we use the decoupled scenes (sc) and the isolated human actions (sk) from the video clips. We encode the scenes with a feature encoder (ℰ) and capture the semantic relationships within the skeleton features with a Graph Convolution Network (GCN) operation (𝒢). To understand temporal dynamics, we employ a Long Short-Term Memory (LSTM) network (ℒℳ). Position embeddings (𝒫ℰ) record the position of the actions within previous scenes, ensuring coherent integration and a reasonable action arrangement when fusing with another action. Concatenating these features through the operation 𝒞 yields the fused features f_concat, which are passed through the fully-connected layer (ℱ𝒞) to obtain the final anomaly scores (AS).
The whole process is denoted by:
AS = ℱ𝒞(f_concat),   f_concat = 𝒞(ℰ(sc), ℒℳ(𝒢(sk)), 𝒫ℰ(sk)).
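A compact PyTorch sketch of this computation is given below. The hidden sizes, the single-layer graph convolution, the use of the last LSTM state, the 2D position input and the sigmoid output are all illustrative assumptions; the text does not specify the architecture at this level of detail.

```python
import torch
import torch.nn as nn

class SceneActionIntegrator(nn.Module):
    def __init__(self, scene_dim=2048, joint_dim=2, hidden=128):
        super().__init__()
        self.scene_enc = nn.Sequential(nn.Linear(scene_dim, hidden), nn.ReLU())  # E
        self.gcn_w = nn.Linear(joint_dim, hidden)                                 # G (one GCN layer)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)                     # LM
        self.pos_emb = nn.Linear(2, hidden)                                       # PE
        self.fc = nn.Sequential(nn.Linear(3 * hidden, 1), nn.Sigmoid())           # FC

    def forward(self, scene_feat, skeleton, adj, position):
        # skeleton: (B, T, J, joint_dim); adj: (J, J) normalized joint adjacency
        g = torch.einsum("ij,btjd->btid", adj, self.gcn_w(skeleton).relu())
        g = g.mean(dim=2)                      # pool over joints -> (B, T, hidden)
        h, _ = self.lstm(g)                    # temporal dynamics
        f_concat = torch.cat(
            [self.scene_enc(scene_feat), h[:, -1], self.pos_emb(position)], dim=-1)
        return self.fc(f_concat).squeeze(-1)   # anomaly score AS per clip
```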
In training SAI, we employ the Multiple Instance Learning approach. As illustrated in the upper right of Fig. <ref>, consider a typical video composed of multiple clips, each of which is assigned an anomaly score. To determine the anomaly score for the entire video[We compile N clips from each normal video into a normal bag, while N clips from an abnormal video are grouped into an abnormal bag. Each clip contains 24 frames. The ablation study is shown in Table <ref>-B.], we select and average the K highest anomaly scores from these clips. This method is applied consistently to both normal and abnormal videos.
This procedure effectively increases the distinction between normal and abnormal videos by amplifying the difference in their respective anomaly scores. This approach is instrumental in enhancing the model's ability to differentiate between normal and abnormal content in video data.
§.§ Uncertainty Refinement
We propose Uncertainty Refinement (UR) to train our DecoAD iteratively in Stage 2[The ablation study is shown in Table <ref>-C.] (Step3). To achieve this goal, we set hyperparameters β_1 and β_2 as thresholds[The ablation study is shown in Table <ref>.] and construct three pools, i.e., the “normal pool”, the “abnormal pool” and the “pending pool”. Initially, the “normal pool” is populated with normal video clips. For abnormal video clips, we first combine all scenes (including their positional information) with the human actions and feed them into the model from Stage 1. In the first iteration (Stage 2), the abnormal video clips are then distributed over these three pools based on the anomaly scores and the relationships in the knowledge graph:
1) Video clips with anomaly scores below β_1 and marked as “normal" in the knowledge graph G^' are placed in the “normal pool", as normal training datas;
2) Video clips with anomaly scores above β_2 and marked as “abnormal" in the knowledge graph G^' are placed in the “abnormal pool", as abnormal training datas;
3) Video clips that do not meet the above two conditions are placed in the “pending pool", which is used for UR iteration training.
Then, we use the data from the “pending pool" for further iterative training of the model.
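The routing of abnormal-video clips into the three pools can be expressed as a simple filter, reusing the knowledge-graph sketch from above. The dictionary fields (score, scene_id, action_id) are illustrative placeholders for a clip's anomaly score and its nearest scene/action cluster centers.

```python
def assign_pools(clips, kg, beta1=0.4, beta2=0.8):
    normal_pool, abnormal_pool, pending_pool = [], [], []
    for clip in clips:
        rel = kg.relation(clip["scene_id"], clip["action_id"])
        if clip["score"] < beta1 and rel == "normal":
            normal_pool.append(clip)          # condition 1
        elif clip["score"] > beta2 and rel == "abnormal":
            abnormal_pool.append(clip)        # condition 2
        else:
            pending_pool.append(clip)         # condition 3, used for UR iterations
    return normal_pool, abnormal_pool, pending_pool
```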
§.§ Training Methodology
The method mentioned above is trained under fully-supervised and weakly-supervised conditions. To increase the generalization, our method can also be trained in an unsupervised learning manner. In the unsupervised learning environment, where the training phase involves only normal videos, which does not meet the requirements of Multiple Instance Learning, we instead employ a traditional auto-encoder <cit.> to tackle this challenge. As shown in Fig. <ref>, we utilize the original model (SAI) as the encoder and construct a corresponding decoder within this framework. By comparing the combined features of the input videos with the reconstructed video features, we can determine the presence of anomalies.
Inspired by the knowledge graph, we adopt a similar strategy of recombining all scenes and human actions. This is done to maximize the auto-encoder's grasp and learning of the features within normal video clips, thus enhancing its capability for detecting abnormal situations.
Note that the main differences between unsupervised and fully/weakly-supervised training methodology are two manifolds — 1) The Scene-Action Integrator (Sec. <ref>) in Stage 1 (Step2), where in unsupervised training, it changes to an auto-encoder; 2) The Relational Knowledge Mapper in Stage 1 (Step1) and UR in Stage 2 (Step3) are discarded from fully/weakly-supervised training.
§.§ Training Loss
Fully-supervised and Weakly-supervised Training.
In Stage 1 of both fully-supervised and weakly-supervised training, we calculate the Multiple Instance Learning Loss <cit.>, denoted as ℒ_mil, by comparing the anomaly scores of abnormal videos with those of normal videos. The overall process can be formulated as follows:
ℒ_mil =α_1 ×ℒ_rank + α_2 ×ℒ_focal,
where α_1 and α_2 are learnable weight parameters. ℒ_rank is the Ranking Loss <cit.>. ℒ_focal is the Focal Loss <cit.> incorporating with BCE Loss.
In Stage 2, to train our DecoAD iteratively under fully/weakly-supervised conditions, we employ the Binary Cross-Entropy loss (ℒ_bce) to increase the distance between the “normal pool" and the “abnormal pool". The total loss (ℒ_total) in this stage is formulated as:
ℒ_total = λ_1 ×ℒ_mil+λ_2 ×ℒ_bce.
where λ_1 and λ_2 are learnable weight parameters.
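For concreteness, the bag scoring and the ranking component of ℒ_mil can be sketched as below. The choice of K, the margin, and the omission of the focal and BCE terms are simplifications of this illustration rather than the exact training objective.

```python
import torch
import torch.nn.functional as F

def topk_bag_score(clip_scores, k=3):
    # average of the K highest clip scores in a bag (video)
    return clip_scores.topk(k, dim=-1).values.mean(dim=-1)

def mil_ranking_loss(abnormal_clip_scores, normal_clip_scores, k=3, margin=1.0):
    s_abn = topk_bag_score(abnormal_clip_scores, k)
    s_nor = topk_bag_score(normal_clip_scores, k)
    # push abnormal bag scores above normal ones by at least `margin`
    return F.relu(margin - s_abn + s_nor).mean()
```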
Unsupervised training.
For unsupervised training, we exclude the Relational Knowledge Mapper and the Uncertainty Refinement and modify the Scene-Action Integrator into an autoencoder (Fig. <ref>).
The total loss (ℒ_total) for unsupervised training, consisting of a reconstruction loss (ℒ_rec) and a regularization term (ℒ_reg), is formulated as:
ℒ_total = λ_1 ×ℒ_rec+λ_2 ×ℒ_reg.
where λ_1 and λ_2 are learnable weight parameters. The regularization term ℒ_reg is calculated using L2 regularization to prevent overfitting by penalizing large weights in the model.
§ EXPERIMENTS
§.§ Datasets
We evaluate our method on three datasets, namely NWPU Campus <cit.>, UBnormal <cit.>, and HR-ShanghaiTech <cit.>. According to the characteristics of each dataset, we employ UBnormal for fully/weakly-supervised training, NWPU Campus for weakly-supervised training, and NWPU Campus, UBnormal, and HR-ShanghaiTech for unsupervised training.
The NWPU Campus dataset includes 43 different scenes and 28 types of abnormal events, pioneering the study of scene-dependent anomalies. However, its training set only contains normal video data, which does not meet the requirements for weakly supervised video anomaly detection. Therefore, we reconfigured the training and test sets to accommodate weakly supervised models, but we still used the original dataset for unsupervised training. The UBnormal dataset comprises 29 scenes and 22 types of abnormal events, with detailed annotations that make it highly valuable for advanced anomaly detection research. HR-ShanghaiTech, a subset of the ShanghaiTech Campus dataset, focuses on human-related scenes, encompassing 13 scenes and 11 types of abnormal events.
§.§ Evaluation Metrics
In the field of video anomaly detection, the commonly used performance evaluation metric is the area under the Receiver Operating Characteristic curve (AUC), which intuitively reflects the performance of detection methods. However, due to the imbalance in anomaly detection tasks, AUC may exaggerate performance. Therefore, we introduce the area under the Precision-Recall curve (AP) as a supplementary metric. A higher AP value indicates a stronger ability of the model to detect abnormal events.
§.§ Implementation Details
Our work is implemented in PyTorch and experimented on NVIDIA RTX 4090 GPU. We employ the AlphaPose <cit.> and YOLOX <cit.> detectors to independently detect the human skeleton in each video frame. The network is optimized using the Adam optimizer (β _1=0.9, β _2=0.999) with an initial learning rate of 1× 10^-4 for all model training, which decreases by multiplying 0.1 for every 10 epochs. Our method utilizes a batch size of 256, and the training process runs for a total of 120 epochs, only costing 2.2 hours. Additionally, the size of our supervised model has been optimized to 1 Mb, while the unsupervised model size has been optimized to 12.3 Mb, with the frames per second (FPS) remaining around 24.
§.§ Component Evaluation
We conducted a comprehensive evaluation of our method's components, as shown in Table <ref>. To ensure successful code execution, we replaced the key components requiring verification with simpler operations. For example, we substituted the proposed components with a basic ResNet model <cit.> consisting of two fully connected layers. This served as our baseline, and the qualitative results are shown in line 1.
Lines 2-5 demonstrate the effectiveness of the Scene-Action Integrator (Sec. <ref>) in achieving “Feature Interweaving" between scenes and associated human actions. Comparing line 4 (our method) to line 11, where we removed LSTM and GCN, we observed a decrease in the area under the curve (AUC) from 78.4% to 72.2%. Additionally, we observed that line 3 (GCN) outperformed line 2 (LSTM), with AUC values of 64.2% (LSTM) and 71.6% (GCN), indicating that GCN is better at modeling action relationships, which is crucial for understanding human actions. These results underscore the importance of the Scene-Action Integrator in capturing the relationship between scenes and human actions, and highlight the effectiveness of GCN in this task.
Lines 6-9 provide evidence of the effectiveness of Uncertainty Refinement (Sec. <ref>). By comparing line 7 to line 8, we deduced that the iterative training process of the “pending" pool is more effective than using binary cross-entropy (BCE) loss for the “normal" pool and sub-“abnormal" pool, as indicated by the higher AUC. Moreover, removing the two constraints on anomaly score and scene-action relation (line 9) resulted in decreased AUC performance.
Comparing line 10 to line 11, our method incorporating the Relational Knowledge Mapper (Sec. <ref>, line 11) outperforms the method without it (line 10). This is because the Relational Knowledge Mapper enables a comprehensive understanding of the intricate interplay between different scenes and human actions by leveraging a detailed knowledge graph.
§.§ Performance Comparison
To demonstrate the effectiveness of our approach, we conducted a comprehensive comparison with state-of-the-art methods using three different training methodologies: fully-supervised, weakly-supervised, and unsupervised training.
For fully/weakly-supervised training, we selected the DeepMIL <cit.>, ST-GCN <cit.>, Shift-GCN <cit.>, RTFM <cit.>, MGFN <cit.>, BN-WVAD <cit.>, STG-NF <cit.>, and RTFM-BERT <cit.>. For unsupervised training, we evaluated the GEPC <cit.>, MPN <cit.>, LGN-Net <cit.>, MoCoDAD <cit.>, STG-NF <cit.>, CampusVAD <cit.>, TrajREC <cit.>, and GiCiSAD <cit.> methods.
The results we compared were obtained either from the source code or reported results provided by the respective authors.
The “Ours^1" is our method which does not consider scene information, meaning that the model only utilizes skeleton information for video anomaly detection and cannot perform Relational Knowledge Mapper (RKM) construction or Uncertainty Refinement (UR). The “Ours^2" is our method, but it uses scene data for training without removed action information, as detailed in Sec. <ref>. The “Ours*" comprehensively considers all information (skeleton, scene, and location).
§.§.§ Quantitative Comparisons with Fully/Weakly-supervised Training Methods
The quantitative comparison results with fully/weakly-supervised training methods are shown in Table <ref>. We found that “Ours^1" shows inferior performance compared to existing action-based methods such as STG-NF. STG-NF overlooks scene information, operating directly on the distribution of data and providing a more direct probabilistic interpretation, making it more sensitive to the detection of abnormal behaviors. Our proposed method “Ours*" outperforms all previous state-of-the-art approaches in fully/weakly-supervised training settings. Specifically, “Ours*" achieves an improvement of 0.2% and 3.1% in AUC values, and 0.6% and 3.8% in AP values over the best existing weakly-supervised methods on NWPU Campus and UBnormal, respectively. Moreover, it achieves an improvement of 1.1% in AUC value and 1.0% in AP value over the best existing fully-supervised method on UBnormal. These results demonstrate the effectiveness of our proposed method, which leverages the “Scene-Action Interweaving" approach to combine and analyze elements from different scenes and human actions in videos for enhanced anomaly detection.
§.§.§ Quantitative Comparisons with Unsupervised Training Methods
The quantitative comparison results with unsupervised training methods are shown in Table <ref>. We found that the “Ours^1" method performs worse than existing action-based methods such as TrajREC. The TrajREC method overlooks scene information and directly uses skeleton data, utilizing a self-supervised learning approach to enhance reinforcement learning effectiveness through positive and negative sample pairs. This strategy improves the model's ability to distinguish between normal and abnormal trajectory behaviors.
Meanwhile, we found that “Ours^2" performs worse than “Ours*" because the use of scene data containing action information interfered with the model's training, thereby affecting its performance. Our “Ours*" method also surpasses all previous state-of-the-art unsupervised training methods in NWPU Campus and UBnormal. “Ours*" achieves improvements of 0.2% and 1.7% in AUC values, and 4.7% and 0.5% in AP values over the best existing unsupervised method, MoCoDAD, on the NWPU Campus and UBnormal datasets, respectively. Additionally, Our method achieves suboptimal results on the HR-ShanghaiTech dataset. Although some methods have smaller model sizes and higher FPS values, their video anomaly detection capabilities are not excellent. Our method (both supervised and unsupervised), after balancing model size, FPS, and video anomaly detection capability, achieves the best performance.
§.§.§ Qualitative Results
Fig. <ref> demonstrates the superior results of our method (fully/weakly-supervised and unsupervised) in context-related situations. Our approach successfully and promptly detects these abnormal events by generating high anomaly scores for abnormal frames. F-3, W-3, W-5, and U-3 are four normal videos, for which our method generates low anomaly scores throughout the entire video (close to 0). It is worth mentioning that W-4 depicts a person riding a bicycle in a square, while W-5 shows a person riding a bicycle on a bike lane. The former is an abnormal event, while the latter is a normal event. Our model successfully identifies and detects this abnormal event in the scene without any false alarms, thanks to the concept of “Scene-Action Interweaving".
§.§ Ablation Study
§.§.§ Choices of the Number of Cluster Centers
Since the clustering operation in the Relational Knowledge Mapper (see Sec. <ref>) is meant to unify similar scenes into the same scene category, thus simplifying scene complexity and reducing the number of scene categories, the number of cluster centers should be neither too large nor too small. Thus, we conducted an ablation study on the UBnormal dataset to determine the proper number of cluster centers. As shown in Table <ref>, when the number of cluster centers is too small, it fails to distinguish effectively between very similar scenes or actions, reducing the efficacy of the model.
Conversely, when the number of cluster centers is too big, although a more refined data segmentation is possible, it may lead to model overfitting, where the features learned are too specific and fail to generalize to new data.
Thus, we set the number of cluster centers for human actions in normal and abnormal video segments to 15 and 25 respectively to achieve sufficient coverage and distinction.
§.§.§ Choices of β_1 and β_2 in Constructing Three Pools
Additionally, we conducted further experiments on the UBnormal dataset to explore the impact of different thresholds on the classification of the “pending pool" (see Sec. <ref>).
Proper threshold settings help the model generalize better to new and unseen data. Setting the thresholds too high or too low could lead to inappropriate sensitivity of the model to the data, thereby affecting its performance in practical applications. As shown in Table <ref>, when β_1 was set too low, normal video clips might be incorrectly classified as abnormal; conversely, if β_1 was too high, abnormal data might be wrongly classified as normal, thus reducing the overall performance of DecoAD.
Our DecoAD achieved its best performance when β_1 and β_2 were set to 0.4 and 0.8, respectively. This is primarily because these thresholds effectively differentiated between normal and abnormal data within the “pending pool".
§.§.§ Choices of Different Cosine Similarity Threshold ρ in Combining Two Cluster Centers
We conducted another ablation study on the UBnormal dataset to examine the effect of different cosine similarity thresholds on combining two cluster centers (Sec. <ref>). As shown in Table <ref>-A, the results indicated that when ρ was 0.95, the clustering result was closest to the true number of categories. Therefore, we set it as the cosine similarity threshold for DecoAD. Moreover, DecoAD achieved the best results on this basis, possibly because this threshold allowed the merged cluster centers to align more closely with the distribution of human actions in the actual dataset.
§.§.§ Choices of Different Segment Lengths of Video Clips
We notice that the frame rates of the datasets we compared vary. For instance, the UBnormal dataset is at 30 fps, while the HR-ShanghaiTech and NWPU Campus datasets are at 24 fps. To evaluate the effectiveness of different segment lengths of video clips (see Sec. <ref>), we conducted extensive experiments.
Segment length is a critical factor in determining the time window observed by the model when making decisions. If the segment length is too short, it may not capture enough behavior sequences, making it difficult to accurately understand the context of the behavior. If it's too long, it might introduce redundant information, reducing processing efficiency and complicating the extraction of key features. The right segment length helps maintain the continuity of behavior and avoids interference from irrelevant actions or background activities, enhancing the model's recognition capabilities.
As shown in Table <ref>-B, we found that setting the segment length to 24 frames offers the best performance, while settings of 12 or 30 frames led to significant performance declines. A 24-frame length strikes the perfect balance between the comprehensiveness of data and the complexity of processing, allowing the DecoAD model to achieve optimal performance on these specific datasets.
§.§.§ Effectiveness of the Number of Iterations
We also conducted comprehensive experiments to assess the effectiveness of iteration numbers in the uncertainty refinement process (see Sec. <ref>). As shown in Table <ref>-C, performance improved with an increase in iterations. However, after reaching ten iterations, the performance began to stabilize. This phenomenon could be due to insufficient data in the “pending pool", making it difficult to further effectively expand the “normal pool" and “abnormal pool", and the model may have already converged to its potential optimal solution.
§.§.§ Effectiveness of the Updating Thresholds
We further conducted an ablation study on the updating thresholds μ_a (for actions) and μ_s (for scenes) (see Sec. <ref>). To determine the updating thresholds, we carried out ablation experiments on the same dataset. As shown in Table <ref>, we found that when μ_a was set to 0.45, the number of action clusters was closest to the actual number of action categories. Similarly, when μ_s was set to 0.90, the number of scene clusters was closest to the actual number of scene categories, indicating that the updating effect was optimal at these thresholds.
§.§ In-depth Discussion of the Poor AP Performance
We found that the AP performance of all models (including weakly supervised and unsupervised) was poor on the NWPU Campus dataset. We analyzed all scenes in the dataset using weakly supervised and unsupervised models and visualized the performance of the top five and bottom five scenes in Fig. <ref>. A detailed analysis of the poorly performing scenes (as shown in Fig. <ref>) revealed that the anomalies in these scenes often involve severe occlusion, significant ambiguity, and a substantial presence of non-human-related anomalies. These factors lead to the models' inability to effectively detect the anomalies, thereby affecting the AP values.
§.§ Limitations
While the DecoAD approach shows promise in addressing the limitations of existing human-related video anomaly detection methods, through the analysis of the NWPU Campus dataset (see Sec. <ref>), we identified some potential limitations of this approach: 1) in cases where behaviors are highly similar, their semantic distance is minimal, making it difficult for the model to accurately distinguish between them. This difficulty is particularly evident when combined with scene context; 2) in complex scenarios involving occlusion and background distractions, there may be errors in skeleton extraction, such as obtaining only partial skeletons. This incomplete skeleton information may lead to incorrect predictions of anomaly scores because the missing semantic context can mislead the model; 3) when dealing with appearance anomalies, such as improper backpack positioning, the model, based on skeleton data for anomaly detection, is unable to recognize these anomalies; 4) for abnormal behaviors not directly involving humans, such as vehicles violating traffic rules, action-based methods are unable to detect them.
Finally, we found that the FPS (frames per second) of our model is relatively low. We further analyzed the time required for each key step in Table <ref> and discovered that the time consumed in processing a single video frame is primarily concentrated in the “Scene-Action Decoupling" part, mainly due to the excessive time overhead of skeleton extraction. As skeleton extraction technology advances, there is potential for further improvement in the FPS of our method.
§ CONCLUSION
This study introduces DecoAD, an innovative architecture for detecting anomalies in human-related videos. By employing the concept of “Scene-Action Interweaving", DecoAD surpasses existing methods in accuracy and robustness to detect context-related anomalies. The proposed methodology involves “Relation Interweaving”, “Feature Interweaving”, and “Uncertainty Refinement”, enabling a comprehensive understanding of the complex relationships between scenes, human actions, and video clips.
Extensive experiments on benchmark datasets demonstrate that DecoAD outperforms state-of-the-art approaches, achieving superior accuracy and robustness.
Future research could focus on challenges such as incomplete skeleton extraction and distinguishing between similar behaviors. Current skeleton extraction technologies often struggle with occlusions or fast movements, which directly impacts the effectiveness of anomaly detection models. Improving algorithms or introducing new technologies could enhance the accuracy of skeleton extraction. Additionally, differentiating behaviors that look similar but have different meanings is crucial. This can be achieved by optimizing feature extraction and classification algorithms, incorporating more contextual information, and utilizing multimodal data to improve model performance. These efforts will enhance the functionality and applicability of the model across a wider range of scenarios.
|
http://arxiv.org/abs/2409.03721v1 | 20240905172407 | Finite-size Effects in periodic EOM-CCSD for Ionization Energies and Electron Affinities: Convergence Rate and Extrapolation to the Thermodynamic Limit | [
"Evgeny Moerman",
"Alejandro Gallo",
"Andreas Irmler",
"Tobias Schäfer",
"Felix Hummel",
"Andreas Grüneis",
"Matthias Scheffler"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Institute for Theoretical Physics, TU Wien,
Wiedner Hauptstraße 8–10/136, 1040 Vienna, Austria
The NOMAD Laboratory at the FHI of the Max-Planck-Gesellschaft
[email protected]
§ ABSTRACT
We investigate the convergence of quasi-particle energies
for periodic systems to the thermodynamic limit using increasingly
large simulation cells corresponding to increasingly dense integration meshes in reciprocal space.
The quasi-particle energies are computed at the level of
equation-of-motion coupled-cluster theory for ionization
(IP-EOM-CC) and electron attachment processes (EA-EOM-CC).
By introducing an electronic correlation structure factor, the
expected asymptotic convergence rates for systems with different dimensionality
are formally derived.
We rigorously test these derivations through numerical simulations for
trans-Polyacetylene using IP/EA-EOM-CCSD and the G_0W_0@HF approximation,
which confirm the predicted convergence behavior.
Our findings provide a solid foundation for efficient schemes to correct
finite-size errors in IP/EA-EOM-CCSD calculations.
Finite-size Effects in periodic EOM-CCSD for Ionization Energies and Electron Affinities:
Convergence Rate and Extrapolation to the Thermodynamic Limit
Matthias Scheffler 0000-0002-1280-9873
September 9, 2024
=============================================================================================================================================================
§ INTRODUCTION
For most theoretical materials science studies, density functional
theory (DFT) is employed due to its favorable balance between
computational scaling and moderate accuracy. However, for many materials
properties the accurate inclusion of electronic exchange and correlation
effects is critical to achieve a qualitatively correct description for
scientifically and technologically important properties. In particular,
for the theoretical description
of electronic band gaps and band structures, most Kohn-Sham density functional
approximations (KS-DFAs) in use today are known to generally severely
underestimate
band gaps, oftentimes referred to as the band gap problem <cit.>.
Relaxing the condition of a multiplicative exchange-correlation potential of KS-DFT to allow
for a more flexible integral operator as it is the case in generalized KS-DFT resolves many of
the issues associated with the band gap problem <cit.>,
generally reducing the band gap error and rectifying the missing derivative discontinuity.
Apart from hybrid functionals, which are the most widely used representatives
of generalized KS-DFAs, the electronic structure method of choice for the
calculation of quasi-particle energies has become the
GW-approximation <cit.>, which takes the DFA
one-electron wave functions as a starting point but explicitly
accounts for electronic exchange and
correlation effects using a perturbation theory approach.
The GW-approximation yields significantly improved band gaps
compared to the most widely used approximate density functionals.
However, the most commonly used method based on the GW approximation, the
G_0W_0 method,
is known to have its limitations as well, the most glaring one being
the dependency of the band gap result on the underlying DFA, the so-called
starting point dependence <cit.>.
Other higher-order corrections to the GW approximation require
the inclusion of so-called vertex corrections in the self-energy and
the screened interaction W <cit.>.
Although certain improvements can be achieved
using vertex corrections, it remains challenging to systematically improve
upon the GW approximation, which already achieves an excellent trade-off
between computational cost and accuracy <cit.>.
A systematically improvable method is the equation-of-motion
coupled-cluster <cit.>(EOM-CC) framework.
EOM-CC theory, being the extension of ground-state CC theory to excited states, allows to
theoretically describe systems upon removal (IP-EOM-CC), addition
(EA-EOM-CC) or vertical excitation (EE-EOM-CC) of an electron. As the
electronic band gap is defined as the difference between the electron
affinity/attachment (EA) and the ionization potential (IP), it is possible
to obtain band gaps and entire band structures in the EOM-CC
framework <cit.>.
The relation between EOM-CC and GW band gaps was also investigated,
showing the differences and similarities between the diagrammatic contributions
and the results of both approaches <cit.>.
However, the major obstacle of ab initio calculations employing EOM-CC theory,
and CC theory in general, is the high computational cost and excessive
memory requirements. In addition to that, CC theory
explicitly incorporates long-range electronic exchange and correlation contributions,
making convergence to the bulk-limit significantly slower than is the case for DFAs.
Due to the
high computational cost of CC methods, it is even more challenging
than for DFA and GW calculations to converge to the tdl,
which is approached by increasing the number of particles N_part in the
simulation cell, N_part→∞, while keeping the particle density constant.
The GW approximation partly ameliorates this problem by adding corrections
for the long-range behavior of the dielectric function using
k· p-perturbation theory, which are often referred to as
head- and wing-corrections <cit.>.
In recent years,
studies of electronic band gaps via Quantum Monte Carlo (QMC)
methods have been published as well <cit.>,
where finite-size effects were also discussed as one of the major sources of
error. For these QMC band gaps, the N-electron ground-state
and the electronic state with one electron more (N+1) or less (N-1) were
determined separately and the energy difference of these states was computed
to obtain the band gap value. As the leading-order contribution to the
finite-size error, the interaction of the added particle (N+1) or
hole (N-1) with its periodic images was identified, which was corrected
by subtracting the screened Madelung term from the quasi-particle band gap.
Higher-order finite-size errors resulting from multipole moments of the
charged states were corrected by means of system size extrapolation.
Unfortunately, these approaches are not straight-forwardly applicable to
EOM-CC methods: The dielectric function, and therewith the head- and
wing-correction to it, is not directly accessible in the canonical
EOM-CCSD formulation. For that, linear response CC theory would be
necessary <cit.>. In contrast to the QMC ansatz,
CC methods do not use trial wavefunctions but generally rely on the
hf wave function as a starting point, which already
incorporates the Madelung term. Instead, one needs to resort to
extrapolation techniques
or more sophisticated
finite-size error estimations using the transition structure
factor <cit.>, which,
however, has been only
formulated for ground-state CC theory so far and is only a viable
technique if the correlation structure factor is represented in a
plane wave (PW) basis.
Using a very different approach for the simulation of crystalline systems
in the CC framework, it has been
demonstrated that via a cluster embedding approach accurate band gap
predictions can be achieved <cit.>. It must be stressed,
however, that the results discussed in the present work
assume periodic boundary conditions. A second point of departure is that
the work on band gaps from cluster embedding techniques utilized similarity transformed
EOM (STEOM) theory, while we in this work explore the EOM-CC method.
In this work, all CC and EOM-CC calculations were performed using a super cell approach.
Even though
it is entirely sufficient and – due to the exploitation of the translation symmetry –
computationally significantly more efficient to calculate the band gap of a perfect crystal
on a regular k-grid in reciprocal space using the primitive unit cell, a k-point aware,
block-sparse treatment of the CC and EOM-CC equations is not yet available in CC4S.
Even for DFAs, it is well known that super cell size convergence is an impractical
approach compared to a proper k-point summation.
The IP- and EA-EOM-CCSD methods exhibit a computational scaling of
N_o^3N_v^2N_k^3 and N_oN_v^4N_k^3 <cit.>, respectively, where N_o, N_v and N_k denote
the number of occupied and unoccupied orbitals per unit cell and the number of k-points.
The memory scaling is dominated by the T_2-amplitudes and is proportional to
N_o^2N_v^2N_k^3. If instead of N_k k-points, a super cell approach with
N_u=N_k unit cells is employed, a (N_uN_o)^3(N_uN_v)^2 = N_o^3N_v^2N_u^5
and a (N_uN_o)(N_uN_v)^4 = N_oN_v^4N_u^5 computational scaling for the
IP- and EA-EOM-CCSD method, respectively, is the consequence. Analogously,
the memory scaling becomes proportional to N_o^2N_v^2N_u^4.
Thus, a k-point based treatment of the EOM-CC equations
would result in a reduced computational scaling of a factor
N_k^2 and a reduction in memory scaling by a factor
of N_k.
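To make the scaling comparison concrete, the following short Python sketch evaluates these cost estimates for illustrative orbital and k-point counts; the numbers are made-up placeholders, not values from any calculation in this work.

# Illustrative cost comparison of the k-point and supercell formulations of
# IP/EA-EOM-CCSD; orbital and k-point counts below are made-up examples.
N_o, N_v = 5, 20            # occupied / virtual orbitals per unit cell
N_k = 8                     # k-points, i.e. unit cells in the equivalent supercell

cost_kpoint = N_o * N_v**4 * N_k**3              # EA-EOM-CCSD-like term, k-point form
cost_supercell = (N_k * N_o) * (N_k * N_v)**4    # same term in the supercell form
print(cost_supercell / cost_kpoint, N_k**2)      # ratio equals N_k^2

mem_kpoint = N_o**2 * N_v**2 * N_k**3            # T2-amplitude storage, k-point form
mem_supercell = (N_k * N_o)**2 * (N_k * N_v)**2  # supercell storage
print(mem_supercell / mem_kpoint, N_k)           # ratio equals N_k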
It must, however, be stressed that in accordance with
Bloch's theorem, the bvk cell of a M× K× L
super cell evaluated at a single k-point k_off
is identical to the bvk cell resulting from a
primitive unit cell being evaluated on a regular M× K× L
k-grid shifted by k_off. Hence, even though the super cell
approach is notably more computationally expensive, the numerical EOM-CCSD
results presented here are not affected by this choice. Still, as is well-known
for standard electronic-structure theory, only the k-summation approach
is practically feasible in order to achieve convergence.
For the EOM-CC methods, one is currently forced to perform
calculations of increasing system size and perform an extrapolation
to the tdl <cit.>, which requires knowledge
about the convergence rate of the correlation energy with respect to
the system size. While this convergence rate has been studied for the ground-state of the
3-dimensional case of a bulk solid <cit.>,
the formal convergence behavior for electronically excited states (charged or neutral) for any dimension is unknown.
In this work, we formally derive the analytical expression governing the convergence
rate of the band gap on the IP/EA-EOM-CCSD level of
theory. Subsequently, we verify the correctness of the derived
expression by applying it to the band gap of a single chain of
trans-Polyacetylene, demonstrating an efficient
extrapolation approach to the tdl.
Furthermore, by repeating the
calculations using
the G_0W_0 method with a hf starting point (G_0W_0@HF), we show that
the derived convergence rate is not specific to periodic EOM-CC theory
but can be used for other correlated methods as well.
§ THEORY
§.§ The EOM-CC theory
The EOM-CC framework is an extension to ground-state CC
theory, to compute properties of excited states. Depending on the
nature of the excitations, different EOM-CC methods are available: The
most prominent ones are EE-EOM-CC, for neutral electronic excitations,
IP-EOM-CC for ionization processes and EA-EOM-CC for electron
attachment processes. For the present work only the latter two methods
are of importance, as the fundamental band gap of a material is given
by the difference of its lowest ionization potential and electron affinity.
The starting point of the EOM-CC method is the ground-state CC
many-electron wave function |Ψ_0⟩, which is defined by an
exponential ansatz
|Ψ_0⟩ = e^T̂|Φ_0⟩,
with the Slater determinant |Φ_0⟩, which is the ground-state wave
function of a preceding mean-field calculation, usually hf.
T̂ = ∑_n^MT̂_n is the so-called cluster
operator, which can excite up to M electrons:
T̂ =T̂_1 + T̂_2 + ⋯+T̂_M
=∑_i,at^a_iâ^†_aâ_i + ∑_i,j,a,b1/4t^ab_ijâ^†_aâ^†_bâ_jâ_i + ⋯
+(1/M!)^2 ∑_i,j,⋯,a,b,⋯ t^ab⋯_ij⋯ â^†_aâ^†_b⋯â_jâ_i
with the coefficients t^a_i,
t^ab_ij,⋯ being again the
cluster amplitudes and â^†_p and â_q
the creation/annihilation operators
in second quantization, creating/annihilating an electron in orbital p/q.
The notation in Equation <ref> is
such that i,j,k
denote occupied and
a,b,c unoccupied spin orbitals.
If M is equal to the number of electrons N of the system, the
ansatz in Equation <ref> is exact. For reasons
of otherwise impractical computational scaling, T̂ is truncated
in practice.
For example, M=2 corresponds to CC theory
with single- and double excitations (CCSD),
which is also used in this work.
To compute the wave function of the n-th
excited state, the EOM-CC framework
starts from a linear ansatz
|Ψ_n⟩ = R̂_n|Ψ_0⟩
where R̂_n is an excitation operator
similar to the cluster operator T̂.
For IP-EOM-CC and EA-EOM-CC, R̂_n
assumes the form
R̂^IP_n =
∑_i r_i,nâ_i +
∑_ija r^a_ij,nâ^†_aâ_jâ_i + ⋯
R̂^EA_n =
∑_a r^a_nâ^†_a +
∑_iab r^ab_i,nâ^†_aâ^†_bâ_i + ⋯.
The r-coefficients contain the description of the wave function of
the n-th excited state and will be summarized under the excitation vectors
|R_n^IP⟩ or |R_n^EA⟩ .
The objective of the EOM-CC methodology
is to determine these coefficients. In analogy
to the T̂-operator, the operators defined in Equations <ref>
and <ref> contain all possible
processes to excite an N-electron system into an (N-1)-state and
an (N+1)-state, respectively. However, for the same reasons stated
before for ground-state CC, R̂^IP_n and
R̂^EA_n are truncated in practice. The most common
approximation for EOM-CC involves only the first two terms of Equations
<ref> and <ref> and is termed
IP-EOM-CCSD and EA-EOM-CCSD, respectively. Consequently, IP-EOM-CCSD
accounts only for 1-hole- and 2-hole-1-particle-excitation processes,
while EA-EOM-CCSD is restricted to 1-particle- and 2-particle-1-hole
processes. We stress that, here, the term excitation process is not restricted
to charge neutral processes but includes the removal and addition of
electrons.
In order to determine the excited states |R_n^IP/EA⟩ and
the related IP- or EA-energy, the eigenproblem
H̅|R_n^IP⟩ =
IP_n|R_n^IP⟩
H̅|R_n^EA⟩ =
EA_n|R_n^EA⟩
needs to be solved, where H̅ = e^-T̂Ĥe^T̂
is the similarity-transformed Hamiltonian, with
T̂ being the ground-state CC cluster operator.
Due to the
generally intractable size of H̅ in the chosen representations,
the eigenvalues and -vectors cannot be determined directly but need to be computed via an
indirect approach like Davidson's method <cit.>.
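As an illustration of such an indirect diagonalization, the following Python sketch implements a minimal Davidson-type solver for the lowest right eigenpair of a diagonally dominant (possibly non-symmetric) matrix. It is a generic textbook version with a simple diagonal preconditioner, not the production algorithm used in CC4S, and the test matrix is a random placeholder; it also assumes the targeted root is real.

import numpy as np
from scipy.linalg import eig

def davidson_lowest(H, n_iter=50, tol=1e-8):
    """Minimal Davidson sketch for the lowest right eigenpair of a
    diagonally dominant (possibly non-symmetric) matrix H."""
    n = H.shape[0]
    diag = np.diag(H)
    b = np.zeros(n)
    b[np.argmin(diag)] = 1.0            # start from the smallest diagonal element
    V = b[:, None]                      # orthonormal search-space basis
    for _ in range(n_iter):
        Hsub = V.T @ (H @ V)            # project H into the small subspace
        vals, vecs = eig(Hsub)
        k = np.argmin(vals.real)        # target the lowest root
        theta, y = vals[k].real, vecs[:, k].real
        x = V @ y                       # Ritz vector in the full space
        r = H @ x - theta * x           # residual
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta            # diagonal (Jacobi-like) preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        t -= V @ (V.T @ t)              # orthogonalize against the subspace
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.hstack([V, (t / norm)[:, None]])
    return theta, x

# quick test on a random, diagonally dominant, non-symmetric matrix
rng = np.random.default_rng(0)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
val, vec = davidson_lowest(A)
print(val, np.sort(np.linalg.eigvals(A).real)[0])   # should agree closely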
Once the excited states |R_n^IP/EA⟩ are obtained, the
corresponding IPs or EAs can be computed via
IP_n/EA_n =
⟨ R_n^IP/EA|H̅|R_n^IP/EA⟩/⟨ R_n^IP/EA|R_n^IP/EA⟩.
Note here, that even though
H̅ is for the given T̂ non-symmetric and therefore every eigenvalue
is associated with both a left and right eigenvector,
the calculation of the eigenvalue in
Eq. (<ref>) only requires the knowledge of one of the eigenvectors.
The working equations for H̅|R_n^IP/EA⟩ can be
found in, e.g. <cit.>.
Finally, note that while the theoretical framework of CC theory for the ground
and excited states was elucidated using spin orbitals, henceforth all quantities
will be expressed in terms of (spin-independent) spatial orbitals, as all
the results shown in this work have been obtained without the consideration of
spin degrees of freedom.
§.§ The EOM-CC structure factor
Let us now introduce an expression for the IPs and EAs
that makes it possible to analyze their dependence on the interelectronic
distance.
A closer look at the working
equations shows
that all contributions to the expectation value in
Eq. (<ref>) constitute contractions of cluster
amplitudes (see Eq. (<ref>)), r-coefficients
(Eqs. (<ref>)/(<ref>)) and Coulomb
integrals
V^pq_rs = ∬d𝐫d𝐫'
ϕ_p^*(𝐫)ϕ^*_q(𝐫')
ϕ_r(𝐫)ϕ_s(𝐫')/|𝐫-𝐫'|,
where ϕ_p(𝐫) denotes a single-particle
state of the underlying mean-field
theory (hf in this work).
By assuming bvk boundary conditions we can introduce the co-densities in reciprocal
space
C^p_r(𝐪) =
∫d𝐫ϕ^*_p(𝐫)ϕ_r(𝐫)e^-i𝐪𝐫.
One can rewrite Eq. (<ref>) as
V^pq_rs =∑_𝐪 w_𝐪C^r_p^*(𝐪)v(𝐪)C^q_s(𝐪).
The discrete 𝐪-vectors in Equations <ref> and <ref>
lie on a grid in reciprocal space and
v(𝐪) = 4π/|𝐪|^2 is
the Coulomb potential in reciprocal space for the three-dimensional case.
We stress that the 𝐪-mesh is used to represent the co-densities in Fourier space.
If the single-particle states are expressed using Bloch's theorem such that ϕ_s(𝐫)=e^i𝐤_s𝐫u_s(𝐫),
where u_s(𝐫) is a cell periodic function and 𝐤_s is a wave vector in the first Brillouin zone,
the 𝐪-vectors correspond to the difference between the corresponding
wave vectors and a reciprocal lattice vector of the periodic unit cell.
If the single-particle states are represented using a supercell approach, the 𝐪-vectors correspond to
reciprocal lattice vectors of the supercell. Both approaches are formally equivalent, although
Bloch's theorem enables a computationally more efficient implementation.
w_𝐪 is a weighting factor that depends on the employed integration grid and method.
It follows from Eq. (<ref>)
that C^q_s(0) is equivalent to the
overlap between the two involved single particle states
C^p_r(𝐪=0) = δ_p,r.
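For illustration, the decomposition of the Coulomb integrals into co-densities amounts to a single tensor contraction over the q-grid. The following Python sketch uses random placeholder co-densities and a toy q-grid with the q=0 element excluded (its treatment is discussed later); the precise index and conjugation convention of the equation above is glossed over here.

import numpy as np

rng = np.random.default_rng(1)
n_orb, n_q = 6, 500                        # toy sizes, not from the paper
q2 = rng.uniform(0.1, 4.0, n_q)            # |q|^2 values of the grid (q=0 excluded)
w = np.full(n_q, 1.0 / n_q)                # quadrature weights of the grid
v = 4.0 * np.pi / q2                       # 3D Coulomb kernel v(q) = 4*pi/|q|^2

C = (rng.standard_normal((n_orb, n_orb, n_q))
     + 1j * rng.standard_normal((n_orb, n_orb, n_q)))   # C[p, r, g] ~ C^p_r(q_g)

# V[p, q, r, s] = sum_g w_g * conj(C[p, r, g]) * v(g) * C[q, s, g]
V = np.einsum("g,g,prg,qsg->pqrs", w, v, C.conj(), C)
print(V.shape)                             # (6, 6, 6, 6)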
The terms contributing to the expectation value of
IP- and EA-EOM-CCSD in Equation <ref>, that is to the IP and EA, can be broadly
separated into two types: single-body and many-body
contributions.
We define single-body mean-field contributions to explicitly depend on the
Fock matrix elements f^p_q and only contain Coulomb integrals
implicitly by virtue of the Hartree and exchange contribution of the Fock matrix elements. A
representative single-body mean-field contribution to the
IP and EA is given by
IP_n = ⋯ -r^ij*_b,n f^l_jr^b_il,n+⋯
and
EA_n = ⋯ +r^j*_ab,n f^a_c r^cb_j,n+⋯.
We define IP^(1)_n/EA^(1)_n as the sum of all single-body mean-field terms
in the expression for IP_n/EA_n.
It should be noted that the commutator expansion of the similarity transformed Hamiltonian
also gives rise to effective single-body contributions originating from the contraction
of the two-body Coulomb operator with different orders of T̂_1 and T̂_2.
However, in this work we choose
to include only terms from the underlying mean-field Hamiltonian in our definition
of single-body terms.
It follows from our definition of IP^(1)_n/EA^(1)_n that all remaining
contributions to IP_n/EA_n are referred to as many-body terms and
depend explicitly on Coulomb integrals given by, for example,
IP_n = ⋯
-2r^i*_n V^kl_cd t^cd_il r_k,n
+⋯
and
EA_n = ⋯
-2r^*_a,n V^kl_cd t^ad_kl r^c_n
+⋯,
where r^i_n,r_a,n and r^a_ij,n, r^ab_i,n
denote the single- and double excitation coefficients
of the n-th IP- and EA-EOM-CCSD excitation operator |R_n^IP/EA⟩, respectively.
A diagrammatic representation of two exemplary many-body EA-EOM-CCSD contributions is shown in Figure <ref>.
By replacing all explicit occurrences of the Coulomb integrals V^pq_rs in the many-body contributions
by the decomposition in Equation (<ref>)
and by contracting over all particle- and hole-indices
involved in the evaluation of Eq. (<ref>),
one arrives at an expression for
the IP_n and EA_n expectation value as a
sum of IP^(1)_n/EA^(1)_n and a product
of the EOM-CC structure factor S_n^IP/EA(𝐪) with the Coulomb potential v(𝐪) as
shown in Eqs. (<ref>) and (<ref>)
IP_n = ∑_𝐪w_𝐪 S^IP_n(𝐪)v(𝐪)+IP^(1)_n
EA_n = ∑_𝐪w_𝐪 S^EA_n(𝐪)v(𝐪)+EA^(1)_n,
where the first term of Equation <ref> (Equation <ref>) contains all the many-body,
or “correlation” contributions to the IP_n (EA_n) as introduced in Equation <ref>,
while the second term IP^(1)_n (EA^(1)_n) denotes the sum of all single-body
mean-field contributions defined in Equation <ref>.
The above expression gives access to the dependence of the “correlation” energy
contribution to IP_n/EA_n on the momentum transfer vector 𝐪,
making it possible to perform a Fourier transform of S^IP/EA_n(𝐪)
and attribute contributions to IP/EA_n
on the inter-electronic distance.
In this work, however, we restrict ourselves to studying the dependency on
the momentum transfer vector 𝐪.
§.§ The EOM-CC structure factor at 𝐪=0
Since C^q_s(0)=δ_q,s, the contributions to
S^IP/EA_n(𝐪=0) can be further simplified.
In particular, one discovers that
S^IP/EA_n(𝐪=0) corresponds to the many-body character of the
IP or EA state. We define
the single-body character p_1 of an IP or EA state as
p_1,n = ∑_i |r_i,n|^2
and
p_1,n = ∑_a |r^a_n|^2,
respectively, where the r-coefficients
from Equation <ref> and <ref> are
used. In this definition, the single-body character p_1,n of the
n-th IP or EA state is given by the contribution of single-hole or
single-particle processes to the overall description of the (N+1)/(N-1) wave
function. For a normalized excited state |R_n^IP/EA⟩,
p_1,n lies between 0 and 1.
As we will show, for both IP- and EA-EOM-CCSD,
S^IP/EA_n(𝐪=0) =
-(1-p_1,n)
holds. Thus, the value of the
EOM-CC structure factor at 𝐪=0 is equal to the negative
many-body character of the excitation.
This results from Equation <ref>, so that the only terms of the EOM-CCSD
equations, that contribute to the value
of the EOM-CC structure factor at 𝐪=0 are those
which feature Coulomb integrals of type V^ab_cd,
V^ij_kl, V^ai_bj or V^ia_jb. That is,
the integrals must be representable as products of hole-hole or particle-particle co-densities following Equation
<ref>. In the case of IP-EOM-CCSD,
there are 5 terms which contribute to S(𝐪=0), which are
S^IP_n(𝐪=0)=
[ - r^ij*_b C^l*_j(𝐪) C^b_d(𝐪) r^d_il
- r^ij*_b C^k*_i(𝐪) C^b_d(𝐪) r^d_kj
+ r^ij*_b C^k*_i(𝐪) C^l_j(𝐪) r^b_kl
- r^ij*_b C^k*_i(𝐪) C^b_d(𝐪) t^d_j r_k
+ r^ij*_b C^k*_i(𝐪) C^l_j(𝐪) t^b_l r_k ]_𝐪=0.
By making use of the fact that at 𝐪=0, the co-densities reduce to overlap integrals between the single-particle states (see Equation <ref>),
Equation <ref> reduces to
S^IP_n(𝐪=0)=- r^ij*_br^b_ij
-r^ij*_b r^b_ij
+r^ij*_b r^b_ij
- r^ij*_bt^b_j r_i
+r^ij*_b t^b_j r_i
=- r^ij*_br^b_ij,
which – for a normalized EOM-CCSD excitation vector –
is equivalent to Equation <ref>. The derivation for the EA-EOM-CCSD structure factor is analogous.
We emphasize that the value for S^IP/EA_n(𝐪=0) is a direct
consequence of the chosen definition of IP^(1)_n/EA^(1)_n
and the eigenvalue in Eq. <ref>. Although this choice is
helpful for the analysis of finite-size errors, there also exist
alternative approaches, for example, using left eigenvectors as bra-states in
Eq. <ref>, resulting in a different value for p_1,n
and S^IP/EA_n(𝐪=0).
We expect that for the systems studied in this work, which exhibit a very small
many-body character, both approaches discussed above would yield very similar results.
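The bookkeeping behind the single-body character and the value of the structure factor at q=0 can be illustrated with a few lines of Python. The amplitude blocks below are random placeholders for the 1h and 2h1p parts of a normalized IP-EOM-CCSD vector, and possible symmetry factors in the doubles block are ignored; the dimensions are toy values.

import numpy as np

rng = np.random.default_rng(2)
n_occ, n_virt = 4, 10
r1 = rng.standard_normal(n_occ)                    # r_i  (1h block)
r2 = rng.standard_normal((n_virt, n_occ, n_occ))   # r^a_ij  (2h1p block)

norm = np.sqrt(np.sum(r1**2) + np.sum(r2**2))      # normalize |R> to 1
r1, r2 = r1 / norm, r2 / norm

p1 = np.sum(r1**2)                                 # single-body character p_1
S0 = -np.sum(r2**2)                                # value of S^IP(q=0)
print(p1, S0, -(1.0 - p1))                         # S0 equals -(1 - p_1)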
§.§ Long-range behavior of the EOM-CC structure factor
To determine the asymptotic behavior of the
EOM-CC structure factor in the long-wavelength limit, that is for |𝐪|→ 0,
one can perform a Taylor series expansion around 𝐪=0 by computing the
derivatives
of the EOM-CC structure factor with respect to 𝐪.
The only explicit 𝐪-dependence of that quantity comes from
the co-densities C^p_r(𝐪).
As laid out in Section
<ref>, v(𝐪) is not included in the expression for
S^IP/EA(𝐪), so that the products of co-densities
C^p*_r(𝐪)C^q_s(𝐪) are the only
𝐪-dependent quantities. We note that
we neglect the implicit 𝐪 dependence possibly introduced by the EOM-CC amplitudes
(r_i, r^a, r_ij^a, r_i^ab). This assumed 𝐪 independence of the
r-amplitudes for 𝐪→ 0 is justified by the fact that for systems with a band gap none of the quantities
that appear in the EOM-CC equations, which are the Fock matrix elements, the Coulomb integrals
and the ground-state T-amplitudes, depend on 𝐪 (assuming the crystal momentum is conserved).
While this is evident for the Fock matrix and the Coulomb integrals, the 𝐪 independence
of the T_2-amplitudes is necessary for the ground-state transition
structure factor to yield the correct 1/N_k convergence behavior
in the thermodynamic limit (𝐪→ 0) <cit.>. For the T_1 amplitudes,
which do not explicitely appear in the expression of the ground-state CC correlation energy or the
transition structure factor, a similiar argument as for the EOM-CC equations applies: Since the
T_1-amplitudes are determined iteratively from 𝐪 independent quantities, it is
reasonable to assume that they themselves do not exhibit any dependence on 𝐪 either.
By making use of the product rule and by realizing
that a co-density at 𝐪=0 is equivalent to the
overlap between the two involved single particle states as defined by Eq. (<ref>),
one finds that
∂/∂𝐪 C^p*_r(𝐪)C^q_s(𝐪)|_𝐪=0 =
δ_p,r∂/∂𝐪 C^q_s(𝐪)|_𝐪=0 +
δ_q,s∂/∂𝐪 C^p*_r(𝐪)|_𝐪=0
Eq. (<ref>) reveals, that
if the co-densities and by extension the Coulomb integrals in the
EOM-CCSD equations would only couple holes with particles, the first
derivative of the EOM-CC structure factor would vanish.
In passing we note that this is the situation
for the ground-state CC transition structure factor, which is why the
lowest 𝐪-order in the long-wavelength limit for insulators is
quadratic <cit.>. The EOM-CC working equations,
however, do also feature contractions with Coulomb integrals, which
couple holes with holes and particles with particles, so that formally
a linear contribution in 𝐪 to the EOM-CC structure factor must be considered,
because in general
∂/∂𝐪 C^q_s(𝐪)|_𝐪=0≠ 0.
By repeating this procedure for the second derivative,
one finds that the quadratic contribution
does in general not vanish either, so that we can approximate
the asymptotic behavior of the EOM-CC
structure factor up to second order
lim_|𝐪|→ 0 S^IP/EA(|𝐪|) ≈
S^IP/EA_n(𝐪=0) + α·|𝐪| + β·|𝐪|^2,
where α and β are constants and S^IP/EA_n(𝐪=0) is the value
of the EOM-CC structure factor at 𝐪=0 as discussed in Section <ref>.
We note that in Equation <ref>,
we assume S^IP/EA_n(𝐪) to be spherically symmetric, which – while not generally the case – simplifies the
following derivation and analysis.
Note, that the above derivation – while based on some assumptions– did not
depend in any way on the dimensionality of the system.
§.§ Convergence rate of IPs, EAs and band gaps
to the TDL
Based on the discussion from the previous sections, we now want to make a statement about the asymptotic
convergence behavior of computed IPs, EAs and band gaps to the tdl.
The tdl is approached for supercells with increasing size or increasingly dense k-mesh used to sample
the first Brillouin zone.
This corresponds to a continuous 𝐪-representation
of the EOM-CC structure factor such that Eqs. (<ref>) and
(<ref>) are given by
IP_n/EA_n = ∫d𝐪 S^IP/EA_n(𝐪)v(𝐪) + IP_n^(1)/EA_n^(1),
Finite size errors of, for example, IP/EA energies are defined by the difference between calculations
using finite system sizes and the tdl.
For the above equation this corresponds to the difference between a continuous integration and a
discrete sampling. Similar to the case of ground state CC calculations, we assume that the
values of the EOM-CC structure factor at the sampled 𝐪 points converge rapidly with
respect to the employed system size, which will be later verified numerically.
In other words, S^IP/EA_n(𝐪)=S^IP/EA-(TDL)_n(𝐪),
where S^IP/EA_n(𝐪) and S^IP/EA-(TDL)_n(𝐪) refer to the structure
factor from the finite system and tdl, respectively. Note that in this approach 𝐪 is restricted to a
discrete subset depending on the system size, whereas it is continuous in the tdl.
In this case, the largest contribution to the finite size error originates from the
employed finite simulation cell size and the neglect of long-range interactions in real space
corresponding to short 𝐪-vectors.
Under these assumptions it is reasonable to define the following estimate for the finite size
error of the correlation contribution Δ_FS^IP/EA
Δ_FS^IP/EA =
∫_Ω_q_mind𝐪 S^IP/EA_n(𝐪)v(𝐪).
The integral in Equation <ref> is carried out over
a sphere Ω_q_min centered at the Γ point with radius q_min. The radius
should be understood as a measure for the shortest reciprocal lattice vector of
the considered simulation cell.
For the 3-dimensional case, we evaluate the integral
in Equation <ref> using the Fourier
transform of the Coulomb potential in three dimensions which is
v(𝐪) = 4π/𝐪^2,
so that – using q=|𝐪| – a measure of the finite-size error is given by
Δ_FS^IP/EA ∝∫_Ω_q_mind𝐪 S^IP/EA_n(𝐪)v(𝐪)
∝∫_0^q_mind q 4π q^2
S^IP/EA_n(𝐪)v(𝐪)
∝∫_0^q_mindq [S^IP/EA_n(𝐪=0) + α· |𝐪| + β· |𝐪|^2]
∝
A· q_min +
B· q^2_min +
C· q^3_min,
where all constants resulting from the integration
are collected in the parameters A, B and C. By assuming spherical symmetry of the
EOM-CC structure factor, the derivation in Equation <ref>
is simplified substantially.
Equation <ref> is more practical if expressed as a function
of the total number of 𝐤-points N_k of a uniform k-mesh,
noting that
q_min∝ N_k^-1/d,
where d is the dimension of the system. By applying this
equality for the three-dimensional case, Equation <ref> becomes
Δ_FS^IP/EA ∝
A· N_k^-1/3 +
B· N_k^-2/3 +
C· N_k^-1.
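The integration leading to the expression above can be checked symbolically. The following sympy sketch reproduces the q_min, q_min^2 and q_min^3 terms and the corresponding N_k powers for the three-dimensional case; all prefactors are absorbed into the constants A, B and C in the text.

import sympy as sp

q, qmin, Nk = sp.symbols("q q_min N_k", positive=True)
S0, alpha, beta = sp.symbols("S_0 alpha beta", real=True)

# long-wavelength model of the structure factor and the 3D Coulomb kernel
S = S0 + alpha * q + beta * q**2
v = 4 * sp.pi / q**2

# spherical integration over |q| < q_min:  d^3q -> 4*pi*q^2 dq
delta_FS = sp.integrate(4 * sp.pi * q**2 * S * v, (q, 0, qmin))
print(sp.expand(delta_FS))
# -> 16*pi**2*(S_0*q_min + alpha*q_min**2/2 + beta*q_min**3/3)

# express via q_min ~ N_k**(-1/3) for a three-dimensional k-mesh
print(sp.expand(delta_FS.subs(qmin, Nk**sp.Rational(-1, 3))))
# -> terms proportional to N_k**(-1/3), N_k**(-2/3), N_k**(-1)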
§.§.§ Convergence rates for low dimensional systems
Since we assume that the asymptotic behavior of the EOM-CC structure factor is
independent of the dimension of the system, we can now make use of
Equation <ref> to derive the leading-order
rate of convergence to the tdl for one- and two-dimensional systems.
To this end we make use of the
Fourier transforms of the Coulomb potential in the two- and one-dimensional case <cit.>:
v^2D = 2π/|𝐪|
and
v^1D = -2γ_E + ln(1/𝐪^2),
respectively, where γ_E denotes the Euler constant.
Note that we employ here the lower-dimensional Fourier transforms
of the Coulomb potential even though realistic low-dimensional systems
generally have a finite extent in the directions perpendicular to
the one- or two-dimensional material.
In the long-wavelength limit, however, which is the focus of the present work,
these contributions become negligible in comparison to those of the
sheet or chain direction(s). This justifies the representation
of the Coulomb potential by its lower-dimensional Fourier transforms.
By repeating the same steps as for
the derivation of the 3-dimensional convergence rate,
we find that the finite-size error in a two-dimensional
system converges like
Δ_FS^IP/EA ∝
A· N_k^-1/2 +
B· N_k^-1 +
C· N_k^-3/2.
and for a one-dimensional system the convergence rate is
given by
Δ_FS^IP/EA(N_k) ∝ A· N_k^-1ln(N_k^-1) + B· N_k^-1
+ C· N_k^-2ln(N_k^-1) + D· N_k^-2
+ E· N_k^-3ln(N_k^-1) +
F· N_k^-3.
A summary of the derived convergence rates for
1, 2 and 3 dimensions is given in Table <ref>.
We reiterate that these convergence rates are estimated from the contributions
to IP_n/EA_n around 𝐪=0 in a sphere
with a radius decreasing as N_k increases.
It should also be noted that the actual convergence rate of numerically
computed IP_n/EA_n's depends on the chosen
treatment of the Coulomb singularity in the respective computer
implementation, which will be discussed in the following section.
§.§ Convergence rates and treatment of Coulomb singularity
The convergence rates derived in the previous section
assume a spherically truncated integration around 𝐪=0 to estimate
the finite size errors Δ_FS^IP/EA.
In practical ab initio calculations, however, there exist a variety
of treatments to approximate the integral around the Coulomb singularity at 𝐪=0,
which strongly influence the convergence rates as can already be observed for the exchange energy
contribution <cit.>.
We now briefly discuss the significance of the treatment of the Coulomb singularity for
the convergence rates derived in the previous section.
For the present study, we employ a Coulomb singularity treatment
that captures the contribution of the
S^IP/EA_n(𝐪=0) term to the
integral in Eq. <ref> exactly if
the value of S^IP/EA_n(𝐪=0) is converged with respect to system size.
As a consequence, the contributions to the finite size errors proportional to A given
in Table <ref> will already be accounted for and the expected
next-leading-order contribution to the finite size error will be proportional to B.
In particular, the plane wave basis set calculations of the present work compute
the average Coulomb kernel for the volume element at 𝐪=0 to estimate
its contribution to the integral <cit.>.
We note that there also exist other approaches in the literature that disregard the
Coulomb singularity contribution
and obtain EOM-CC band gaps by extrapolation.
Refs. <cit.>
disregard the 𝐪=0 contribution already in the underlying hf calculation.
As a consequence the hf band gap is underestimated
and converges only as N_k^-1/3 in three-dimensional systems.
The same applies to the finite-size error in EOM-CC theory as shown in the previous section.
It should be noted that the finite size errors from hf and EOM-CC partly cancel each other.
However, the extrapolation procedure still requires a careful checking and can lead to errors
that are difficult to control due to next-leading order contributions to the finite size errors.
The last piece of information required to
compute this integral is the
Fourier transform of the Coulomb potential 1/𝐫 in 1 dimension, which is
∫ d𝐫 e^i𝐪𝐫/𝐫 = -2γ_E + ln(1/𝐪^2),
γ_E being the Euler constant.
Inserting Equation <ref> and
Equation <ref> into Equation
<ref>, yields
Δ_FS^IP/EA =
∫_0^q_min
d𝐪 S^IP/EA_n(𝐪)v(𝐪)
∝∫_0^𝐪_min
dq [S^IP/EA_n(𝐪=0) + α· |𝐪| + β· |𝐪|^2][-2γ_E + ln(1/𝐪^2)]
∝ A· q_minln(q_min) +
B· q_min +
C· q_min^2ln(q_min) +
D· q_min^2 +
E· q_min^3ln(q_min) +
F· q_min^3,
where all constants resulting from the integration
are collected in the parameters A, B, C, D, E and F.
Equation <ref> is more practical if expressed as a function
of the number of 𝐤-points N_k along
the periodic dimension. Noting that
q_min∝1/N_k,
Equation <ref> becomes
Δ_FS^IP/EA(N_k) ∝ A·1/N_kln(1/N_k) +
B·1/N_k +
C·1/N_k^2ln(1/N_k) +
D·1/N_k^2 +
E·1/N_k^3ln(1/N_k) +
F·1/N_k^3.
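The corresponding one-dimensional integral can be checked in the same way. In the sympy sketch below, log(1/q^2) is rewritten as -2 log(q) for q > 0, and the result collects into exactly the q_min ln(q_min), q_min, q_min^2 ln(q_min), ... terms of the expression above.

import sympy as sp

q, qmin = sp.symbols("q q_min", positive=True)
S0, a, b = sp.symbols("S_0 a b", real=True)

# 1D Coulomb kernel: -2*gamma_E + log(1/q**2) = -2*gamma_E - 2*log(q) for q > 0
kernel = -2 * sp.EulerGamma - 2 * sp.log(q)
integrand = (S0 + a * q + b * q**2) * kernel

delta_FS = sp.integrate(integrand, (q, 0, qmin))
print(sp.expand(delta_FS))
# collects into terms ~ q_min*log(q_min), q_min, q_min**2*log(q_min), q_min**2,
#                       q_min**3*log(q_min), q_min**3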
§.§ Finite-size convergence for 1, 2 and 3 dimensions
Since the asymptotic behavior of the EOM-CC structure factor is
independent of the dimension of the system, we can make use of
Equation <ref> to derive the leading-order
rate of convergence to the tdl for 1, 2 and 3 dimensional systems.
By following the same steps as in Section <ref>,
we obtain these convergence rates, which are summarized in Table <ref>.
§ COMPUTATIONAL DETAILS
As the practical representation of the EOM-CC structure factor requires the utilization
of a plane wave basis, we employed the
Vienna Ab Initio Simulation Package (VASP) <cit.>
in combination with the CC4S software package, a periodic CC
code <cit.>, where the working equations of the IP- and
EA-EOM-CCSD methods and their respective structure factors were implemented.
In the case of the LiH EOM-CC and ground-state CC structure factors, which are discussed in
Section <ref>, a relatively small basis set with 3 virtual orbitals per occupied
orbital was employed in order to compute the huge super cell sizes necessary to resolve
the structure factor for very small momentum vectors.
However, we are concerned with the qualitative long-range description of the
EOM-CCSD structure factor, for which such a small plane-wave basis set is sufficient.
Similarly, the EOM-CC structure factors
for trans-Polyacetylene (tPA) were computed using a reduced basis set of 4 virtual orbitals
per occupied orbital. Due to the substantial amount of vacuum in the simulation cell of the tPA chain,
it is not practical to converge the results with respect to the plane-wave energy cut-off, which is why
VASP was only used to obtain the EOM-CC structure factor but not the electronic band gaps themselves.
Specifically for the LiH calculations, the and POTCARs were employed, using
a plane-wave energy cut-off of 300 eV. The tPA EOM-CCSD structure factor was computed with the and POTCARs and an energy cut-off of 300 eV.
To alleviate convergence
problems of the hf and the CC calculations associated with
the discretization of the Brillouin zone (BZ) for the highly anisotropic simulation cells of tPA, we employ the recently
developed improved sampling method of the Coulomb potential
in VASP <cit.>.
The numerical convergence tests for the band gaps of tPA
were performed using FHI-aims <cit.>,
which employs numeric atomic orbitals and which has been interfaced to CC4S as well <cit.>.
To determine the convergence to the tdl, the unit cell of a single tPA chain with 2 carbon atoms and 2 hydrogen atoms
with at least 80 Å of vacuum in each direction perpendicular to the chain was optimized
employing the B3LYP exchange-correlation
functional <cit.>, since this functional was shown
in previous studies <cit.> to reasonably reproduce the bond length alternation of tPA
observed in experiment <cit.>.
For the geometry optimization a tight
tier-2 basis set and a 1× 1× 20 𝐤-grid was used.
Due to the computational overhead associated with additional vacuum
when computed using a PW basis, the vacuum around the tPA chain was reduced to at least 7 Å
for calculations with VASP.
The CC and GW calculations involving FHI-aims were performed using the loc-NAO-VCC-nZ basis sets
developed by Zhang et al. <cit.>.
The basis set convergence for the band gap of tPA was
investigated on the EOM-CCSD level of theory, the results of which are shown in Table
<ref>.
As Table <ref> conveys, the loc-NAO-VCC-2Z basis set allows the band gap to be converged to within
157 meV for a 1× 1× 6 𝐤-mesh and 141 meV for a 1× 1× 8 𝐤-mesh, essentially independently of the size of the BvK cell. We consider this sufficient for the present application, as the finite-size error rather than the basis-set incompleteness error is at the center of this study. Hence, all calculations
involving FHI-aims will be performed using the loc-NAO-VCC-2Z basis set.
In order to approximate the contribution from the singularity of the Coulomb potential,
a variation of the spherical truncation approach pioneered by Spencer and Alavi <cit.> is employed
in FHI-aims. In this cut-Coulomb approximation the long-ranged part of the Coulomb potential
is made to decay fast to 0 beyond a set radius r_cut by multiplying the original 1/r
potential with a complementary error function <cit.>
v^cut-Coulomb =
1/2|𝐫|erfc[η(|𝐫| - r_cut)]
with η as the inverse decay width.
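A direct transcription of this truncated potential reads as follows; the parameter values in the quick check are arbitrary placeholders and not those used in FHI-aims.

import numpy as np
from scipy.special import erfc

def cut_coulomb(r, r_cut, eta):
    """Real-space cut-Coulomb potential: erfc(eta*(r - r_cut)) / (2*r).
    For r << r_cut the erfc factor approaches 2 and the bare 1/r potential is
    recovered; beyond r_cut the potential decays rapidly to zero."""
    return erfc(eta * (r - r_cut)) / (2.0 * r)

r = np.array([0.5, 1.0, 5.0, 10.0, 20.0])    # distances in arbitrary units
print(cut_coulomb(r, r_cut=8.0, eta=1.0))    # truncated potential
print(1.0 / r)                               # bare Coulomb for comparison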
§ RESULTS
To determine the validity of the convergence rate of the finite-size error
for IP- and EA-EOM-CCSD, the findings will be presented as follows:
First, the fundamental properties of the EOM-CC structure factor that have been
mentioned in part already in Section <ref>, will be elucidated
by looking at the structure factor of the 3D LiH primitive cell repeated
periodically in one direction. Following that qualitative discussion,
the previously derived convergence rates of the EOM-CC band gap finite-size error will
be applied to a one-dimensional system, a chain of trans-Polyacetylene. Finally, the
analytical extrapolation expression will
be applied to the finite-size convergence of
the G_0W_0 band gap, testing its validity outside of EOM-CC theory.
§.§ The EOM-CC structure factor of LiH in one dimension
In order to relate the herein newly derived EOM-CC structure factor to the
ground-state analogue, the
transition structure factor <cit.>, we
want to start by computing these two quantities for the LiH chain
using supercells, which consist of increasingly many unit cells in a single direction.
Since the purpose of this section is
a direct comparison of the correlation structure factors of the two methods, only the Γ-point
was sampled in reciprocal space.
As noted in Section <ref>, the derived behavior
of S^IP/EA(q) for q→ 0 (and equally so for the
transition structure factor) should hold independently of the
system's dimension.
We shall start by reviewing the properties of the ground-state CCSD transition
structure factor, which is shown in Figure <ref> for
the LiH chain for super cell sizes of 1× 1× 10,
1× 1× 50 and 1× 1× 100. Subsequently, a comparison to the EOM-CC
structure factor will be drawn.
In analogy to the EOM-CC structure factor, which upon contraction with the Coulomb potential
and integration over the reciprocal space yields
the correlation contribution to the charged quasi-particle energies (see Equation <ref>),
the ground-state correlation energy E^corr_0 can be obtained in the same way by means
of the transition structure factor S(𝐪)
E^corr_0 = ∫d𝐪 S(𝐪)v(𝐪).
Figure <ref> shows the medium- to long-range portion of
the transition structure factor in the direction of the chain, that is the z-direction. Even though,
S(𝐪) can be computed for longer 𝐪-vectors, corresponding to short-range
processes in real space, this is not relevant to the present discussion of the finite-size convergence.
The vertical lines in Figure <ref> show the magnitude of the smallest 𝐪-vector
that can be resolved in the respective super cell. That minimal 𝐪-vector is given by the minimal distance
of two 𝐤-points or (in a super cell formulation) by the smallest reciprocal lattice vector and corresponds
to the biggest real-space distance captured by the bvk cell. As can be observed in Figure <ref>,
the bigger the super cell size of LiH, the smaller this minimal 𝐪-vector becomes and
the more long-range correlation information
the transition structure factor contains.
The transition structure factor in Figure <ref> features
a minimum, at some material- and state-specific distance in reciprocal space.
This distance can be interpreted as a characteristic distance for the electronic correlation, as
we expect the
contribution of the CCSD transition structure factor to the correlation energy to be maximal in the vicinity
of its extremum.
In the specific case of the LiH chain, one finds that the minimum is at 1.4 Å^-1 for
the ground state correlation energy, which corresponds to approximately a 1× 1× 2 supercell.
Another fundamental property of the transition structure factor of the ground-state is the fact
that for |𝐪|→0, S(𝐪)→0 with zero slope as is apparent in
Figure <ref>.
This directly results from the sum-rule of the pair correlation function, which is the Fourier transform
of the transition structure factor.
The EOM-CC structure factor for charged excitations exhibits qualitative differences to the ground-state
transition structure factor. The structure factor for the first IP and EA in the EOM-CCSD framework for the
LiH chain is shown in Figure <ref>.
The EOM-CC structure factors feature a minimum as well, but it appears at
a significantly smaller |𝐪| value of 0.5 Å^-1,
which corresponds approximately to a 1× 1× 6 super cell. This reflects the
longer range of correlation effects in the EOM-CC case in comparison to the ground-state.
This comparison
clarifies that the problem of reaching the tdl for charged quasi-particle energies is substantially
more difficult than for the ground state case.
Another point of departure between ground state CC
and EOM-CC theory, is the value of the structure factor at 𝐪=0.
While in the case of the ground state the S(𝐪=0) value is always 0,
in the case of EOM-CC we have previously derived that S(𝐪=0) is a finite negative value in our
formulation corresponding to the many-body character of the respective excitation.
As Figure <ref> demonstrates, all the properties discussed for the EOM-CC structure factor apply in a similar fashion to the IP-, the EA-,
and the IP+EA case.
Let us stress once more that the IP+EA case corresponds to the electronic band gap in the presently used convention. Note, however, that the EOM-CC structure factor that we have introduced
in this work only captures the correlation contribution to the IP and EA quasi-particle energies and to
the band gap, which are given by the many-body contributions of the EOM-CC equations as discussed
in Section <ref>. In order to obtain the full excitation energy, we compute and converge
the generally sizable single-body contributions (compare Equation <ref>) to the EOM-CCSD expectation value separately.
With respect to the previously derived asymptotic behavior of the EOM-CC structure factor for 𝐪→ 0, it becomes immediately apparent from the LiH EOM-CC structure factors, particularly for the 1× 1×
100 cell, that they do not exhibit significant linear behavior at 𝐪=0.
Furthermore, S(𝐪=0) is relatively small.
Therefore, one can conclude for the anisotropic LiH super cell,
that both the constant |𝐪|^0 and the linear |𝐪|^1 contributions of the EOM-CC structure
factor to Δ_FS^IP/EA are negligible.
§.§ The trans-Polyacetylene chain
§.§.§ The EOM-CC structure factor of tPA
In order to verify the applicability
of the convergence rates, which were derived in the
long-wavelength limit (|𝐪|→ 0), we now investigate
the IP- and EA-EOM-CCSD structure factors of an actual one-dimensional system, the tPA chain
(note, that the previosuly studied LiH system, is a bulk material, however only extended in one direction).
For that purpose, super cell sizes
of up to 1× 1× 32 were computed via IP- and EA-EOM-CCSD.
The IP- and EA-EOM-CCSD structure factors, and the structure factor corresponding to the correlation
energy contribution to the band gap, that is the sum of IP
and EA in the present convention, are illustrated in Figure <ref>.
Figure <ref> shows that
for the 1× 1× 32 super cell of the tPA chain, the minimum
of the EOM-CC structure factor is resolved and two data points to the
left of the minimum are obtained.
As discussed previously for the case of the LiH EOM-CC structure factor,
the analytical extrapolation expressions derived
for the asymptotic limit (|𝐪→ 0|) are strictly speaking only applicable for
system sizes for which the minimum of the structure factor
can be resolved. By applying a cubic spline interpolation
to the EOM-CC structure factors in Figure <ref>,
one finds that the minimum is located near 0.2 Å^-1, which corresponds
to approximately a 1× 1× 13 bvk cell. It is, however, important to note that the data point directly
to the right of the minimum in Figure <ref>, which corresponds to a 1× 1× 10 bvk cell,
is in close proximity to the minimum itself, so that the quadrature error resulting from extrapolation of
system sizes as small as 10 primitive cells in one direction can be assumed to be small.
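The procedure of locating the structure-factor minimum and translating it into a required cell size can be sketched as follows; the data points below are a synthetic stand-in for the computed S(q), and the assumed primitive-cell length is only illustrative.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# synthetic stand-in for a computed EOM-CC structure factor S(q) on a discrete q-grid
q = np.linspace(0.05, 1.5, 30)                    # Å^-1
S = -0.8 * q * np.exp(-q / 0.2)                   # toy curve with a minimum near 0.2 Å^-1

spline = CubicSpline(q, S)
res = minimize_scalar(spline, bounds=(q[0], q[-1]), method="bounded")
q_star = res.x
print(f"minimum of S(q) near q* = {q_star:.2f} Å^-1")

# translate q* into the BvK cell length needed to resolve it (q_min <= q*)
L_needed = 2.0 * np.pi / q_star                   # required cell length in Å
a_prim = 2.46                                     # assumed primitive-cell length (illustrative)
print(f"needed cell length ~ {L_needed:.1f} Å (~ {L_needed / a_prim:.0f} primitive cells)")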
Compared to the characteristic distance of 0.5 Å^-1 previously found for LiH,
this suggests that the length scale of the electronic correlation is more than doubled compared to LiH.
One possible explanation for that is the presence of long-ranged dispersion interactions, which are expected to be
more prominent in an organic compound like tPA than in an ionic compound with small ions like LiH.
Note that in the case of the IP-EOM-CCSD structure factor in Figure <ref>, we observe
a second, shallow minimum at 0.8 Å^-1. Since this feature is located in the medium- to short
range region of the EOM-CC structure factor, this is most certainly an artefact of the comparably small PW
basis set, which had to be used to compute the 1× 1× 32 super cell of tPA. As, however, we are interested
in the long-range characteristics of the electronic correlation, this is not expected to have any effect on the
properties of the tPA EOM-CCSD structure factors discussed so far.
§.§.§ Convergence of the correlation contribution
Whether the derived convergence rate in Equation
<ref> does indeed accurately model the
convergence of one-dimensional systems to the tdl will be determined by
studying the convergence of the numerically determined band gap of trans-Polyacetylene (tPA).
To determine whether some order of q of
S^IP/EA(q) in Equation <ref> is
dominant and whether the derived convergence rate in Equation
<ref> can be meaningfully reduced to a
single leading-order contribution, a set of convergence rate models
is applied to the finite-size convergence of tPA in Figure
<ref>. In particular, the full derived
model in Equation <ref> is fitted to the
calculated band gaps as a function of N_k along the direction of the
tPA chain. This model is compared to three other convergence rates,
which are the leading-order contributions originating from the q^0,
q^1 and q^2 contribution to the long-wavelength limit of the EOM-CC
structure factor, which are AN_k^-1ln(N_k^-1),
AN_k^-2ln(N_k^-1) and AN_k^-3ln(N_k^-1),
respectively, where A is determined by fitting to the numerical data.
To prioritize the accurate description of the finite-size
convergence in the long-range limit, a modified least-square
cost function was used for fitting, where a data point corresponding
to a system size of N_k was weighted with
a factor of N_k^2.
In agreement with the computed
EOM-CC structure factor of tPA shown in Figure <ref>,
it was found that the derived convergence rates describe
the numerical data in Figure <ref>
well only for system sizes that exceed some minimal number
of 𝐤-points N_k, so that the models in
Figure <ref> are fitted to the
EOM-CCSD band gaps for N_k≥ 8. To remain consistent, all
following fits to FHI-aims band gaps, that is
Figure <ref>,
<ref> and
<ref>, are shown for
N_k≥ 4, but only fitted to data points with N_k≥ 8.
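The weighted fits themselves amount to a few lines of Python. In the sketch below the band-gap values are synthetic stand-ins for the computed EOM-CCSD data, and the N_k^2 weighting is realized through the sigma argument of curve_fit.

import numpy as np
from scipy.optimize import curve_fit

# leading-order 1D models for the finite-size behavior of the band gap
def model_q2(nk, gap_tdl, A):           # from the q^2 term of S(q)
    return gap_tdl + A * nk**-3 * np.log(1.0 / nk)

def model_q1(nk, gap_tdl, A):           # from the q^1 term of S(q)
    return gap_tdl + A * nk**-2 * np.log(1.0 / nk)

# synthetic stand-in for EOM-CCSD band gaps (eV) on 1x1xN_k meshes, N_k >= 8
nk = np.array([8, 10, 12, 16, 20, 24], dtype=float)
gaps = 5.0 - 40.0 * nk**-3 * np.log(1.0 / nk) \
       + 0.001 * np.random.default_rng(3).standard_normal(nk.size)

# weight each point by N_k^2, i.e. sigma ~ 1/N_k^2, to emphasize large N_k
sigma = 1.0 / nk**2
for model in (model_q2, model_q1):
    popt, _ = curve_fit(model, nk, gaps, sigma=sigma, p0=(5.0, 1.0))
    print(model.__name__, "-> extrapolated TDL gap:", round(popt[0], 3), "eV")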
Undeniably, the q^0-order contribution to Equation
<ref> on its own, given by A N_k^-1lnN_k^-1,
fails entirely to model
the computed data in Figure <ref>,
underestimating the band gap in the tdl by almost 500 meV
relative to the full model. This lends credence to the aforementioned
estimation that for quasi-particle excitations with a minor many-body
character, these q^0 contributions would only play a minor role in
the description of the convergence to the tdl and is also consistent
with the small contribution of the |𝐪|^0 term of the
EOM-CC structure factor to the correlation energy in the
anisotropic LiH cell in Section <ref>. As a matter of fact,
the many-body contribution of the IP and the EA in tPA were found
to be both about 5%. Moreover, we stress that even if this contribution was
large, the employed Coulomb singularity treatment in FHI-aims would capture
this contribution to the finite-size error.
Similarly, the model corresponding to an EOM-CC structure factor linear
in 𝐪, given by AN_k^-2ln(N_k^-1),
converges visibly slower than the calculated data points
leading to an underestimate of the band gap of over 130 meV
compared to the full model.
This, in combination with the observation that the LiH EOM-CC structure factor
does not seem to exhibit any linear behavior for q→ 0, hints at the
possibility that in some systems – or possibly in general – there is no
or only a negligible linear contribution to S^IP/EA(𝐪) for q→ 0.
Instead, at least in the case of tPA, the q^2 contribution to the EOM-CC
structure factor appears to dictate the tdl convergence of the band gap.
Even though the AN_k^-3ln(N_k^-1) model notably deviates from the calculated
data for smaller N_k, it matches perfectly for N_k=8 to N_k=24,
lying virtually on top of the full model and yielding a band gap that
is underestimated by 43 meV.
This range of applicability is
slightly larger than the prior investigation of the EOM-CC structure factors
for tPA would suggest, where a minimum of 10 primitive cells or 𝐤-points
in one direction was found to be necessary. One likely explanation for that
minor disagreement is the utilization of different Coulomb potential approximations
in both codes as detailed in Section <ref>.
To conclusively verify the precision and accuracy of both the EOM-CCSD band gap data obtained via FHI-aims
and the herein proposed extrapolation approach,
the EOM-CCSD band gap calculations for supercells of size up to 1× 1× 16 were repeated using
one of the structures investigated by Windom et al.<cit.>. In that study, the fundamental
band gap of different tPA geometries was investigated on the
EOM-CCSD level of theory employing increasingly long
tPA oligomers with a cc-pVTZ basis set.
For the comparison, the central C2H2 unit of one of these structures,
in the original paper denoted by tPA3,
was extracted and treated under periodic boundary conditions
in the same manner as the B3LYP-optimized structure studied
so far.
To ensure a fair comparison, the calculations
were performed employing the loc-NAO-VCC-3Z basis, the results
of which are shown in Figure <ref>.
Using the previously identified AN_k^-3ln(N_k^-1)
leading-order model, an EOM-CCSD band gap of 4.914 eV
in the bulk-limit is found.
This is in reasonable agreement
with the estimated tdl value of 5.07 eV of the original study. The
remaining deviation of roughly 160 meV is likely the
result of comparably small oligomer sizes, namely 6 to 9 C2H2 units
that were used for the extrapolation. Also, the fact that
neither of the results are
converged with respect to the basis set (see Table <ref>) makes an exact
comparison more difficult.
§.§.§ Convergence of the mean-field contributions
What has been neglected so far, however, is the potential influence of the single-body contributions
from the underlying hf calculations on the overall
convergence of the band gap. Most importantly,
the convergence of the hf exchange with
respect to system size needs to be considered.
So far, the hf calculations have been performed with the same system sizes as were subsequently
used for the CC calculations, so that the total finite-size error of the final EOM-CCSD band gap contained
contributions from both the mean-field calculation and the correlated CC calculation.
To allow for a more systematic study, which only includes long-range correlation effects, the EOM-CCSD band gap results will be recomputed using converged hf single-particle states and eigenenergies from supercells of size 1× 1× 48 or more. Via this down-sampling approach, one can independently analyze the
convergence of the correlation energy of the EOM-CCSD band gaps. The finite-size convergence including the leading-order fits as they have been
shown in Figure <ref> is presented in Figure <ref>
for the down-sampled data.
The fitted leading-order models in Figure <ref> suggest
that after the single-body contributions, particularly the hf exchange, have been accounted for, the AN_k^-3ln(N_k^-1) model, which results from the contribution to the structure
factor quadratic in q, still describes the convergence
to the thermodynamic limit best and virtually lies on top of the data points for N_k > 10.
§.§ Comparison to G_0W_0
To test the validity of the derived convergence rate for one-dimensional systems
outside of CC theory, the leading-order contribution determined in the previous
section is applied to the GW approximation, as well.
For that purpose, for the same system sizes that were previously computed via
IP- and EA-EOM-CCSD, the band gaps were obtained using the G_0W_0 method
with a hf starting point (G_0W_0@HF) to allow for a direct comparison to
the EOM-CC results.
The performance of the previously extracted AN_k^-3ln(N_k^-1)
convergence rate applied to both EOM-CCSD and G_0W_0@HF is shown in
Figure <ref>. There, one observes that
the convergence behavior of the GW method with respect to the number
of 𝐤-points N_k almost mirrors the one of the EOM-CCSD
method. In the same way the leading-order term originating from the
q^2 contribution to the EOM-CC structure factor models the band gap
convergence of both theories accurately over the entire range of
N_k=8 to N_k=24.
In an analogous step as in Section <ref>, the comparison
between EOM-CCSD and G_0W_0 band gaps was repeated after converging
the underlying hf orbitals and eigenenergies via down-sampling. As previously, for
that purpose the hf calculation was performed on a 𝐤-grid
of size 1× 1× 48 or more and the resulting hf orbitals and eigenenergies
were subsequently used to compute the IP/EA-EOM-CCSD and G_0W_0
band gaps on 𝐤-meshes between 1× 1× 8 and 1× 1× 24.
The resulting convergence to the bulk-limit for the two methods
is shown in Figure <ref>.
Figure <ref> shows
that after the convergence of the single-body contributions of
the underlying hf method has been ensured, the remaining
contributions to both the EOM-CCSD and G_0W_0 quasi-particle energies
converge with the previously identified leading-order model
AN_k^-3ln(N_k^-1). While the down-sampling reduces
the individual band gaps of both methods most notably for small values
of N_k, the convergence behavior and the band gap in the
tdl remain essentially unaffected: the extrapolated G_0W_0@HF
band gap in the tdl changes by less than 6 meV.
Figures
<ref> and
<ref>
also show the differences between
G_0W_0 and EOM-CCSD band gaps retrieved as a function of N_k without and with applying
the down-sampling technique, respectively.
We note that the convergence of the difference can also be well
approximated using an AN_k^-3ln(N_k^-1) model.
The relatively small fitting parameter indicates
that both methods capture contributions to the
correlation energy with a similar magnitude and the same leading-order behaviour.
In other words, although the G_0W_0 and EOM-CCSD band gaps exhibit a
finite size error that is similar in magnitude and converges with the same analytical
behavior to the tdl, there exist
different leading-order diagrammatic contributions of both methods.
In practice one can still benefit from the similarity of both methods
by correcting the finite size error of EOM-CCSD band gaps using
the computationally significantly cheaper G_0W_0 method.
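In practice such a GW-aided correction could look as follows; all numbers are made-up placeholders used only to illustrate the bookkeeping.

# Sketch of a GW-aided finite-size correction for an EOM-CCSD band gap.
# All values below are illustrative placeholders, not results from this work.
gap_eomccsd_small_mesh = 5.45   # EOM-CCSD gap (eV) on an affordable 1x1x8 mesh
gap_gw_small_mesh = 5.30        # G0W0@HF gap (eV) on the same 1x1x8 mesh
gap_gw_tdl = 5.05               # G0W0@HF gap (eV) extrapolated to the TDL

# Since both methods share the same leading-order finite-size behavior, the
# (cheap) G0W0 finite-size error serves as a proxy for the EOM-CCSD one:
delta_fs_gw = gap_gw_tdl - gap_gw_small_mesh
gap_eomccsd_corrected = gap_eomccsd_small_mesh + delta_fs_gw
print(f"corrected EOM-CCSD gap ~ {gap_eomccsd_corrected:.2f} eV")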
As noted before, the long-range behavior of
the EOM-CC structure factor is independent of the dimension of the
system. Therefore, one can infer that a GW-aided finite-size
correction technique for EOM-CC theory can in principle be performed
for one-, two- and three-dimensional systems alike.
An important question remaining is: What diagrammatic contributions
account for the differences between G_0W_0 and EOM-CCSD in the long-wavelength limit?
Although, the present work does not provide an explicit answer,
we note that previous studies performed detailed investigations of the
relationship between the GW approximation and the
EOM-CC framework.
In the first study of this sort, performed by
Lange and Berkelbach<cit.>, it was found that
the G_0W_0@HF approximation features more higher-order ring
diagrams than the Green's function of IP- and EA-EOM-CCSD.
These diagrams are known to be particularly relevant
for the description of electronic correlation in the long-wavelength limit of the ground state
for metallic systems <cit.>.
Since then, different flavors of the GW approximation have been
reformulated in a CC-like fashion. Bintrim and Berkelbach <cit.>
presented the EOM-CC-like working equations for the G_0W_0 method
in the Tamm-Dancoff approximation (G_0W_0-TDA) resulting in a frequency-independent
method.
More recently, the exact equivalence between the G_0W_0@HF method and the unitary
IP/EA-EOM-CCSD method using the quasi-boson approximation has been derived <cit.>.
We also note that different Green's function methods have been formulated based
on the CC formalism <cit.>,
offering an alternative avenue to compare CC and GW methods.
The above relationships shall be further analysed in future work to develop computationally efficient and
accurate finite-size corrections for EOM-CCSD band gaps.
§ CONCLUSION
We have investigated the convergence of the EOM-CC band gap to the tdl
by a formal analysis of the correlation structure factor of the IP- and
EA-EOM-CCSD method S^IP/EA(𝐪) in the long-wavelength limit
(|𝐪|→ 0). As a result, we derived the convergence rate for
the one-, two- and three-dimensional case. In order to verify the validity of
that approach, we focused on the one-dimensional case to be able to
compare to numerical results, using a chain of LiH unit cells and a
chain of trans-Polyacetylene as examples. Visualizing the
EOM-CC structure factor and modeling the finite-size
convergence using the derived convergence rate for one-dimensional systems suggests
that the band gap converges to the tdl with AN_k^-3ln(N_k^-1)
in leading order for N_k →∞.
In analogy to the one-dimensional case, we expect an AN_k^-3/2
behavior for two-dimensional systems and an AN_k^-1 behavior
for three-dimensional ones.
Finally, we verified that our findings extend
beyond EOM-CC theory and apply to the G_0W_0 method as well.
We find that band gaps converge with the same rate in both
theories,
providing a formal justification to use the GW approach for the extrapolation
of single-k-point EOM-CC results to the tdl.
E.M. is thankful to Min-Ye Zhang and Sebastian Kokott for valuable discussions.
This project was supported by TEC1p [the European Research Council (ERC) Horizon 2020 research
and innovation program, Grant Agreement No.740233].
|
http://arxiv.org/abs/2409.02755v1 | 20240904143407 | Stability of standing periodic waves in the massive Thirring model | [
"Shikun Cui",
"Dmitry E. Pelinovsky"
] | nlin.SI | [
"nlin.SI",
"math-ph",
"math.AP",
"math.DS",
"math.MP",
"nlin.PS"
] |
|
http://arxiv.org/abs/2409.02507v1 | 20240904080655 | Thermoelectricity at a gallium-mercury liquid metal interface | [
"Marlone Vernet",
"Stephan Fauve",
"Christophe Gissinger"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cond-mat.mtrl-sci",
"physics.app-ph"
] |
[1]Marlone Vernet
[2]Stephan Fauve
[3,4]Christophe Gissinger
[1]Laboratoire de Physique de l'ENS, ENS, UPMC, CNRS; 24 rue Lhomond, 75005 Paris, France
[2]Laboratoire de Physique de l'ENS, ENS, UPMC, CNRS; 24 rue Lhomond, 75005 Paris, France
[3]Laboratoire de Physique de l'ENS, ENS, UPMC, CNRS; 24 rue Lhomond, 75005 Paris, France.
[4]Institut Universitaire de France
The Seebeck effect is the conversion of heat into electricity, usually achieved by thermoelectric devices using solid electrical conductors or semiconductors. Here is reported the first evidence of this effect at the interface between two metals that are liquid at room temperature, gallium and mercury. The liquid nature of the interface significantly alters the usual temperature distribution, leading to an abnormally high current density near the boundaries. In the bulk, the thermoelectric current interacts with a magnetic field to produce efficient thermoelectric pumping of fluids. This effect may be of prime importance in several industrial and astrophysical systems, such as the promising liquid-metal batteries and Jupiter's magnetic field.
M.V. contributed equally to this work with S.F. and C.G.
To whom correspondence should be addressed. E-mail: [email protected]
§ ABSTRACT
We present experimental evidence of a thermoelectric effect at the interface between two liquid metals. Using superimposed layers of mercury and gallium in a cylindrical vessel operating at room temperature, we provide a direct measurement of the electric current generated by the presence of a thermal gradient along a liquid-liquid interface. At the interface between two liquids, temperature gradients induced by thermal convection lead to a complex geometry of electric currents, ultimately generating current densities near boundaries that are significantly higher than those observed in conventional solid-state thermoelectricity. When a magnetic field is applied to the experiment, an azimuthal shear flow, exhibiting opposite circulation in each layer, is generated. Depending on the value of the magnetic field, two different flow regimes are identified, in good agreement with a model based on the spatial distribution of thermoelectric currents, which has no equivalent in solid systems. Finally, we discuss various applications of this new effect, such as the efficiency of liquid metal batteries.
(published article available at https://www.pnas.org/doi/abs/10.1073/pnas.2320704121)
Thermoelectricity at a gallium-mercury liquid metal interface
[
==============================================================
Thermoelectricity describes the conversion of heat into electricity and vice versa. This captivating interplay has long intrigued physicists, as it offers a glimpse into the complex relationship between energy, temperature and matter <cit.>.
The thermoelectric Seebeck effect is perhaps the best illustration of this: when a temperature gradient is established at the junction of two electrically conducting materials, a thermoelectric current flows between the "hot" and "cold" regions. This configuration can be achieved very simply by layering two metals atop each other and applying a horizontal temperature gradient along the interface.
In addition to its implications for fundamental physics, thermoelectricity has left an indelible mark on modern engineering thanks to the many applications developed over the last century. For example, thermocouples are widely used as temperature sensors, while emerging applications include thermoelectric coolers for portable refrigeration <cit.>, or the use of thermoelectric materials in space missions for their ability to generate electricity from temperature differences in harsh environments <cit.>. Thermoelectricity is an environmentally friendly technology for converting waste heat into electrical energy.
Thermoelectricity also extends to liquid systems, such as electrolytes <cit.> , liquid metals, or semi-conductors. During the growth of a semiconductor crystal <cit.> or the solidification of a metal alloy <cit.>, a thermoelectric current naturally appears at the liquid-solid interface due to the Seebeck effect. When subjected to a magnetic field, these currents can then produce significant flow motions in the melt. This surprising effect traces back to the pioneering work of Shercliff <cit.>, who introduced the concept of thermoelectric magnetohydrodynamics (TEMHD) to describe the interaction between a liquid metal and the container wall: when a magnetic field B and a temperature gradient are applied to a solid-liquid interface, the thermoelectric current J generated by the Seebeck effect interacts with the magnetic field to produce a Lorentz force J× B, which drives significant flow motions. Since Shercliff, only a few studies have provided experimental data on this effect. In the context of fusion energy, where TEMHD-induced flows can provide an effective cooling blanket <cit.>, a single experiment has reported velocity measurements in a divertor made of liquid lithium <cit.> heated by an electron beam. More recently, temperature measurements have been reported in an experiment that suggests an interesting interaction between thermoelectricity and magneto-convection producing periodic oscillations <cit.>.
This paper reports the first experimental evidence of thermoelectricity at the interface between two liquid layers. This configuration is different from the classical thermoelectric effect, as the vessel walls, electrically insulating, are not involved in the generation of the current, which now occurs along a free interface between the two fluids. In particular, the temperature and current density distributions are different from the classical situation. The interest of our study is twofold. First, by using two liquid metals at room temperature, we aim to provide quantitative measurements of velocity, temperature, and electric potential associated with a simple theoretical model to describe precisely the dynamics of this new type of thermoelectricity. Second, these experimental results can be extrapolated to make predictions for several industrial and astrophysical systems where this effect can play a major role, in particular liquid metal batteries and Jupiter's magnetic field.
§ EXPERIMENTAL SETUP
The experiment consists of a cylindrical annulus with a rectangular cross-section. The height is h=50 mm, and the radii of the inner and outer cylinders are respectively R_i=37 mm and R_o=100 mm, corresponding to an aspect ratio close to one Γ = L/h ∼ 1.26 with L=(R_o-R_i) the cylindrical gap. (see Fig.<ref>).
The tank is filled with a layer of liquid gallium on top of an equally thick layer of liquid mercury. To avoid solidification of the gallium, which has a melting point of 29.7 ^∘ C, the tank is maintained at at least 35 ^∘ C. To our knowledge, this is the first experiment on the dynamics of a gallium-mercury interface, providing a direct study of a conducting liquid-liquid interface at room temperature; mercury and gallium are almost immiscible. To maintain the immiscibility of the two fluids, all our experiments are limited to T<80 ^∘ C.
To avoid mixing the two layers, the mercury is first introduced into the tank. The liquid gallium is then gently deposited on the surface of the mercury through a tube in which the flow is kept at a very low rate. The binary Hg/Ga phase diagram confirms the proper separation of the two liquid metals:
at this temperature, the mercury layer contains 3% mass gallium at most, and the interface remains well defined <cit.>. The inner and outer cylinders are made of copper and electrically insulated from fluids by an epoxy resin Duralco 128. The endcaps are 10 mm thick, electrically insulating PEEK plates. Both cylinders are connected to thermal baths to impose a radial temperature gradient. The inner cylinder is heated by water circulation controlled by a refrigeration circulator Lauda 1845 and the heat is removed from the outer cylinder by an oil circulation system controlled by a Lauda T10000 thermal bath. Some of our results are obtained in the presence of a magnetic field. For this purpose, the tank is placed between two large Helmholtz coils with an inner diameter of 500 mm powered by a DC current supply ITECH IT6015D 80 V-450 A, which produces a constant and homogeneous vertical magnetic field of 80 mT maximum. The experiment can thus be controlled by two external parameters, namely the applied magnetic field B_0 and the temperature difference Δ T_0=T_i-T_o imposed between the two cylinders.
Temperature is measured inside the inner and outer cylinders, and in the tank, using Pt100 platinum resistance sensors. Five sensors are evenly distributed along a vertical line in each cylinder, while 14 sensors are glued to the top endcap, in contact with the gallium, along a line running from the inner to the outer cylinder (labeled 2 to 15 in the following). Four holes are drilled in the top endcap for various measurements: flow velocity and thermoelectric currents are obtained using electric potential measurements, while Hall probes are used to measure the magnetic field. Temperature measurements are acquired using a Keithley 3706A signal-switching multimeter, while potential measurements, particularly weak, are processed using a nano-voltmeter (Keysight 34420A). All signals are then transmitted to the computer via a data acquisition card National Instrument 6212 controlled by scripts Python.
With two liquid layers, the temperature distribution responsible for the thermoelectric effect is entirely governed by fluid motions on either side of the interface. Indeed, the temperature gradient between the cylinders generates horizontal thermal convection in both layers, with typical Rayleigh numbers of the order of Ra=[10^4-10^5] (See SI Appendix for calculation), where Ra = α g Δ T_0 Δ R^3/κν and α is the thermal dilatation coefficient, κ is the thermal diffusivity and ν is the kinematic viscosity.
For the Rayleigh numbers reported here, vigorous convection is expected. Although determination of the exact regime would require a separate study, it is plausible that our intermediate values of Ra favor boundary-layer-dominated heat transfer, characterized by efficient turbulent heat transport in the bulk, and significant diffusive transport in the thin thermal boundary layers. This interpretation is confirmed by our temperature measurements:
Fig.<ref> shows the temperature profile measured in the gallium layer, at the top endcaps, for a series of runs at B_0=0 and Δ T_0 ranging from 0 to 37 K. It shows that the convective motions, although weak, are sufficient to transport heat and significantly flatten the temperature profile in the bulk. This scenario markedly contrasts with the typical diffusive thermal gradient observed in solids.
Most of the temperature drop is therefore confined to thin thermal boundary layers close to the cylinders. The inset in Fig. <ref> shows, however, that the temperature gradient in the volume Δ T_B=T_15-T_2 depends linearly on the applied temperature drop Δ T_0. As liquid metals are very good thermal conductors, we expect the interface temperature to follow this profile closely.
§ SEEBECK EFFECT
In each fluid layer, the Ohm's law in the presence of a thermal gradient reads:
j/σ = E - S∇T,
where j is the electric current density, σ is the electrical conductivity, E is the electric field, T is the temperature and S is the Seebeck coefficient. For gallium and mercury, the values are given as σ_Ga = 3.87× 10^6 S.m^-1, σ_Hg = 1.1× 10^6 S.m^-1, S_Hg=-6.5 μ V.K^-1 and S_Ga=0.5 μ V.K^-1 <cit.>.
The production of thermoelectric current is made possible because the Seebeck coefficient S depends not only on temperature but also the substance: in a uniform medium, the electric field is rearranged to compensate for the Seebeck effect and prevent the emergence of an electric current, E=-S∇T, a consequence of the fact that ∇×(S∇T)=0. To generate a net thermoelectric current, it is therefore necessary to misalign the temperature and Seebeck coefficient gradients, which can be achieved simply by generating a thermal gradient along an interface between two metals.
In the quasi-static limit, ∇×E=0 allows us to write E=-∇V.
In addition, charge conservation ∇·j=0 implies that the electric potential follows a Poisson equation in each layer:
∇^2 V = -S∇^2 T
Combined with the appropriate boundary conditions at the interface between the two metals, these equations describe the generation of a Seebeck effect between the Gallium and Mercury layers. The detailed solution of equation (<ref>) provides V, j, and the corresponding magnetic field B. It is tedious enough to have been left in the Supp. Mat. and simplified by using cartesian geometry and a temperature field independent of z. This simplified model shows that an electric current can flow through liquid metals in response to a horizontal thermal gradient, even with the unusual geometry involving complete short-circuiting of the two layers along the interface. More precisely, the thermoelectric current depends critically on the temperature profile at the interface and it exhibits a linear dependence on the effective conductivity, σ̃ = σ_Hgσ_Ga/(σ_Hg+σ_Ga) and the difference in Seebeck coefficients, Δ S = S_Hg - S_Ga. In addition, calculations show that the thermoelectric current loop induces a measurable voltage drop between mercury and gallium.
Experimentally, the thermoelectric effect can be evaluated directly via the electric potential difference between two points on either side of the liquid-metal interface (see Fig. <ref>), related to the current by:
δ V = -∫_A^Bj/σ·dl -∫_A^BS∇T·dl
where this integration of equation (<ref>) can be done along any path from A to B. In the experiment, we measure this voltage between two nickel wires, fully coated except at their ends, and placed so that the wire tips are located at mid-radius r=r_i+L/2, inside each layer, at approximately 3 mm from the interface.
Fig. <ref>(a) shows the evolution of voltage as a function of the imposed temperature gradient Δ T_0. The measured voltage displays a linear evolution with Δ T_0 and reaches about 15 μ V for Δ T_0∼ 37K, therefore demonstrating the existence of a thermoelectric effect generated at the interface between two liquid metals.
In agreement with the theoretical predictions of our simplified model, the voltage δ V is approximately linearly related to the temperature difference applied between the two cylinders. However, accurately determining the maximum voltage measured in the experiment is challenging due to several factors not accounted for in the theory. These include geometric effects, contact properties at the interface, oxidation of gallium, miscibility thickness, convective motions, and the vertical thermal gradient. Each of these factors can significantly influence the numerical value of δ V.
In fact, the liquid nature of the two layers is the key to understanding the magnitude of this thermoelectric effect. Unlike solid-state thermoelectricity and thermocouples, which involve connected electrical wires, the geometry of currents in this case is not prescribed, and thermoelectric currents are subject to the powerful convective motions of liquids. In the next section, we will show how turbulent convection, by modifying the temperature profile along the interface, leads to a complex distribution of thermoelectric currents in the bulk flow and particularly high current densities near thermal boundary layers.
§ GEOMETRY OF THE ELECTRIC CURRENTS
Numerical integration of equation (<ref>) using the parameters of the experimental setup (see Method section) and a piecewise linear thermal gradient, for B_0=0.
(a) colorplot of the induced magnetic field and associated current streamlines, in the case of a purely conductive temperature solution. (b) radial profiles of the corresponding temperature (black) and the radial current induced at z=1 mm from the interface (red). (c) and (d) are the same, but
for a piecewise temperature gradient typical of convection. Near the cylinders, the thermal boundary layers generate a very large current density, 10 times larger than the value expected with solid-state conventional thermoelectricity. The dashed (resp. dashed-dotted) line shows the simple prediction (<ref>) for bulk (resp. boundary) density currents.
This complex dependence on the temperature profile contrasts sharply with what is observed in solid-state thermoelectricity, and even in classical thermoelectric MHD, where the two temperatures imposed at the conducting walls always drive the current measured in the bulk. This is because the temperature profile is extremely different from the linear thermal gradient observed in solid conductors, and the geometry of the current becomes different from the naive picture described above and sketched in Fig.<ref>. To understand how a liquid-liquid interface affects the distribution of thermoelectric currents, we carried out 2D axisymmetric numerical simulations of Ohm's relation (<ref>) in the cylindrical geometry of the experiment and using the physical properties of gallium and mercury (see the Method section).
Although only the numerical integration is discussed here, the Supplementary Materials show that identical results are obtained with the analytical calculation (see Supp. Mat. for a detailed description of the analytical model).
Fig. <ref>(a) shows a simulation computed using boundary temperatures obtained experimentally at Δ T_0=37K (namely T_h=82^∘ C and T_c=45^∘ C ) but with a temperature profile T=Alog(r)+B, solution of ∇^2T=0, as if the metals were solid. In this case, the field geometry is as expected, with an electric current predominantly horizontal at the center of the cell, forming a poloidal loop around the interface.
The order of magnitude of the bulk current can be simply recovered by performing the curvilinear integral along a closed loop 𝒞 of equation (<ref>), which leads to ∮_𝒞j·dl/σ≈ -Δ S Δ T
with Δ S assumed independent of T, and Δ T is the temperature difference between the two points where 𝒞 crosses the interface.
By assuming a predominantly horizontal current density in the bulk, away from the boundaries, so that charge conservation leads to an identical horizontal current |j| in each layer (ignoring curvature), this relation can be integrated and provides a simple estimate of the current density:
j∼σ̃Δ SΔ T/ℓ
where Δ T=T_h-T_c is the temperature difference driving the currents with T_h (resp. T_c) representing the hot (resp. cold) temperature and ℓ is the typical length of temperature variation responsible for the thermoelectric current. As usual, the amplitude of the thermoelectric current thus depends on the jump of Seebeck coefficients between the two materials and the temperature difference between the "hot" and "cold" regions of the interface. In the case of solid metals, it is clear that ℓ=L and T_h-T_c=Δ T_0, and Fig. <ref>(b) shows that the radial current in the middle of the gap is of the order of J∼σ̃Δ SΔ T_0/L (blue dotted line), as expected in solid-state thermoelectricity.
But as shown in Fig.<ref>, the actual temperature profile for liquid metals is radically different and instead displays a piecewise constant gradient involving two very strong thermal gradients confined to thin boundary layers of thickness δ_BL, connected by a gentler linear variation in the bulk. Such a profile is forbidden in the presence of a solid boundary and is only possible here due to vigorous thermal convection in the two liquids on either side of the interface. In the presence of liquid layers, the choice of T_h, T_c, and ℓ is thus highly nontrivial. Fig. <ref>(c) shows a typical numerical integration using such an experimental profile (i.e. the piecewise linear fit shown in red in Fig.<ref>). Far from the boundaries, the geometry of currents remains relatively similar to the previous case. The corresponding radial profile in Fig. <ref>(b) shows that currents reach a plateau in the bulk, with a magnitude that corresponds exactly to the prediction J∼σ̃Δ SΔ T_B/L (dashed line). These simulations therefore show that the thermoelectrical current generated in the bulk is not directly due to the temperature drop Δ T_0 imposed at the boundaries but is rather driven by the lower thermal gradient that subsequently occurs in the bulk outside the boundary layers, characterized by the temperature difference Δ T_B.
On the other hand, Fig.<ref>(c) clearly shows that two additional thermoelectric current loops are induced by the large temperature gradient in the thermal boundary layers. These currents are located fairly close to the cylinders, but the current density is surprisingly high: for Δ T_0=37K, it can reach j∼ 3× 10^4 A/m^2 (see Fig.<ref>,d), 40 times higher than in the bulk. Interestingly, this value is also one order of magnitude higher than the one expected in the case of solid metals (Fig.<ref>,b). This high value can easily be understood as a local generation of thermoelectric currents by the strong temperature gradient Δ T_BL in the thermal boundary layer of thickness δ_BL. Hence, the estimate j∼σ̃Δ T_BLΔ S/δ_BL, where Δ T_BL is the temperature drop inside the boundary layer provides the correct value of this anomalously high density current (dotted line in Fig.<ref>(d)).
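As a consistency check, both current densities follow from a few lines of arithmetic with the material parameters quoted above; in the sketch below the bulk and boundary-layer temperature drops (8 K and roughly 14.5 K) are assumed values read off the measured profiles, not figures taken from the original analysis.

# Order-of-magnitude check of the thermoelectric current densities (A/m^2).
sigma_ga, sigma_hg = 3.87e6, 1.1e6                 # S/m
sigma_eff = sigma_ga * sigma_hg / (sigma_ga + sigma_hg)
delta_S = abs(-6.5e-6 - 0.5e-6)                    # |S_Hg - S_Ga|, V/K

L, delta_bl = 0.063, 3e-3                          # gap width and thermal boundary layer, m
dT_bulk, dT_bl = 8.0, 14.5                         # assumed temperature drops, K

j_bulk = sigma_eff * delta_S * dT_bulk / L         # ~8e2 A/m^2
j_bl = sigma_eff * delta_S * dT_bl / delta_bl      # ~3e4 A/m^2
print(j_bulk, j_bl, j_bl / j_bulk)                 # ratio ~40, as quoted above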
The liquid nature of the interface therefore produces a non-trivial distribution of thermoelectric currents, well illustrated by the saddle point formed by the currents at the interface (indicated by the blue point in Fig.<ref>(c) ). The radial position of this saddle point depends on the details of the configuration, but its existence is an unavoidable consequence of the non-linear temperature gradient produced in the liquids.
These high current densities cannot be directly detected in the experiment due to their confinement near the walls, where electrical measurements are unavailable. However, in the next section, we demonstrate that surface velocity measurements, conducted in the presence of a magnetic field applied to the layers, can infer the existence of these high current densities and provide an accurate estimate of the value of bulk currents. Note that the analytical calculation in Supp. Mat shows that this peculiar geometry of the currents is driven by the temperature at the interface and can not be observed in the case of a liquid in contact with a conducting wall, for which the thermal gradient is constant at the liquid/solid boundary. This highlights the essential role of the fluid motions near the interface for the dynamics of thermoelectric currents.
§ THERMOELECTRIC MAGNETOHYDRODYNAMICS
The experiment is now subjected to a vertical homogeneous magnetic field B_0 using the two coils. In the presence of a magnetic field, Ohm's law (<ref>) is modified as follows to take into account the magnetic induction:
j/σ = -∇V + u×B - S∇T,
where u denotes the velocity field and B is the magnetic field. In the presence of this field, the horizontal thermoelectric currents described above generate an azimuthal Lorentz force, directly proportional to the product of B_0 and the temperature difference Δ T_B producing the currents. In this configuration, the azimuthal velocity u_φ can be obtained by measuring the voltage between two wires both located in liquid gallium (12 mm above the mercury-gallium interface), so that the contribution of the thermoelectric current can be neglected <cit.>. In Fig.<ref>, we report the time-averaged value of u_φ as a function of B_0, for different fixed values of the temperature difference Δ T_0. Even a moderate temperature gradient can produce a relatively vigorous motion of the liquid gallium, which reaches nearly ∼ 15cm/s for B_0=56mT and Δ T_0=37K. Note that, as the current changes sign in each layer, this Lorentz force causes the two liquid metals to rotate in opposite directions, generating a strong azimuthal shear flow at the interface. In what follows, we only measure the velocity field generated in the upper layer of liquid gallium, but it should be kept in mind that a similar flow occurs in the bottom layer (albeit somewhat weaker due to the lower conductivity and higher density of mercury). If the applied magnetic field changes sign, the direction of the azimuthal velocity is reversed, as expected.
The flow has two distinct behaviors, depending on the relative magnitudes of the magnetic and velocity fields. At a small magnetic field, as long as u_φ<10cm/s or so, the velocity increases rapidly with the magnetic field, and most of the data collapse to the prediction u_φ∝(B_0)^2/3. This exponent has been reported in several recent experimental and numerical studies, in which a conducting fluid is driven by an electromagnetic force <cit.>. It is relatively simple to extend these previous studies to thermoelectric currents generated in the liquid gallium: as suggested by Fig.<ref>, the current density in the bulk is distributed over the entire layer h/2, so that the azimuthal Lorentz force balances the inertia j_TEB_0∼ρ u_ru_φ/r. Near the endcap and the interface, the imbalance between the pressure gradient and vanishing centrifugal force produces a radial flow u_r^BL in the viscous boundary layers, such that u_φ^2/r ∼ν u_r^BL/δ_B^2 with δ_B=√(ν r/u_φ) the thickness of the Bödewadt boundary layer. Combining these two relations and using an incompressibility condition 2u_r^BLδ_B∼ u_rh/2, we finally obtain a prediction for the mean azimuthal velocity field:
u_φ∼(j_TE(r)B_0h√(r)/4ρ√(ν))^2/3∼(σ̃Δ SΔ T_BB_0h√(r)/4Lρ√(ν))^2/3
where we used j_TE∼σ̃Δ SΔ T_B/L to obtain the final expression. This prediction is indicated by the red dashed line in Fig.<ref>. It shows reasonable agreement with the experiment, despite some scatter in the data. More importantly, this agreement confirms that the bulk temperature drop Δ T_B (and not Δ T_0) is responsible for driving the flow, at least in the middle of the gap.
At a sufficiently large magnetic field, the velocity field reaches a plateau, in which the flow no longer depends on the magnetic field and is driven solely by the temperature gradient at the interface. This regime is also relatively similar to what has been described for strongly magnetized flows subjected to external currents <cit.>. We briefly recall below the main derivation for this classical prediction, adapting it to the thermoelectric case. This plateau can be interpreted as a fully magnetized regime, in which the currents induced by the flow motions in the bulk become sufficiently large to oppose the applied thermoelectric currents, i.e. σ u_φ B_0∼ j. As a result, the thermoelectric currents flow through two thin Hartmann boundary layers generated at the endcap and at the interface (where the velocity must be zero due to the symmetry of the counter-rotating flow). The current density in these horizontal boundary layers can be estimated to j∼ j_TEh/(4δ_Ha) where δ_Ha∼√(σ/ρν)/B_0 is the thickness of Hartmann boundary layers. We then obtain a second prediction, independent of the magnetic field:
u_φ∼j_TE/4√(ρνσ)∼σ̃Δ SΔ T_B/4L√(ρνσ)
where again j_TE∼σ̃Δ S Δ T_B/L has been used.
For Δ T_B∼ 8 K this prediction gives u_φ∼ 13 cm.s^-1 (blue dashed line in Fig. <ref>), which is in good agreement with the plateau measured at high magnetic field.
To further test this prediction, we report in Fig.<ref> the azimuthal velocity u_φ as a function of the measured bulk temperature gradient, showing that the flow depends linearly on the thermal gradient generated in the bulk and follows closely prediction (<ref>) (blue dashed line in Fig.<ref>). Finally, note that the transition between the inertial-resistive regime (<ref>) and the fully magnetized regime (<ref>) should occur when magnetic and rotational effects are in balance, i.e. when the Elsasser number Λ = σ B_0^2/ρΩ is close to unity <cit.> where Ω = u_φ/r. The intersection of the two predictions in Fig.<ref> is obtained for Λ_c≃ 0.9, in agreement with this picture.
To go beyond these local measurements and demonstrate the existence of large current densities at the boundaries, we carried out a few runs without the top endcap, so that the gallium phase displays a free interface. To prevent excessive oxidation of the gallium, the latter is in contact with a thin layer of hydrochloric acid HCl, which then replaces the endcap. Using the presence of small oxides on the free surface, the velocity field is characterized by particle tracking using a CMOS camera with a resolution of 1080x2049 and an acquisition frequency of 30Hz.
This approach has several drawbacks compared with local potential measurements: the density of the oxides is quite different from pure gallium, and their motion is slowed down by the friction from the HCl layer. This considerably underestimates the magnitude of the flow immediately below the free surface. But it also offers some advantages. To our knowledge, this is the first direct visualization of the thermoelectric pumping of a liquid metal (see the movie in supplementary materials), which allows us to study the spatial structure of the flow.
Fig. <ref> shows the azimuthal velocity profile u_φ obtained for B_0=36mT and Δ T_0=37K. At the surface, the measured velocity of the oxides is relatively fast, reaching u_φ∼ 2 cm/s near the inner cylinder. Because of the drag produced by the HCl, it is difficult to deduce the absolute value of the velocity in the gallium phase immediately below this interface, but we expect the measured velocity profile to be a good proxy of the one in the bulk. Close to inner and outer radial boundaries, the azimuthal velocity u_φ sharply increases, that can only be explained by the presence of an increasing magnetic forcing near the boundary. This additional rotation therefore provides an indirect measure of the large thermoelectric current density predicted by our calculations in Fig.<ref>. In Fig.<ref>, we plot this theoretical profile of the radial current, averaged in z over the whole layer of Gallium (red solid line). This current, induced by the thermal boundary layers, combines with the homogeneous magnetic field to produce a Lorentz force much larger at the boundaries. Although it is difficult to extrapolate from these measurements, it is interesting to note that the boundary current density, about 10 times greater than that generated in the bulk, could lead to an azimuthal flow near the boundaries much faster than the one in the bulk.
§ DISCUSSION AND CONCLUSION
Although thermoelectric MHD has been discussed previously in the literature, the results reported here describe a different type of thermoelectricity. The liquid nature of the two conductors leads to a more complex temperature distribution, generating anomalously strong density currents near the boundaries and driving an azimuthal shear flow in the bulk. This situation can occur in a variety of contexts, and it is appropriate to conclude this paper with a brief discussion of these possible applications.
Liquid metal batteries (LMBs) comprise three layers of different conducting fluids (top and bottom electrodes and a middle electrolyte) that self-segregate based on density and immiscibility and are subjected to electric current flowing through the fluids. Designed to store energy very efficiently, these low-cost, high-capacity, long-lasting, and easy-to-manufacture batteries could one day play a vital role in the massive expansion of renewable energy.
Due to the high operating temperature of these systems, one could expect significant horizontal temperature gradients at the interfaces between liquid metals and the electrolyte.
A crude estimate can be made using the properties of lithium-bismuth batteries Li|| LiCl-KCl|| Pb-Bi, given in table <ref> <cit.>. The Seebeck coefficient of liquid lithium is S_Li = 26 μ V.K^-1 <cit.>. It is more difficult to estimate the Seebeck coefficient of the electrolyte, but values for LiCl around [100-1000] μ V.K^-1 can be used here as an estimate of typical molten salt electrolytes. For a typical battery delivering 100 A and operating at T>500 ^∘ C during charging and discharging, the vertical magnetic field can be estimated at 1G <cit.>. For a typical cell with moderate size r∼ h ∼ 20 cm, applying a typical horizontal temperature gradient in the range 10-20 K <cit.> could produce thermoelectrical flows of u_φ∼ 3 mm.s^-1 according to prediction (<ref>). Such a flow magnitude is comparable to, perhaps larger than other phenomena expected in LMBs, such as Benard-Marangoni <cit.> or flows induced by the Tayler instability <cit.>. Note that a similar flow in opposite direction is expected in the electrolyte layer. Unlike these other sources of motion, thermoelectric stirring does not rely on instability. With simple control of the horizontal thermal gradient in the cell, this shear flow could be used to significantly increase LMB efficiency by enabling the kinetic reaction and influencing the transfer of Li^+ ions through the electrolyte layer and into the Pb-Bi phase.
Note, however, that these considerations are only valid in the absence of an externally imposed magnetic field. Such a field, often considered as a means of suppressing some undesirable instabilities, could then become harmful: our flow predictions show that the Seebeck effect could produce a significant thermoelectric pumping, possibly capable of destabilizing the interface and thus short-circuiting the two electrodes.
The thermoelectric effect has also been proposed to explain some features of the magnetic fields of the Earth and Mercury <cit.>, where a thermoelectric interface is expected between liquid iron and semiconducting silicate rocks at the core-mantle boundary of these planets. The theoretical expressions reported here provide new quantitative predictions about the regimes eventually reached in these systems. Furthermore, the liquid-liquid interface specifically addressed here may be relevant to other astrophysical bodies. Jupiter is probably the best example. At 85% of its radius, it exhibits an abrupt transition between an inner region of metallic hydrogen and an outer atmosphere of liquid molecular hydrogen. Since non-negligible meridional temperature variations are expected along this interface, it bears many similarities to the configuration described here. Here again, coefficients are relatively difficult to estimate, but let's assume that Δ S and σ̃ are both dominated by values of the semiconducting molecular hydrogen close to the transition with the metallic layer, such that Δ S∼ 1 mV.K^-1 and σ̃∼ 10^4.
In this case, temperature variations of the order of 1 K would lead to a local azimuthal magnetic field B_φ∼μ_0σ̃Δ TΔ S of the order of 10 μ T, a non-negligible fraction of the non-dipole radial magnetic field reported recently <cit.>. In addition, this thermoelectric current, presumably meridional, can interact with the planet's radial magnetic field to generate complex zonal flows. Similar arguments could be made for stellar interiors at the transition between radiative and convective regions.
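The quoted order of magnitude for Jupiter follows from a one-line evaluation of B_φ∼μ_0σ̃Δ TΔ S with the assumed coefficients (this is only that arithmetic, not a model of the planet):

import math
mu0 = 4 * math.pi * 1e-7       # vacuum permeability, T m/A
sigma_eff = 1e4                # assumed effective conductivity, S/m
delta_S = 1e-3                 # assumed Seebeck jump, V/K
delta_T = 1.0                  # meridional temperature variation, K
print(mu0 * sigma_eff * delta_S * delta_T)   # ~1.3e-5 T, i.e. of the order of 10 microtesla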
A final comment must be made on the very large current density induced by thermal boundary layers. The liquid-liquid interface increases the current density by a factor of L/δ compared with a conventional solid thermocouple, where L and δ represent the size of the thermocouple and the size of the thermal boundary layer respectively. In the context of a transition to sustainable energy sources, efficient waste heat recovery generally involves large-scale systems with a substantial temperature gradient, two ingredients that maximize L/δ. In this case, using a liquid metal interface to convert heat into electricity may increase the efficiency of thermoelectric devices by several orders of magnitude. As the Prandtl number is small in liquid metals, the thermal layer is thicker than the viscous layer, which ensures that the boundary currents efficiently drive the fluids in the presence of a magnetic field. This possibility obviously requires further theoretical study, but it could offer an interesting new mechanism for converting heat into mechanical energy.
§.§ Experimental measurements
As shown in Fig.<ref>, the experiment is equipped with 4 holes on the top endcaps, located at r=R_i+L/2 through which various probes can be immersed in the liquid metals. To measure the velocity field in the gallium layer, two nickel wires, completely insulated except at their conducting tips (noted A and B below) and separated by a distance d=8mm, are immersed in the liquid. The Seebeck coefficient of nickel is denoted S_Ni, and the electrical conductivity and Seebeck coefficient of the liquid metal are denoted σ_Ga and S_Ga.
The electromotive force between points A and B is directly given by Ohm's law integrated over the distance between the wires:
e = ∫_A^B ( -S_Ga∇ T + u× B - j/σ_Ga)· dl
By neglecting the induced currents, the voltage measured by the nano-voltmeter Keysight 34420A connected to the wires is:
e = (S_Ni-S_Ga)(T_A-T_B) + U B_0 d
With (S_Ni-S_Ga)∼ 10 μV.K^-1, the thermoelectric effect between the gallium and nickel wires introduces a velocity error δ U∼ (S_Ni-S_Ga)(T_A-T_B)/(dB_0). For B_0∼ 50 mT and (T_A-T_B)∼ [0.1-1]K, this leads to δ U∼2 cm.s^-1 at most.
This offset is significantly smaller than our measured velocities and in practice has been systematically subtracted using the potential e(B_0=0) measured in the absence of magnetic field.
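For completeness, the bound quoted above follows directly from the stated numbers:

dS = 10e-6            # (S_Ni - S_Ga), V/K
dT = 1.0              # temperature difference between the two tips, K (upper bound)
d, B0 = 8e-3, 50e-3   # tip separation (m) and applied field (T)
print(dS * dT / (d * B0))   # ~0.025 m/s, i.e. ~2 cm/s at most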
As explained in the main text, the measurement of the thermoelectric potential is based on the same technique, except that the two conducting tips are now located at different heights, so the tip of one of the wires is now immersed in the mercury layer. In this case, the magnetic field from the coils is zero, so B_0 reduces to the Earth's magnetic field. In this case, uB_0d∼10^-8V, a value much smaller than the measured voltages, hence leading to the expression given in the main text.
§.§ Numerical modeling
The equation (<ref>) has been numerically integrated in an axisymmetric cylindrical geometry using the same dimensions as the experiment and the physical properties of gallium and mercury. Specifically, we integrate the curl of the equation, so that it becomes a modified Poisson equation for the azimuthal magnetic field B(r,z) :
∇^2 B=1/η∇ S×∇ T - ∂_zB∂_zη/η
where η=1/(μ_0σ) is the magnetic diffusivity. This equation is solved by a Finite Difference Method using a 2^nd order numerical scheme with the central difference in space. The magnetic field is set to zero at the boundaries to model an insulating vessel.
The interface between the two layers is modeled by taking η(z)=η_Hg-(η_Hg-η_Ga)(1+tanh(z/z_i))/2 and S(z)=S_Hg-(S_Hg-S_Ga)(1+tanh(z/z_i))/2 where z_i is the typical thickness of the effective interface, taken as small as possible and fixed at 2 mm in the results reported here. The temperature depends only on r and is taken either as the conductive solution in cylindrical geometry T(r)=Aln r +B (using the same boundary temperatures T(r_i) and T(r_o) as the experimental temperatures measured in the cylinders) or as a piecewise constant temperature gradient. In the latter case, we used the idealized profile shown in red in Fig.<ref>, using the four temperature values given by the experimental data. The typical thickness of the boundary layer is set at 3 mm. The resolution of the simulations reported in the main text is Nr× Nz=300×300.
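A stripped-down illustration of such an integration is sketched below. It is not the solver used for the figures: the Laplacian is taken as the scalar axisymmetric operator, a plain Jacobi relaxation replaces the second-order scheme described above, and the piecewise-linear T(r), boundary-layer width, grid size and iteration count are illustrative choices.

import numpy as np

mu0 = 4 * np.pi * 1e-7
sigma_hg, sigma_ga = 1.1e6, 3.87e6
S_hg, S_ga = -6.5e-6, 0.5e-6
eta_hg, eta_ga = 1.0 / (mu0 * sigma_hg), 1.0 / (mu0 * sigma_ga)

# Grid: r in [R_i, R_o], z in [-h/2, +h/2]; mercury below, gallium above.
Nr, Nz = 150, 150
r = np.linspace(0.037, 0.100, Nr)
z = np.linspace(-0.025, 0.025, Nz)
dr, dz = r[1] - r[0], z[1] - z[0]
R, Z = np.meshgrid(r, z, indexing="ij")

z_i = 2e-3                                              # effective interface thickness
eta = eta_hg - (eta_hg - eta_ga) * (1 + np.tanh(Z / z_i)) / 2
S = S_hg - (S_hg - S_ga) * (1 + np.tanh(Z / z_i)) / 2

# Piecewise-linear T(r): steep boundary layers of width d_bl, gentle bulk gradient.
T_h, T_c, dT_bulk, d_bl = 82.0, 45.0, 8.0, 3e-3
T = np.interp(r, [r[0], r[0] + d_bl, r[-1] - d_bl, r[-1]],
                 [T_h, (T_h + T_c + dT_bulk) / 2, (T_h + T_c - dT_bulk) / 2, T_c])
dTdr = np.gradient(T, r)[:, None] * np.ones_like(Z)

dSdz = np.gradient(S, z, axis=1)
detadz = np.gradient(eta, z, axis=1)
src = dSdz * dTdr / eta                                 # (1/eta) (grad S x grad T)_phi

B = np.zeros((Nr, Nz))                                  # B = 0 on the insulating walls
for _ in range(30000):                                  # plain Jacobi: simple but slow to converge
    Bc = B.copy()
    B[1:-1, 1:-1] = (
        (Bc[2:, 1:-1] + Bc[:-2, 1:-1]) / dr**2
        + (Bc[2:, 1:-1] - Bc[:-2, 1:-1]) / (2 * R[1:-1, 1:-1] * dr)
        + (Bc[1:-1, 2:] + Bc[1:-1, :-2]) / dz**2
        + detadz[1:-1, 1:-1] / eta[1:-1, 1:-1] * (Bc[1:-1, 2:] - Bc[1:-1, :-2]) / (2 * dz)
        - src[1:-1, 1:-1]
    ) / (2 / dr**2 + 2 / dz**2)

j_r = -np.gradient(B, z, axis=1) / mu0                  # Ampere's law: mu0 j_r = -dB/dz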
We are grateful to L. Bonnet, N. Garroum, A. Leclercq, and P. Pace for their technical support and we thank S. Ismael and M. Sardin for machining the experiment. We also thank Martin Caelen, Basile Gallet and Francois Petrelis for their insightful discussions.
CG acknowledges financial support from the French program JCJC managed by Agence Nationale de la Recherche (Grant ANR-19-CE30-0025-01) and the Institut Universitaire de France.
Supporting Information
§ SIDEWALL CONVECTION
The presence of a horizontal temperature gradient naturally leads to sidewall convection which appears at non-zero Δ T_0. The Rayleigh number Ra = α g Δ T_0 Δ R^3/κν where α is the thermal expansion coefficient, g the gravitational acceleration, Δ T_0 the temperature difference between the cylinders, Δ R = R_o - R_i, κ the thermal diffusivity and ν the kinematic viscosity. For liquid Gallium, α = 5.5· 10^-5 K^-1, κ = 1.3· 10^-5 m^2.s^-1, ν=3.18· 10^-7 m^2.s^-1. The Rayleigh number for Δ T_0 ∼ 2-37 K is Ra_Ga∼ 5.7· 10^3 - 1.06· 10^5. For liquid Mercury, α = 1.83· 10^-4 K^-1, κ = 4.9· 10^-6 m^2.s^-1, ν=1.49· 10^-7 m^2.s^-1. The Rayleigh number for Δ T_0 ∼ 2-37 K is Ra_Hg∼ 1.08· 10^5 - 20.03· 10^5.
§ ANALYTICAL MODEL
We derive here a simple analytical model describing the generation of a thermoelectric current, the corresponding magnetic field, and electric potential, in a rectangular domain made of two dissimilar metals. The two electrically conducting regions, denoted by the indices '+' or '-', have electrical conductivity σ^± and Seebeck coefficient (or thermoelectric power) S^±. Both are supposed independent of temperature. A horizontal thermal gradient of arbitrary shape is applied across the two metals, which are separated by an electrically conducting interface located at z=0.
In the absence of a velocity field u and in the presence of a thermal gradient, Ohm's law reads:
j/σ = E - S∇T,
where j is the electric current density, σ is the electrical conductivity, E is the electric field, S is the Seebeck coefficient and T is the temperature field.
In the following we will use the magnetostatic approximation, relatively well satisfied here: in liquid metal, the magnetic field generally evolves on time scales much shorter than those of all the other variables such as the temperature or the velocity field. This is summed up by the dimensionless number ζ = μ_0σκ, with μ_0 the vacuum magnetic permeability. ζ is the ratio of the magnetic evolution time scale (due to diffusion) to the temperature evolution time scale due to thermal diffusion. The presence of convection implies that the temperature can evolve on a time scale faster than Δ R^2/κ, such as the eddy turnover time Δ R/U_ff, with U_ff being a typical velocity scale due to convection such as the free-fall velocity U_ff∼√(αΔ T_0 gh). In that case, Rm=μ_0σ U_ffΔ R must also be small to fulfill the quasi-static approximation. In the present experiment, both ζ≪ 1 and Rm≪ 1 hold, ensuring that the evolution of the magnetic field produced by thermoelectricity follows adiabatically the evolution of temperature.
In the magnetostatic approximation and for steady state, the Maxwell-Faraday equation reads ∇×E = 0. For each layer, the electric field can then be decomposed as follows, E = -∇V^± where V^± is the electric potential in each subdomain.
Taking the curl of the Ohm's law (<ref>) in each subdomain:
∇×(j^±/σ^±) = -∇×(S∇T)=∇ S×∇ T
Because S(T) is a function of temperature only, ∇ S×∇ T=0. With the assumption that the electrical conductivity is constant in each domain, we get :
j^±=-σ^±∇ϕ^±
The charge conservation, in the magnetostatic approximation, implies ∇·j^± = 0. Therefore, in each domain, ϕ^± fulfills a Laplace equation ∇^2ϕ^± = 0. The boundary conditions for the current are prescribed by charge conservation:
j_x^±(x=0,z) = j_x^±(x=d,z) = 0,
j_z^+(x,z=h/2) = j_z^-(x,z=-h/2) = 0,
j_z^+(x,z=0^+) = j_z^-(x,z=0^-)
These boundary conditions can be translated for ϕ^± as:
∂_xϕ^±(x=0,z) = ∂_xϕ^±(x=d,z) = 0,
∂_zϕ^+(x,z=h/2) = ∂_zϕ^-(x,z=-h/2) = 0,
σ^+∂_zϕ^+(x,z=0^+) = σ^-∂_zϕ^-(x,z=0^-)
The quantity ϕ^± can then be obtained as a decomposition over the eigenfunctions of the Laplacian. It is clear that sin(nπ x/d), with n∈ℕ, fulfill the boundary conditions for ∂_xϕ^±, thus
ϕ^± = ∑_n cos(nπ x/d)g_n^±(z).
As ϕ^± respects a Laplace equation, it is easy to check that g_n^±(z)= a_n^±cosh(κ_n z) + b_n^±sinh(κ_n z) with κ_n = nπ /d for simplicity. The boundary conditions at z = ± h/2 then implies:
dg_n^±/dz (z=± h/2) = κ_n a_n^±sinh(±κ_n h/2) + κ_n b_n^±cosh(±κ_n h/2) = 0,
which is a constraint on the coefficients since b_n^± = ∓tanh(κ_n h/2) a_n^±. Injected in ϕ^±, it gives:
ϕ^± = ∑_n a_n^±cos(κ_n x)(cosh(κ_n z) ∓tanh(κ_n h/2)sinh(κ_n z)).
Finally, the boundary condition at z=0 for ϕ^± links the coefficients a_n^+ and a_n^-. Indeed, it is easy to check that a_n^- = -σ^+ a_n^+/σ^-.
The continuity of the electric potential at the interface between the two conductors gives:
V^+(x,z=0^+) - V^-(x,z=0^-) = 0,
Using the Ohm's law ∇ V^±=∇ (ϕ^±-S^± T) where S is considered constant in each phase, the previous expression can be recast in terms of ϕ^±:
ϕ^+(x,z=0^+) - ϕ^-(x,z=0^-) = Δ S T(x,0),
with Δ S = S^+ - S^-. Injecting the expression of ϕ^+ and ϕ^- gives:
∑_n a_n^+ σ^+ + σ^-/σ^-cos(κ_n x) = Δ S T(x,z=0),
multiplying this expression by cos(κ_m x) and integrating over the interval [0,d] enables to obtain the expression of a_n^+ (where the orthogonality relation for trigonometric function has been used):
a_n^+ = K_n σ^- Δ S/ d (σ^+ + σ^-)∫_0^d T(x,0)cos(κ_n x)dx.
with K_n = 1 if n=0 and K_n = 2 otherwise. Finally, this gives the potential:
ϕ^± = ±∑_n K_n σ^∓Δ S/d (σ^+ + σ^-)cos(κ_n x)(cosh(κ_n z) ∓tanh(κ_n h/2)sinh(κ_n z))∫_0^d T(x,0)cos(κ_n x)dx.
The potential ϕ which prescribes the thermoelectric current distribution is therefore completely determined by the temperature profile at the interface. The computation of j^± and B which is given by Maxwell-Ampère law's ∇×B = μ_0j, is straightforward:
j_x^± = ±∑_n K_n σ̃Δ S κ_n/ d sin(κ_n x)(cosh(κ_n z) ∓tanh(κ_n h/2)sinh(κ_n z)) I_n(T),
j_z^± = ∓∑_n K_n σ̃Δ S κ_n/ d cos(κ_n x)(sinh(κ_n z) ∓tanh(κ_n h/2)cosh(κ_n z)) I_n(T),
with σ̃ = σ^+σ^-/(σ^+ + σ^-) and I_n(T) = ∫_0^d T(x,0)cos(κ_n x)dx. The important point of this result is the fact that any variation of the temperature along z will be supported by V keeping ϕ, j, and B unchanged. The component of the magnetic field produced by the thermoelectric effect is orthogonal to the plane (x,z), B_y simply denoted B and is:
B^± = ∓∑_n K_n μ_0 σ̃Δ S / dsin(κ_n x)(sinh(κ_n z) ∓tanh(κ_n h/2)cosh(κ_n z)) I_n(T),
We now implement this expression using the geometry and properties of the metals used in the experiment, namely mercury and gallium, h=25 mm, d=60 mm. If the two metals were in a solid state, the temperature profile would be linear with a constant thermal gradient -Δ T_0/d, where Δ T_0 is the thermal gradient applied at the horizontal wall boundaries. Fig. <ref> shows the computed isoline of potential ϕ^± while Fig. <ref> shows a colormap of B for n_max = 400, using the value Δ T_0 = 37K obtained in the experiment at maximum heating power. The black lines correspond to the streamlines of the thermoelectric current. The resolution used to plot the solution is dx=5· 10^-4 d and dz=5· 10^-4 h.
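The series is straightforward to evaluate numerically; the sketch below (ours, with the '+' index arbitrarily assigned to the upper, gallium layer) sums B over the first n_max modes for a linear interface temperature, rewriting the hyperbolic factor in an algebraically equivalent form that avoids cancellation errors at large n.

import numpy as np

mu0 = 4 * np.pi * 1e-7
sigma_ga, sigma_hg = 3.87e6, 1.1e6
sigma_eff = sigma_ga * sigma_hg / (sigma_ga + sigma_hg)
dS = 0.5e-6 - (-6.5e-6)                 # S^+ - S^-, with '+' taken as the gallium layer

d, h = 0.060, 0.025                     # domain width and total height (m)
n_max = 400

x = np.linspace(0.0, d, 800)
z = np.linspace(-h / 2, h / 2, 200)
X, Z = np.meshgrid(x, z, indexing="ij")

# Interface temperature: linear ("solid-like") profile here; replace by a piecewise
# profile with thin boundary layers to reproduce the liquid case.
T_if = 82.0 - 37.0 * x / d

B = np.zeros_like(X)
for n in range(1, n_max + 1):           # the n = 0 mode does not contribute (sin = 0)
    k = n * np.pi / d
    I_n = np.trapz(T_if * np.cos(k * x), x)
    # In both half-spaces the factor (sinh(kz) -/+ tanh(kh/2)cosh(kz)), together with
    # the overall -/+ sign, reduces to sinh(k(h/2 - |z|))/cosh(kh/2); this equivalent
    # form is free of the cancellation the raw expression suffers at large n.
    F = np.sinh(k * (h / 2 - np.abs(Z))) / np.cosh(k * h / 2)
    B += 2.0 * I_n * np.sin(k * X) * F  # K_n = 2 for n >= 1

B *= mu0 * sigma_eff * dS / d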
In the more realistic case of an interface separating two liquid metals, as in the experiment, the temperature profile can be approximated as piecewise linear at the interface. Here again, we use the temperatures obtained in the experiment (the red profile shown in Fig.2 of the main text). The resulting solution is shown in Fig <ref> and Fig <ref>. The results are in excellent agreement with those obtained from the direct numerical simulations reported in the main manuscript, and confirm the existence of intense current loops near the boundaries and a saddle point at the interface.
Fig. <ref> shows the horizontal component of the thermoelectric current at z=+0.5mm for the two cases studied. Far enough from the vertical walls, a good estimate of j_x in the solid case is σΔ SΔ T_0/d while for the liquid case, σΔ SΔ T_B/d provides the correct estimate, in agreement with numerical predictions.
This agreement between theoretical predictions and numerical results confirms that the geometry of thermoelectric currents and magnetic field strength are controlled by the temperature profile at the interface, σ and Δ S. This also confirms that the liquid nature of the interface, which produces a complex non-linear temperature profile, can generate a non-trivial distribution of thermoelectric currents, particularly near the thermal boundaries.
|
http://arxiv.org/abs/2409.03304v1 | 20240905071955 | Inhomogeneous hysteresis in local STM tunnel conductance with gate-voltage in single-layer MoS$_2$ on SiO$_2$ | [
"Santu Prasad Jana",
"Suraina Gupta",
"Anjan Kumar Gupta"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India
[email protected]
September 9, 2024
§ ABSTRACT
Randomly distributed traps at the MoS_2/SiO_2 interface result in non-ideal transport behavior, including hysteresis in MoS_2/SiO_2 field effect transistors (FETs). Thus traps are mostly detrimental to the FET performance but they also offer some application potential. Our STM/S measurements on atomically resolved few-layer and single-layer MoS_2 on SiO_2 show n-doped behavior with the expected band gap close to 1.4 and 2.0 eV, respectively. The local tunnel conductance with gate-voltage V_ g sweep exhibits a turn-on/off at a threshold V_ g at which the tip's Fermi-energy nearly coincides with the local conduction band minimum. This threshold value is found to depend on V_ g sweep direction amounting to local hysteresis. The hysteresis is, expectedly, found to depend on both the extent and rate of V_ g-sweep. Further, the spatial variation in the local V_ g threshold and the details of tunnel conductance Vs V_ g behavior indicate inhomogeneities in both the traps' density and their energy distribution. The latter even leads to the pinning of the local Fermi energy in some regions. Further, some rare locations exhibit a p-doping with both p and n-type V_ g-thresholds in local conductance and an unusual hysteresis.
Keywords: TMDs, MoS_2, Interface Traps, Hysteresis, Scanning Tunneling Spectroscopy.
Inhomogeneous hysteresis in local STM tunnel conductance with gate-voltage in single-layer MoS_2 on SiO_2
Santu Prasad Jana, Suraina Gupta & Anjan Kumar Gupta
Received 16 July 2024; accepted 04 September 2024
=========================================================================================================
§ INTRODUCTION
In recent years, transition metal dichalcogenides (TMDs) have gained significant interest for next-generation semiconductor devices <cit.>. Molybdenum disulfide (MoS_2) is a promising TMD with a direct bandgap of ∼1.9 eV in monolayer form and an indirect bandgap of ∼1.3 eV in bulk form <cit.>, making it an excellent candidate for transistor <cit.>, logic <cit.>, high-frequency <cit.>, circuit integration <cit.> and optoelectronic <cit.> applications. However, MoS_2 single layer exhibits decreased charge carrier mobility compared to graphene due to its poor dielectric screening, inherent structural defects, and quantum confinement effects <cit.>. Interface traps between atomically thin MoS_2 and dielectric substrate and intrinsic defects are prevalent issues that impact electrical transport characteristics. For instance, a persistent photoconductivity is observed in MoS_2 photo-transistors <cit.> due to the photo-charge trapping by intrinsic localized band-tail states and extrinsic interface trap states.
The traps are also responsible for the observed hysteresis in the transfer characteristics, i.e. drain-source current I_ DS Vs gate voltage V_ g, of MoS_2 field effect transistors (FETs). This has been studied by several research groups <cit.> and as a function of gate bias stress, gate sweeping range, sweeping time, sweeping direction and loading history in different conditions such as high and low temperature, vacuum and different gas environments. Hysteresis reported in single and multi-layer MoS_2 has been ascribed to several possibilities including the absorption of water and gas molecules on top of MoS_2 <cit.>, charge traps at the metal-semiconductor interface of the contacts <cit.>, the intrinsic structural defects <cit.>, the extrinsic traps at the MoS_2/SiO_2 interface <cit.>, oxide traps close to MoS_2/SiO_2 interface <cit.>, oxide traps close to p^ + Si/SiO_2 interface <cit.> and mobile ions (Na^ + and K^ +) in the oxide <cit.>. In bulk transport, the threshold V_ g value at which the channel conduction starts/stops and its change between the two opposite V_ g-sweep directions give an averaged information about the traps that are actually non-uniformly distributed. Traps also have potential in memory devices as some recent work illustrates <cit.>.
In the absence of traps, the oxide capacitance and the MoS_2 quantum capacitance dictate the carrier density of the MoS_2 channel <cit.> in response to V_ g. The presence of the interface charged-traps, lead to a spatially inhomogeneous electrostatic channel potential that dictates the local charge density and overall carrier mobility. A non-uniform density and energy-distribution of trap states lead to a spatially inhomogeneous, and V_ g dependent, screening of the gate electric field. This can cause a non-uniform change in carrier density and unpredictable changes in mobility with V_ g. For instance, a local peak in traps' density of states can pin the MoS_2 Fermi energy keeping the carrier density unchanged over significant V_ g range.
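As a point of reference for the trap-free case, the gate-induced carrier density can be estimated from the oxide capacitance alone (a textbook parallel-plate estimate, neglecting the quantum capacitance and any trapped charge); for the 300 nm SiO_2 used here it gives roughly 7× 10^10 electrons cm^-2 per volt of V_ g:

eps0, eps_r, t_ox = 8.85e-12, 3.9, 300e-9   # F/m, SiO2 relative permittivity, oxide thickness (m)
e = 1.602e-19
C_ox = eps0 * eps_r / t_ox                  # ~1.15e-4 F/m^2
print(C_ox / e * 1e-4)                      # ~7.2e10 carriers per cm^2 per volt of V_g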
In this paper, we report on hysteresis and its inhomogeneities with gate sweep in the local tunnel conductance of atomically resolved single and few-layer MoS_2 on SiO_2 by using a scanning tunneling microscope (STM). The measured tunnel conductance spectra, i.e. dI/dV_ b Vs bias voltage V_ b, show the expected band-gap while the dI/dV_ b Vs V_ g curves, at a fixed V_ b, exhibit spatial inhomogeneities including that in V_ g-threshold. This threshold is found to differ during forward and backward sweeps of V_ g with this difference being dependent on the extent and rate of V_ g-sweep. A rare p-doped region, confirmed from spectra, is found to show unusual hysteresis. These findings are discussed using traps with an inhomogeneous density and energy distribution.
§ EXPERIMENTAL DETAILS
§.§ MoS_2 Exfoliation, characterization and device fabrication
Single and few-layer MoS_2 flakes were mechanically exfoliated from natural bulk crystal (from SPI) using a Scotch tape <cit.> and transferred on SiO_2/Si substrate by dry transfer technique <cit.> employing a PDMS membrane (Gel film from Gel Pak) as a viscoelastic stamp under an optical microscope. Acetone/IPA cleaned, highly doped Si acting as back gate with 300 nm thermal SiO_2 is used as substrate. The number of MoS_2 layers on SiO_2/Si substrate was determined by optical microscope and Raman spectroscopy.
Figure <ref>(b) shows a separation between the E^ 1_ 2g and A_ 1g Raman peaks as 18.5 cm^ -1, which is comparable to that reported <cit.> in single-layer MoS_2.
A clean surface, free of resist and wet chemicals' residue, is required for STM/S studies. Therefore, we use mechanical masking to make 50 nm thick gold film contacts by placing and aligning the MoS_2 flake underneath a 25 μm diameter tungsten wire using an optical microscope and then depositing Au. This is followed by a second wire alignment, nearly perpendicular to the previous, and Au deposition to surround MoS_2 with Au from all sides as shown in figure <ref>(a). This helps in aligning the STM tip on MoS_2 under an optical microscope. The contact resistance was estimated to be about 5 kΩ-μm^2 from separate multi-probe transport measurements on samples made by e-beam lithography.
§.§ STM measurement details
The STM/S measurements were done in a homemade room temperature STM in a cryo-pumped vacuum better than 10^ -4 mbar. Electrochemically etched and HF cleaned tungsten wire was used as STM tip with an apex radius between 20 and 50 nm as confirmed by scanning electron microscope. Figure <ref>(b) depicts the STM/S measurement schematic with the bias voltage V_ b applied to MoS_2 while the tip stays at virtual ground potential. Thus, at a positive sample bias electrons tunnel from the tip's filled-states to the vacant-states of MoS_2. The gate voltage V_ g is applied to the heavily doped Si-substrate with a 10 kΩ series resistance. Before doing the STM/S measurements on MoS_2, we have taken atomic resolution images on HOPG sample to ensure good tip conditions.
To acquire the tunnel conductance dI/dV_ b spectra, an AC modulation voltage of 20 mV and 2731 Hz frequency was added to the DC bias voltage and the in-phase component of the AC tunnel current was measured using a Lock-in amplifier. The dI/dV_ b curves presented here are averages of twelve such curves unless stated otherwise. For the local dI/dV_ b measurements, the feedback loop is turned off to keep a fixed tip-sample separation. A standby gate voltage V_ g=30 V is also used to ensure sufficient bulk conductivity of MoS_2 after the spectra acquisition is stopped and before the feedback loop is turned on. This used positive V_ g is above the typical threshold voltage for bulk conduction as found from separate transport measurements.
The STM images were taken in constant current mode and at a V_ b value well above the conduction band edge of MoS_2 and also at a positive V_ g above the bulk conduction threshold. The latter ensures the tunnel current conduction from the tunnel junction to the Au contacts with an in-between voltage drop much less than the applied V_ b. Further, the junction resistance is typically kept as 20 GΩ or higher for imaging which is significantly larger than the typical off-state bulk resistance of about 1 GΩ. The local dI/dV_ b at large negative V_ g values is found to be still measurable when MoS_2 is in the off-state. This is presumably due to the presence of defects and traps that lead to a non-infinite resistance and smaller than tunnel junction resistance. This made it possible to acquire dI/dV_ b data, as discussed later, in a rare p-doped region. Only a very few <cit.> STM studies of MoS_2 on insulating substrates have been reported so far as opposed to those on conducting metals <cit.> presumably due to the complication arising from lack of tunnel current conduction in the off state. The off-state of an ultra-clean and trap-free MoS_2 can be expected to be highly insulating which can make such STM/S studies even more challenging.
§ RESULTS AND DISCUSSIONS
§.§ STM topography and atomic resolution image and local dI/dV_ b spectra
Figure <ref>(a) shows a topographic image of a single-layer MoS_2 on the SiO_2/Si substrate acquired in constant current mode with I=50 pA, V_ b = 2 V and V_ g = +30 V. The observed corrugation of about 1 nm is the same as that of the thermal SiO_2 on Si as reported by AFM measurements <cit.>. This could indicate that single-layer MoS_2 adheres well to the SiO_2 surface, although some of this corrugation can also be attributed to electronic inhomogeneities. We also found that few-layer MoS_2 exhibits a substantially smaller corrugation. A zoomed-in atomic resolution image, see Fig. <ref>(b), taken on a relatively flat region of a few-layer MoS_2, shows a triangular lattice of surface sulfur atoms with a lattice constant of about 0.3 nm, in good agreement with the literature <cit.>.
Figure <ref> shows the dI/dV_ b-V_ b spectra acquired on the single layer and few-layer MoS_2 surfaces at V_ g = 30 V. No hysteresis was found in these spectra with respect to the V_ b-sweep direction. These spectra were acquired several microns away from the metal contacts to avoid their influence. The spectra on both single as well as few-layer MoS_2 exhibit a clear gap. For the used bias configuration, see Fig. <ref>(b), the sharp rise at V_ b>0 in these spectra corresponds to the conduction band minimum (CBM) and the one at V_ b<0 represents the valence band maximum (VBM) with the separation between the two edges representing the band-gap. The marked band-gaps of about 1.4 eV and 2.0 eV for few-layer and single-layer MoS_2, respectively, in figure <ref>, agree with the reported values <cit.>. The V_ b=0 point corresponds to the Fermi energy of MoS_2, which is close to the CBM in figure <ref> indicating that both these few-layer and single layer MoS_2 flakes are n-doped semiconductors. Electron rich sulfur vacancies and other n-type dopant impurities in natural MoS_2 crystals are believed to be responsible for this n-type nature <cit.>.
The observed tunneling band-gap can actually be significantly larger than the actual one due to the tip-induced band bending (TIBB) <cit.>, the term used in the framework of 3D semiconductors. Further, the same effects, if significant, would also lead to hysteresis in dI/dV_ b-V_ b spectra as is the case with respect to V_ g which is discussed later. This can be expected as the local band bending will change the occupancy of the slow traps <cit.> in the vicinity. Lack of such hysteresis and the fact that the observed tunneling band-gap is close to the actual value indicate that effects due to TIBB are insignificant. This can be attributed to the following observations. 1) The occupancy of the traps, arising from defects, changes in response to V_ b change and amount to screening of tip's electric field that would reduce the band-bending. A significant density of states of fast traps ∼ 10^12 eV^-1cm^-2 can be inferred in such devices from the large values of the subthreshold swing <cit.>. A large density of slow traps can also be deduced from the typical hysteresis in the channel transport in such devices <cit.>. 2) The back gate is capacitively coupled to the channel with a capacitance equivalent to a quantum capacitance corresponding to a density of states κϵ_0/(e^2d) with κ as dielectric constant of SiO_2 gate, d as its thickness and ϵ_0 as free space permittivity. This works out to be 7.4×10^10 eV^-1cm^-2. This will also contribute towards reducing the TIBB as some of the electric-field lines due to the tip bias will terminate on the gate rather than the channel. 3) The STM tip size is much smaller than the typical screening length within the 2D-MoS_2. The latter will dictate the channel area over which the TIBB will spread. This will substantially reduce the local TIBB under the STM tip. In contrast, the back-gate voltage in the 2D-FET configuration affects whole of the 2D MoS_2 uniformly leading to significant band-bending. 4) It has been recently argued <cit.> that presence of lateral tunneling inside MoS_2 will also reduce the TIBB.
In figure <ref>, the spectra are seen to be substantially sharper at the VBM than near the CBM for both cases. This can be attributed to the band tail states (BTS) below the CBM. These BTS near the CBM can reduce the effective band gap and severely affect the transport. U-shaped spectra indicating BTS or band edge states near both the VBM and CBM have been commonly seen in Si/SiO_2 and other conventional 3D semiconductors <cit.>. The presence of BTS near the CBM can arise from sulphur vacancies in MoS_2 <cit.>, which are the most common defects. Although a single S vacancy state is expected to be 0.46 eV below the CBM from density functional theory calculations <cit.>, we did not find any peaks inside the band gap in our dI/dV_ b spectra, even in the vicinity of defects inferred from the STM images. The observed continuum of BTS close to the CBM could arise from multiple S vacancies that interact with each other and possibly with other defects.
§.§ Spatial variation of hysteresis in local dI/dV_ b with gate-sweep.
Figure <ref>(a) shows a set of twelve unprocessed individual dI/dV_ b versus V_ g curves at a specific location of single layer MoS_2 at a fixed V_ b=2 V and for forward and reverse V_ g sweeps between -60 and 60 V. The schematics in Figure <ref> illustrate the underlying physics. At V_ b= 2 V, the Fermi level of MoS_2 is 2 eV below that of the tip. The Fermi level of the tip, kept at virtual ground, is used as zero energy reference. As V_ g is increased, at fixed V_ b, from negative to positive during the forward sweep, the bands of MoS_2 will shift downward, see fig. <ref>. At certain threshold V_ g value, namely V_ th, the tip's Fermi level will be within a few k_ BT below the MoS_2 conduction band edge, see Fig. <ref>(d), leading to a sudden rise in dI/dV_ b as the electron start tunneling from the tip to the MoS_2 CB states. In Fig. <ref>, the threshold voltage V_ thf for the forward V_ g sweep is seen to be smaller than V_ thb, i.e threshold for the backward sweep. This amounts to a significant positive local hysteresis. Comparing with the bulk transport in similar and those with passivated interface samples <cit.>, this observed hysteresis is attributed to the local traps at the interface of MoS_2 and SiO_2. When V_ g is increased from negative extreme value, during the forward sweep, the interface traps are positively charged which induces a negative charge on MoS_2. This brings MoS_2 CBM closer to its Fermi energy as compared to when there is no trap charge and thus a smaller V_ g is needed to reach the threshold condition. For the backward sweep the traps will be negatively charged, which makes the threshold higher.
A trap-state is characterized by a potential barrier, which determines how fast it exchanges electrons with the channel <cit.>. The fast traps, which are strongly coupled to the channel, exchange electrons faster than the experimental time scale. These fast-traps act like dopants as their occupancy gets equilibrated fast and is dictated by the Fermi-energy of MoS_2. Further, their local areal density and energy distribution will determine the local V_ th value of the dI/dV_ b-V_ g curves but the fast traps will not contribute to the hysteresis. The extremely slow traps, with a large potential barrier, also do not contribute to the hysteresis. The slow traps that exchange electrons with the channel at a time scale comparable to the V_ g sweep time dominantly contribute to the positive hysteresis.
Assuming only one electron is captured or released per trap, the trap density responsible for the observed hysteresis in figure <ref>(a) can be estimated as: N_ tr= (V_ thb-V_ thf)C_ ox/e≈ 7 ×10^ 11cm^ -2. Here, C_ ox =κϵ_ 0/d≈ 12 nF/cm^ 2 is the capacitance of SiO_2. The few-layer MoS_2 is found to exhibit larger hysteresis and less steep changes in dI/dV_ b-V_ g curves than mono-layer MoS_2 for the same sweep parameters. This is, presumably, due to more defects and inter-layer traps. Note that the interface trap-states do not directly contribute to the tunnel conductance as the direct tunneling matrix element between the tip and interface-trap states will be negligible as compared to that between tip and MoS_2 states. The tunneling electrons' equilibration rate is expected to be much faster than the typical electron transfer rate between MoS_2 and the trap states. Thus the interface-traps affect the local dI/dV_ b only through the filling or the band-shift of the MoS_2 channel.
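The arithmetic behind these estimates is straightforward to reproduce. The short Python sketch below recomputes C_ ox, the equivalent gate density of states quoted in the TIBB discussion, and the trap density for a few threshold shifts; the specific Δ V_ th values used in the examples are illustrative assumptions chosen to reproduce the densities quoted in the text.

# Back-of-the-envelope numbers for the SiO2 gate and the interface-trap density.
e = 1.602e-19        # elementary charge (C)
eps0 = 8.854e-12     # vacuum permittivity (F/m)
kappa = 3.9          # dielectric constant of thermal SiO2
d = 300e-9           # oxide thickness (m)

# Oxide capacitance per unit area and the equivalent gate density of states
C_ox = kappa * eps0 / d                     # F/m^2
print(C_ox * 1e9 * 1e-4)                    # ~11.5 nF/cm^2 (quoted as ~12 nF/cm^2)
print(C_ox / e * 1e-4)                      # kappa*eps0/(e^2 d) ~ 7e10 eV^-1 cm^-2

# One electron captured/released per trap: N_tr = Delta_Vth * C_ox / e (in cm^-2)
def trap_density(delta_vth_volts):
    return delta_vth_volts * C_ox / e * 1e-4

print(trap_density(9.3))     # ~7e11 cm^-2 (Delta_Vth ~ 9 V assumed for the curves shown)
print(trap_density(14.0))    # ~1e12 cm^-2, of the order quoted later for the +/-80 V sweep
print(trap_density(0.9))     # ~7e10 cm^-2, of the order quoted later for the +/-20 V sweep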
Inhomogeneities were observed on different length scales: over areas of order 100×100 nm^ 2 the curves retain the same character but show variations in V_ th, while over larger scales the character of the curves itself changes. Figure <ref>(b) displays three typical dI/dV_ b-V_ g curves from different regions, more than 100 nm apart. Other than the variation in V_ th and Δ V_ th=V_ thb-V_ thf, the details of the dI/dV_ b spectra change markedly for the same V_ g sweep parameters. These regions do not show any correlation with the topography images. Figures <ref>(c) and (d) show representative dI/dV_ b-V_ g curves from regions `1' and `3', respectively, taken at points 20 nm apart along a straight line. Region `1' exhibits a sharper turn-on than region `3', while region `2' shows a plateau-like feature in the backward V_ g sweep. The forward-sweep V_ th values of these regions differ by more than 18 V, which is more than the variation within a single region. This implies a variation in trap density of ≈ 1.3×10^ 12 cm^ -2 over large scales.
Figure <ref> illustrates how the bands shift relative to the tip Fermi energy when V_ g is increased. Note that we use a convention where the MoS_2 Fermi energy E_ F and that of the tip, i.e. E_ F+eV_ b, are kept fixed. The MoS_2 bands and the trap-states (not shown in Fig. <ref>) shift together downward in response to a V_ g increase. This dictates the Fermi energy of the combined trap and MoS_2-band system through the charge induced on this system by back-gate. Thus the trap states effectively screen the gate electric field and slow down the shift of the bands in response to V_ g. The dI/dV_ b value at a given V_ g reflects the thermally smeared density states of MoS_2 at energy eV_ b above E_ F. The traps that contribute to hysteresis are the slow ones but both, the fast and slow, will contribute to the shift of the bands and thus to the details of dI/dV_ b-V_ g curves.
In region '1', a sharp rise in dI/dV_ b at the threshold and near saturation afterwards implies a sharp rise or a peak in traps' density of states. When the Fermi energy of MoS_2 and traps is well above this peak the bands and the traps states move down fast, in response to V_ g increase. This leads to a sharp rise in dI/dV_ b, until the traps' peak reaches E_ F where the bands stop shifting with V_ g and dI/dV_ b stays nearly constant. The dI/dV_ b can also decrease in case of the delayed reaction of the slow traps which can cause an upward shift with time of the bands with a fixed, or even with an increasing, V_ g. In region '3', on the other hand, the rise in dI/dV_ b is slow and without any saturation indicating a relatively constant trap density of states leading to a slow downward shift in bands with increasing V_ g. In region '2' the plateau-like feature can arise from a small peak in the trap density of states that checks the shift of bands over a range of V_ g. Furthermore, traps with energies within a few k_ BT of E_ F can randomly exchange electrons with the channel leading to the generation-recombination (g-r) noise <cit.> in dI/dV_ b, see figure <ref>(b).
An inhomogeneous distribution of traps can be understood as arising from various non-uniform distribution of defects. These defects include amorphous SiO_2 surface defects, surface dangling bonds, surface absorbers, immobile ionic charges, and foreign impurities adsorbed on the surface. Two major SiO_2 surface structures have long been recognized: a high-polarization silanol group (Si-OH) and a weak-polarization siloxane group (Si-O-Si) <cit.>. Negatively charged silanol groups will induce a positive charge on MoS_2 and increase local V_ th. H^ + ions of water molecules hydrogen bond with the OH^ - of the silanol while exposing O^ 2- to MoS_2, which can exchange electron with MoS_2 and alter V_ th. Water molecules on MoS_2 cannot be completely removed by pumping in a vacuum at room temperature even for extended periods <cit.>. The weak polarization siloxane groups (Si-O-Si) lead to hydrophobic nature <cit.> as no hydrogen bonding with water molecule is possible. The MoS_2 in such regions will exhibit a lower V_ th as compared to that on the silanol surface.
In addition, foreign impurity atoms present on the SiO_2/MoS_2 interface can also act as traps. A shallow donor trap state is produced just below the CBM of the MoS_2 when a foreign impurity atom is absorbed on the siloxane surface <cit.>. Even at low temperatures and with a low activation energy, it can transfer an electron to the MoS_2 conduction band and affect V_ th. In addition, intrinsic defects like sulphur vacancies, as discussed earlier, also affect V_ th. These are more likely to act as fast traps as these will be well coupled to MoS_2.
§.§ Variation of hysteresis with gate voltage sweep range and sweep rate.
Figure <ref>(a) shows the dI/dV_ b-V_ g curves for different V_ g sweep ranges, varying from ±20 V, i.e. Δ V_ g=40 V, to ±80 V, i.e. Δ V_ g=160 V. Negligible hysteresis is seen for the ±20 V sweep range, and the hysteresis, quantified by Δ V_ th=V_ thb-V_ thf, increases non-linearly with Δ V_ g, see the inset of figure <ref>(a). This is consistent with the bulk transport in similar devices <cit.>. For smaller Δ V_ g, the common Fermi energy of MoS_2 and the traps changes by a smaller amount, and thus slow traps in a relatively narrower energy range change their charge state as compared to larger Δ V_ g. We can estimate the number of traps that change their charge state using Δ V_ th, as discussed earlier. Thus, for Δ V_ g=160 V, 1.05× 10^ 12 cm^-2 traps change their charge state, as compared to 6.58× 10^ 10 cm^-2 at Δ V_ g=40 V in figure <ref>(a).
Figure <ref>(b) shows the change in the hysteresis curves with V_ g sweep rate for fixed Δ V_ g and other parameters. At a lower sweep rate, more slow traps are able to change their charge state. This makes the hysteresis increase with reducing V_ g-sweep rate. A larger sweep rate also increases the maximum value of the tunnel conductance, as the reduced traps' participation leads to a higher change in the channel Fermi energy or carrier density. Also, the slowly acquired dI/dV_ b-V_ g curves contain more generation-recombination noise, see Fig. <ref>(b), as more traps capture and release electrons randomly during the slow sweep.
§.§ Observation of a rare hole-doped nature in local tunnel conductance
Figure <ref>(a) shows a tunnel spectrum, i.e. dI/dV_ b-V_ b, at a certain location with the bias voltage swept at a fixed V_ g = 30 V. This shows the expected band gap but more band tail states near both the VBM and CBM, with the MoS_2 Fermi energy close to the valence band edge. This indicates a local p-type doping as opposed to the more common n-type. If the underlying SiO_2 is defect-free, it also has no influence on the local conductance of MoS_2 since its valence and conduction bands are far apart from those of MoS_2 <cit.>. No charge transfer occurs between SiO_2 and MoS_2 due to the high hole and electron barriers. Hence, the observed n- or p-type local conductance of MoS_2 on SiO_2 must arise from defects, impurities and local disorder at the MoS_2-SiO_2 interface. Sulfur-rich or molybdenum-deficient regions can also make it locally p-type. The traps may also give rise to this behavior depending on their energy relative to the MoS_2 bands. Thermally created undercoordinated oxygen atoms (Si-O*) on the silanol (Si-OH) terminated surface can serve as electron traps leading to p-doping in MoS_2. For such a defect it is reported that the empty acceptor state is generated at 0.9 eV above the SiO_2 VBM <cit.>.
Figure <ref>(b) shows dI/dV_ b-V_ g curves at the same location with a rather unusual hysteresis and with access to both electron- and hole-doped regimes through the V_ g change. During the forward sweep, the local conductance becomes non-zero at V_ g = 10 V and increases till V_ g=60 V; it then quickly drops to zero at the beginning of the backward sweep, followed by a p-type threshold at around V_ g =-10 V. On further decreasing V_ g, the conductance increases till V_ g = -35 V and then decreases again, reaching zero at -50 V and staying at zero at the beginning of the forward sweep of V_ g. The non-monotonic change in dI/dV_ b, and particularly the maximum near V_ g=-32 V, can arise from the delayed reaction of the slow traps with energy near the VBM, which depletes the hole-type carriers from the channel.
Thus in this region of MoS_2, all five scenarios of Fig. <ref> seem to be accessible, presumably, due to a lower overall trap density leading to a wide variation of E_ F with V_ g spanning both CBM and VBM. This is more likely from much lower fast-trap density as the hysteresis, arising due to the slow traps, is still quite significant. Also the observation of a finite dI/dV_ b at a negative V_ g implies that the bulk of the MoS_2 in its insulating state is not too insulating to forbid the transport of tunnel current.
§ SUMMARY AND CONCLUSIONS
Figure <ref> summarizes how the MoS_2 bands shift in response to V_ g at a given location with a certain trap charge density. The local traps' charge configuration determines the local charge density σ_ tr arising from traps and thus the local band shift of MoS_2. This is illustrated in the schematic in figure <ref>. The overall electrochemical potential stays constant throughout the MoS_2 channel, while the local filling, or electrostatic potential, or Fermi energy is inhomogeneous due to variations in the local σ_ tr. One can thus have locally n- or p-doped regions, as seen in this STM study. These schematics, however, do not capture the detailed behavior of the traps in terms of their response time and energy distribution. The latter will affect the hysteresis in the local dI/dV_ b-V_ g curves as well as the detailed V_ g dependence.
In conclusion, our STM/S investigation of the single-layer MoS_2 surface shows hysteresis in the local tunnel conductance with gate sweep at a constant sample bias voltage. The Δ V_ th, V_ th and the shapes of the dI/dV_ b-V_ b and dI/dV_ b-V_ g curves also show inhomogeneities due to traps. Further, the hysteresis changes with the V_ g sweep range and sweep rate. The energy-dependent interface trap-state density is spatially inhomogeneous and even pins the MoS_2 Fermi energy in some places. Finally, the dependence of the dI/dV_ b-V_ b and dI/dV_ b-V_ g curves on the traps can be a valuable tool for studying traps and defects in 2D materials.
§ ACKNOWLEDGMENTS
The authors acknowledge SERB-DST of the Government of India and IIT Kanpur for financial support.
§ REFERENCES
[TMDs] Zhou Z and Yap Y K 2017 Electronics 6 53.
[direct gap] Mak K F, Lee C, Hone J, Shan J and Heinz T F 2010 Phys. Rev. Lett. 105 136805.
[direct gap1] Radisavljevic B, Radenovic A, Brivio J, Giacomett I V and Kis A 2011 Nat. Nanotechnol. 6 147-150.
[how good] Yoon Y, Ganapathi K and Salahuddin S 2011 Nano Lett. 11 3768-3773.
[logic] Martinez L M, Pinto N J, Naylor C H and Johnson A T C 2016 AIP Adv. 6 125041.
[high frequency] Krasnozhon D, Lembke D, Nyffeler C, Leblebici Y and Kis A 2014 Nano Lett. 14 5905-5911.
[ic] Radisavljevic B, Whitwick M B and Kis A 2011 ACS Nano 5 9934-9938.
[photodetectors] Lopez-Sanchez O, Lembke D, Kayci A, Radenovic M and Kis A 2013 Nat. Nanotechnol. 8 497-501.
[light] Lopez-Sanchez O, Alarcon Llado E, Koman V, Fontcubertai Morral A, Radenovic A and Kis A 2014 ACS Nano 8 3042–8.
[photo] Hao L, Liu Y, Gao W, Han Z, Xue Q, Zeng H, Wu Z, Zhu J and Zhang W 2015 J. Appl. Phys. 117 114502.
[defect] Qiu H, Xu T, Wang Z, Ren W, Nan H, Ni Z, Chen Q, Yuan S, Miao F, Song F, Long G, Shi Y, Sun L, Wang J and Wang X 2014 Nat. Commun. 4 2642.
[defect1] McDonnell S, Addou R, Buie C, Wallace R M and Hinkle C L 2014 ACS Nano 8 2880–2888.
[photoconductivity] Furchi M M, Polyushkin D K, Pospischil A and Mueller T 2014 Nano Lett. 14 6165–70.
[acs nano] Late D J, Liu B, Matte H S S R, Dravid V P and Rao C N R 2012 ACS Nano 6 5635–5641.
[scalling behevior] Li T, Du G, Zhang B and Zeng Z 2014 Appl. Phys. Lett. 105 093107.
[hysteresis inversion] Kaushik N, Mackenzie D M A, Mukherjee B, Goyal N, Boggild P, Thakar K, Petersen D H and Lodha S 2017 npj 2D Mater. Appl. 1 34.
[intrinsic origin] Shu J, Wu G, Guo Y, Liu B, Wei X and Chen Q 2016 Nanoscale 8 3049-3056.
[interface] Guo Y, Wei X, Shu J, Liu B, Yin J, Guan C, Han Y, Gao S and Chen Q 2015 Appl. Phys. Lett. 106 103109.
[cvd1] Zhu W, Low T, Lee Y H, Wang H, Farmer D B, Kong J, Xia F and Avouris P 2014 Nat. Commun. 5 3087.
[oxide traps] Park Y, Baac H W, Heo J and Yoo G 2016 Appl. Phys. Lett. 108 083102.
[oxide traps close to Si] He G, Ramamoorthy H, Kwan C-P, Lee Y-H, Nathawat J, Somphonsane R, Matsunaga M, Higuchi A, Yamanaka T, Aoki N, Gong Y, Zhang X, Vajtai R, Ajayan P M and Bird J P 2016 Nano Lett. 16 6445-6451.
[mobile ions] Bradley K, Cumings J, Star A, Gabriel J-C P and Grner G 2003 Nano Lett. 3 639-641.
[trap-block-trans] Jana S P, Gupta S and Gupta A K 2023 Phys. Rev. B 108 195411.
[trap-memory] Farronato M, Mannocci P, Melegari M, Ricci S, Compagnoni C M and Ielmini D 2023 Adv. Mater. 35 2205381.
[quantum capacitance] Ma N and Jena D 2015 2D Mater. 2 015003.
[scotchtape method] Novoselov K S, Geim A K, Morozov S V, Jiang D, Zhang Y, Dubonos S V, Grigorieva I V and Firsov A A 2004 Science 306 666-669.
[XYZ] Gomez A S, Buscema M, Molenaar R, Singh V, Janssen L, van der Zant H S J and Steele A G 2D Mater. 1 011002.
[optical identification] Gomez A C, Agraït N and Bollinger G R 2010 Appl. Phys. Lett. 96 213116.
[cvd] Liu X, He J, Liu Q, Tang D, Wen J, Liu W, Yu W, Wu J, He Z, Lu Y, Zhu D, Liu W, Cao P, Han S and Ang K-W 2015 J. Appl. Phys. 118 124506.
[mid-gap states] Lu C-P, Li G, Mao J, Wang L-M and Andrei E Y 2014 Nano Lett. 14 4628-4633.
[zhou-STM-TIBB] Zhou X, Kang K, Xie S, Dadgar A, Monahan N R, Zhu X-Y, Park J and Pasupathy A N 2016 Nano Lett. 16 3148-3154.
[gap] Trainer D J, Putilov A V, Di Giorgio C, Saari T, Wang B, Wolak M, Chandrasena R U, Lane C, Chang T, Jeng H, Lin H, Kronast F, Gray A X, Xi X, Nieminen J, Bansil A and Iavarone M 2017 Sci. Rep. 7 40559.
[AFM] Cullen W G, Yamamoto M, Burson K M, Chen J H, Jang C, Li L, Fuhrer M S and Williams E D 2010 Phys. Rev. Lett. 105 215504.
[atomic] Huang Y L, Chen Y, Zhang W, Quek S Y, Chen C, Li L, Hsu W, Chang W, Zheng Y J, Chen W and Wee A T 2015 Nat. Commun. 6 6298.
[atomic1] Lu C I, Butler C J, Huang J-K, Hsing H-H, Chu Y H, Luo C-H, Sun Y-C, Hsu S-H, Yang K-H O, Wei C-M, Li L-J and Lin M-T 2015 Appl. Phys. Lett. 106 181904.
[native defects] Komsa H-P and Krasheninnikov A V 2015 Phys. Rev. B 91 125304.
[channel length] Liu H, Neal A T and Ye P D 2012 ACS Nano 6 8563-9.
[contact] Liu D, Guo Y, Fang L and Robertson J 2013 Appl. Phys. Lett. 103 183113.
[TIBB] Feenstra R M and Stroscio J A 1987 J. Vac. Sci. Technol. B: Microelectronics Processing and Phenomena 5 923-929.
[TIBB1] McEllistrem M, Haase G, Chen D and Hamers R 1993 Phys. Rev. Lett. 70 2471.
[TIBB2] Feenstra R M, Dong Y, Semtsiv M and Masselink W 2007 Nanotechnology 18 044015.
[U] White M and Cricchi J 1972 IEEE Transactions on Electron Devices 19 1280-1288.
[Jana et al] Jana S P, Gupta S and Gupta A K 2023 arXiv:2303.13902.
[chemistry] Iler R K 1979 The Chemistry of Silica (Wiley-Interscience, New York) p. 622.
[chemistry1] Nagashio K, Yamashita T, Nishimura T, Kita K and Toriumi A 2011 J. Appl. Phys. 110 024513.
[p-type] Dolui K, Rungger I and Sanvito S 2013 Phys. Rev. B 87 165402.
[g-r noise] Kirton M J and Uren M J 1989 Adv. Phys. 38 367-468.
[graphene hysteresis] Wang H, Wu Y, Cong C, Shang J and Yu T 2010 ACS Nano 4 7221-8.
|
http://arxiv.org/abs/2409.02182v1 | 20240903180003 | Metal line emission around z<1 galaxies | [
"Rajeshwari Dutta",
"Michele Fumagalli",
"Matteo Fossati",
"Marc Rafelski",
"Mitchell Revalski",
"Fabrizio Arrigoni Battaia",
"Valentina D'Odorico",
"Celine Peroux",
"Laura J. Prichard",
"A. M. Swinbank"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
IUCAA, Postbag 4, Ganeshkind, Pune 411007, India [email protected]
Dipartimento di Fisica G. Occhialini, Università degli Studi di Milano Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
INAF – Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143 Trieste, Italy
INAF - Osservatorio Astronomico di Brera, via Bianchi 46, 23087 Merate (LC), Italy
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching bei München, Germany
Scuola Normale Superiore, P.zza dei Cavalieri, I-56126 Pisa, Italy
IFPU-Institute for Fundamental Physics of the Universe, via Beirut 2, I-34151 Trieste, Italy
European Southern Observatory, Karl-Schwarzschildstrasse 2, D-85748 Garching bei München, Germany
Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388, Marseille, France
Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, UK
R. Dutta et al.
We characterize, for the first time, the average extended emission in multiple lines ([O II], Hβ, and [O III]) around a statistical sample of 560 galaxies at z≈0.25-0.85. By stacking the Multi Unit Spectroscopic Explorer (MUSE) 3D data from two large surveys, the MUSE Analysis of Gas around Galaxies (MAGG) and the MUSE Ultra Deep Field (MUDF), we detect significant [O II] emission out to ≈40 kpc, while Hβ and [O III] emission is detected out to ≈30 kpc. Via comparisons with the nearby average stellar continuum emission, we find that the line emission at 20–30 kpc likely arises from the disk-halo interface. Combining our results with those of our previous study at z≈1, we find that the average [O II] surface brightness increases independently with redshift over z≈0.4–1.3 and with stellar mass over M_*≈10^6-12 M_⊙, which is likely driven by the star formation rate as well as the physical conditions of the gas. By comparing the observed line fluxes with photoionization models, we find that the ionization parameter declines with distance, going from log q (cm s^-1) ≈7.7 at ≤5 kpc to ≈7.3 at 20–30 kpc, which reflects a weaker radiation field in the outer regions of galaxies. The gas-phase metallicity shows no significant variation over 30 kpc, with a metallicity gradient of ≈0.003 dex kpc^-1, which indicates an efficient mixing of metals on these scales. Alternatively, there could be a significant contribution from shocks and diffuse ionized gas to the line emission in the outer regions.
Metal line emission around z<1 galaxies
Rajeshwari Dutta,
1
Michele Fumagalli2,3,
Matteo Fossati2,4,
Marc Rafelski5,6,
Mitchell Revalski5,
Fabrizio Arrigoni Battaia7,
Valentina D'Odorico3,8,9,
Celine Péroux10,11,
Laura J. Prichard5,
A. M. Swinbank12
Received ; accepted
===============================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The majority of the baryons in the Universe are found in the form of gas in the intergalactic medium (IGM) and the circumgalactic medium (CGM) of galaxies. The gas acts as the fuel for the formation of stars in galaxies. The formation of stars leads to the creation of heavy elements or metals. Stellar winds and supernova explosions eject the metals out of stars into the interstellar medium (ISM) and even beyond into the CGM and the IGM. Galaxies thus form and evolve in complex ecosystems that are regulated by a cycle of gas flows <cit.>. This baryon cycle is believed to consist of: metal-poor gas accretion onto galaxies from filaments of the IGM; stellar feedback and active galactic nucleus-driven outflows of metal-enriched gas into the CGM; recycling of gas from the CGM back into the ISM through fountains; and transfer of gas between galaxies through interactions and mergers <cit.>. All these processes leave their imprint on the distribution and physical conditions of gas within and around galaxies, with the relative importance of each process evolving with cosmic time.
Strong rest-frame optical emission lines have long been used to trace the physical conditions of the ionized gas in the ISM. Several diagnostic line ratio diagrams have been developed to probe the chemical and ionization state of the ISM across redshifts <cit.>. Higher-redshift galaxies are found to have higher ionization parameters, harder ionizing spectra, and higher electron densities <cit.>. Furthermore, the metallicity of the gas in the ISM is found to decrease with increasing redshift at a fixed stellar mass <cit.>.
In the nearby Universe, the gas-phase metallicity in galaxies typically exhibits a negative gradient (i.e., the central regions are more metal-rich than the outskirts), which supports an inside-out picture of galaxy evolution <cit.>. Integral field unit (IFU) spectrographs, such as the Multi Unit Spectroscopic Explorer <cit.> and the K-band Multi Object Spectrograph <cit.> on the Very Large Telescope (VLT), and slitless grism spectrographs on the Hubble Space Telescope (HST) have enabled spatially resolved studies of the ISM up to z≈3. Such studies have revealed flatter metallicity gradients in z≳0.5 galaxies with a large scatter and some galaxies even having positive gradients <cit.>. Recently, James Webb Space Telescope (JWST) near-infrared (NIR) IFU observations found flat metallicity gradients in z≈6-8 galaxies <cit.>. The flattening of metallicity gradients could be due to several factors, including the inflow of metal-poor gas into the inner regions, outflows populating the outer regions with metals, recycling of metal-rich gas in the outer regions, and transfer of metals due to galaxy interactions and mergers <cit.>.
The above-cited studies probed the ionized gas within the stellar disks of galaxies (a few kiloparsecs, or a few effective radii). Beyond the stellar disks, galaxies are surrounded by halos of diffuse gas that extend out to a few hundred kiloparsecs (or a few virial radii). This is known as the CGM <cit.>, and it is relatively challenging to probe in emission due to the low gas density but is effectively probed in absorption against a bright background source such as a quasar. Along with the stellar and ISM properties of galaxies, the distribution and physical conditions of gas in the CGM can place crucial constraints on models of galaxy formation and evolution <cit.>. Based on several large observational campaigns carried out to characterize the CGM, we know that the CGM at z≲4 consists of metal-enriched, multiphase gas, which becomes more ionized with increasing distance from galaxies and which is influenced by both galaxy properties and environmental effects <cit.>.
Detecting emission in multiple lines from the diffuse gas in the extended stellar disk, disk-halo interface, and outer halo can provide us with additional and complementary insights into the distribution and physical conditions of the CGM. Sensitive optical IFU spectrographs such as MUSE and the Keck Cosmic Web Imager <cit.> have facilitated detections of the low surface brightness (SB) halo gas at cosmological distances in emission. There have been a few detections of metal line emission extending up to tens of kiloparsecs around both individual galaxies and stacks of galaxies <cit.>.
Recently, by stacking the MUSE data of a sample of ≈600 galaxies, <cit.> detected, for the first time, the average and line emission up to ≈30–40 kpc from a general population of galaxy halos at z≈1. The shallower radial SB profile of up to ≈20 kpc compared to suggests that the resonant emission is affected by dust and radiative transfer effects, while the constant ratio of to over ≈20–40 kpc suggests a non-negligible in situ origin of the extended metal emission. At z<1, the detection of extended emission in multiple lines (e.g., , , , and ) around a few galaxies and quasars has enabled studies of the physical conditions and excitation mechanisms in the gas beyond the stellar disks <cit.>. Such studies help bridge our understanding of the gas within the stellar disks and the gas beyond it in the halo.
Here we extend the stacking analysis of <cit.> to z<1 to investigate the average physical conditions in gas around a statistical sample of galaxies. The observations and the stacking procedure used in this work are explained in Sect. <ref>. The results of the stacked line emission and line ratios around galaxies are presented in Sect. <ref>. The results are summarized and discussed in Sect. <ref>. Throughout this work, we use a Planck 2015 cosmology with H_ 0 = 67.7 Mpc^-1 and Ω_ M = 0.307 <cit.>.
§ OBSERVATIONS AND ANALYSIS
This work is based upon MUSE observations from two large programs on the VLT: (1) MUSE Analysis of Gas around Galaxies <cit.>, which consists of medium-deep, single pointing (1×1 arcmin^2) MUSE observations of 28 fields that are centered on quasars at z≈3.2-4.5; (2) MUSE Ultra Deep Field <cit.>, which consists of very deep MUSE observations of a 1.5×1.2 arcmin^2 region around a pair of quasars at z≈3.2. The MUSE exposure time in the MAGG survey is ≈4 h per field, except for two fields that have deeper exposure time of ≈10 h. The MUSE observations in the MUDF survey amount to a total of ≈143 h, with the exposure time decreasing from the center of the field to the outer regions. The average image quality of the MUSE data is ≈0.7 arcsec full-width at half-maximum (FWHM), which corresponds to ≤5 kpc at z<1.
The observations and galaxy catalogs from these surveys are described in detail in Sect. 2 of <cit.>. We selected all the galaxies in the MAGG and MUDF catalogs that lie within the redshift range z=0.25-0.85. This selection ensured that we got simultaneous coverage of the [O II] λλ3727,3729, Hβ λ4863, and [O III] λλ4960,5008 emission lines in the MUSE spectra. The total sample consists of 560 galaxies, 480 from the MAGG survey, and 80 from the MUDF survey. The total MUSE exposure time of the galaxies used for stacking in this work is ≈6264 h.
To obtain the physical properties of the galaxies such as stellar masses and star formation rates (SFRs), the MUSE spectra and photometry were jointly fit with stellar population synthesis models using the Monte Carlo Spectro-Photometric Fitter <cit.>. For the MUDF galaxy sample, HST optical and NIR photometry in five bands <cit.> and HAWK-I K-band photometry were used in addition to the MUSE spectra and photometry. We refer to Sect. 2 of <cit.> for details of the spectral energy distribution (SED) fitting process. Briefly, mc-spf adopts the <cit.> models at solar metallicity, the <cit.> initial mass function, nebular emission lines from the models of <cit.>, and the dust attenuation law of <cit.>.
The distributions of redshifts and stellar masses of the galaxy sample used for stacking in this work are shown in Fig. <ref>. The median redshift of the sample is ≈0.6. The typical uncertainty in the redshift estimates from MUSE spectra is ≈60 km s^-1. The galaxy sample spans the stellar mass range of ≈10^6-11 M_⊙, with a median stellar mass of ≈10^9 M_⊙. The typical uncertainty in the estimates of stellar mass from SED fitting is ≈0.1–0.2 dex.
We followed the same procedure to stack the MUSE data of the galaxies as described in Sect. 3 of <cit.>. In brief, for each galaxy we first extracted a sub-cube of area 200×200 kpc^2 centered on the galaxy spanning 100 Å in rest-frame wavelength around the emission lines of interest, after subtracting the continuum emission and masking out all the other continuum sources. We performed both mean and median stacking of all the sub-cubes in the rest-frame wavelengths of the galaxies. The results from both the stacks are found to be consistent within 1σ. Here we present the results based on median stacking, which is more robust against outliers. To subtract the continuum, we fit a low-order spline to the continuum emission around the lines for each spaxel in the cube. We repeated the above procedure 100 times, resampling the galaxies with repetition, to obtain the sample variance from the 16^ th and 84^ th percentiles of the bootstrapped sample.
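For clarity, a minimal Python sketch of the stacking and bootstrap step described above is given below; the array name and shape (a stack of continuum-subtracted, rest-frame subcubes on a common 200×200 kpc^2 grid) are illustrative assumptions and not the actual pipeline.

import numpy as np

def median_stack(subcubes, n_boot=100, seed=0):
    """Median-stack subcubes of shape (N_gal, N_wave, N_y, N_x) and
    bootstrap the 16th-84th percentile sample variance."""
    rng = np.random.default_rng(seed)
    stack = np.nanmedian(subcubes, axis=0)
    n_gal = subcubes.shape[0]
    boots = np.empty((n_boot,) + stack.shape)
    for i in range(n_boot):
        idx = rng.integers(0, n_gal, n_gal)   # resample galaxies with repetition
        boots[i] = np.nanmedian(subcubes[idx], axis=0)
    lo, hi = np.nanpercentile(boots, [16, 84], axis=0)
    return stack, lo, hi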
We corrected for the Milky Way extinction following <cit.> and the <cit.> extinction curve. We applied a correction for dust to the fluxes of each galaxy before stacking using the dust extinction derived by mc-spf and the extinction curve of <cit.>. We checked that there is no significant difference in the results if instead we correct for dust after stacking using the median dust extinction of the sample. We note that there likely are radial gradients in the dust attenuation within a galaxy <cit.>, which we do not take into account in this work because the spatially resolved dust attenuation distribution in high-z galaxies is still not well understood, particularly at the larger spatial scales probed in this work. For the median stellar mass of this sample, the dust attenuation is found to be weak and flat on average in the ISM of z≈1 galaxies <cit.>. To correct the Hβ flux for the underlying stellar absorption, we increased the flux by 3% based on the typical average correction found in the literature <cit.>.
To generate pseudo-narrowband (NB) images of the line emission, we summed the stacked cube from -250 to 250 km s^-1 in rest-frame velocity around the Hβ and [O III] lines, and from -250 to 500 km s^-1 around the λ3727 line of the [O II] doublet to cover both lines of the doublet. This velocity range is found to encompass all the emission in the stacked spectra (see Fig. <ref>). For reference, the median virial velocity expected for the host halos of the galaxies in this sample is ≈100 km s^-1. To make a comparison with the line NB images, we generated NB images of the stellar continuum emission by averaging over two velocity windows around ±1500 km s^-1 of each line, with the same velocity width as for the line NB. The line and continuum NB images were used to estimate the average SB radial profiles. We did not take the inclination and orientation of the galaxies into account during stacking because we did not have HST imaging available for the full sample. Therefore, the stacked emission at any projected separation represents the azimuthally averaged emission of the galaxy sample at that separation.
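A schematic version of the pseudo-NB collapse and of the azimuthally averaged SB profile extraction is sketched below; the variable names, pixel scale, and image centring are illustrative assumptions.

import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def narrowband(stack_cube, wave_rest, line_rest, v_lo=-250.0, v_hi=250.0):
    """Collapse the stacked cube over a rest-frame velocity window around a line."""
    vel = (wave_rest - line_rest) / line_rest * C_KMS
    sel = (vel >= v_lo) & (vel <= v_hi)
    dlam = np.median(np.diff(wave_rest))
    return np.nansum(stack_cube[sel], axis=0) * dlam

def sb_profile(nb_image, pix_kpc, r_edges_kpc):
    """Azimuthally averaged surface brightness in annuli of projected radius."""
    ny, nx = nb_image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0) * pix_kpc
    return np.array([np.nanmean(nb_image[(r >= r0) & (r < r1)])
                     for r0, r1 in zip(r_edges_kpc[:-1], r_edges_kpc[1:])])

# e.g. an [O II] pseudo-NB image and its profile in 10 kpc wide annuli:
# nb_oii = narrowband(stack_cube, wave_rest, 3727.0, v_lo=-250.0, v_hi=500.0)
# prof = sb_profile(nb_oii, pix_kpc=1.0, r_edges_kpc=np.arange(0.0, 60.0, 10.0))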
§ RESULTS
§.§ Line emission around galaxies
First we characterized the average emission around z≈0.6 galaxies from the [O II], Hβ, and [O III] lines. We extracted the 1D spectra from the 3D median stacked cubes by summing up the flux over different annular regions. Figure <ref> shows the spectra extracted within 0–5 kpc, 5–10 kpc, 10–20 kpc, 20–30 kpc, and 30–40 kpc around the [O II] λλ3727,3729 doublet, Hβ, and [O III] λ5008 lines. We note that the [O III] λ4960 line, not shown here, is found to have 1/3 the flux of the λ5008 line, as per theoretical expectations. The [O II] line emission is detected out to ≈40 kpc at 3σ, while the Hβ and [O III] emission lines are detected out to ≈30 kpc at 3σ. It can be seen that, as expected, the flux of the emission lines decreases going outward from the center. We fit Gaussian profiles to the four emission lines with the constraint that the velocity centroid and velocity width of all the lines are equal. We fit a double Gaussian profile to the [O II] doublet, allowing the ratio of the line fluxes, λ3729/λ3727, to vary between the high-density limit of 0.35 and the low-density limit of 1.5. We find that the line fluxes decrease by a factor of ≈10–20 going from the central 5 kpc region to the outer 20–30 kpc annular region. In addition to the decrease in flux, the emission lines become broader in the outer regions, with the velocity dispersion (not corrected for line spread function) going from ≈75 km s^-1 within 5 kpc to ≈96 km s^-1 at 20–30 kpc. The [O II] doublet ratio increases from ≈1.3 within 5 kpc to the low-density limit of 1.5 at 30–40 kpc.
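The joint fit described above can be sketched as follows; this is a simplified illustration using scipy rather than the actual fitting code, with vacuum rest wavelengths and illustrative variable names. All lines are forced to share one centroid velocity and one velocity dispersion, the [O II] doublet ratio is bounded between 0.35 and 1.5, and [O III] λ4960 is tied to 1/3 of λ5008.

import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5
LINES = {"oii_3727": 3727.09, "oii_3729": 3729.88, "hbeta": 4862.7,
         "oiii_4960": 4960.3, "oiii_5008": 5008.24}

def gaussian(wave, centre, sigma, flux):
    return flux / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-0.5 * ((wave - centre) / sigma) ** 2)

def model(wave, v0, sig_v, f_3727, ratio, f_hb, f_5008):
    """All lines share one centroid velocity v0 and one dispersion sig_v (km/s)."""
    fluxes = {"oii_3727": f_3727,
              "oii_3729": ratio * f_3727,     # 3729/3727 doublet ratio
              "hbeta": f_hb,
              "oiii_4960": f_5008 / 3.0,      # fixed theoretical [O III] ratio
              "oiii_5008": f_5008}
    out = np.zeros_like(wave)
    for name, lam0 in LINES.items():
        out += gaussian(wave, lam0 * (1.0 + v0 / C_KMS), lam0 * sig_v / C_KMS, fluxes[name])
    return out

# popt, pcov = curve_fit(model, wave, flux, sigma=err, p0=[0., 80., 1., 1., 0.3, 0.5],
#                        bounds=([-300., 10., 0., 0.35, 0., 0.],
#                                [300., 300., np.inf, 1.5, np.inf, np.inf]))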
The top row of Fig. <ref> shows the NB images of the line emission in the median stacked cube. The NB images of the continuum emission around the lines are shown in the second row for comparison. The combined NB image of the [O III] λ4960 and λ5008 lines is shown. The [O II] line emission is the most spatially extended, followed by the [O III] and Hβ emission. To quantify the average radial profile and extent of the emission, the azimuthally averaged SB in annuli of radius 10 kpc, as obtained from the NB images, are shown in Fig. <ref>. The [O II] SB profile is more radially extended than that of the continuum, while the Hβ and [O III] SB profiles are similar to that of the continuum. The continuum emission near the [O II] line extends out to ≈20 kpc at a 3σ SB level of ≈10^-20 erg s^-1 cm^-2 arcsec^-2. This suggests that we are probing the stellar disk or ISM up to ≈20 kpc. Beyond that, at 20–30 kpc, the [O II] emission is significantly enhanced compared to the continuum, while Hβ and [O III] are marginally enhanced. The line emission in this region most likely originates from the disk-halo interface, where the ISM is transitioning into the CGM. In the case of [O II], we detect significant emission out to 30–40 kpc, which likely originates from the CGM.
§.§ Evolution of [O II] emission
Based on a similar stacking analysis of higher-redshift galaxies (z≈0.7-1.5) in the MAGG and MUDF surveys, <cit.> found that the [O II] line emission becomes brighter and more spatially extended with increasing stellar mass and redshift, and suggested that this was likely due to higher SFRs at higher stellar masses and redshifts. To further investigate the evolution of [O II] emission with redshift, we combined our sample with that presented in <cit.>, and conducted two control experiments. First, we formed four subsamples in the redshift ranges, 0.25 ≤ z < 0.5, 0.5 ≤ z < 0.7, 0.7 ≤ z < 1.0, and 1.0 ≤ z < 1.5, which are matched in stellar mass within a factor of two, such that the maximum difference between the cumulative distributions of the stellar mass in each of the redshift bins is ≲0.1 and the p-value is ≳0.8 based on a two-sided Kolmogorov-Smirnov test. In this way, we were able to form a stellar mass-matched sample of 70 galaxies in each of the four redshift bins. Next, similar to above, we formed subsamples in the four redshift bins that are matched within a factor of two in both stellar mass and SFR. This led to a matched sample of 34 galaxies in each redshift bin.
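The matching criterion can be checked with a few lines of code. The sketch below, with illustrative array names, only verifies the Kolmogorov-Smirnov-based criterion for two given subsamples; it does not reproduce the full iterative matching procedure.

import numpy as np
from scipy.stats import ks_2samp

def is_matched(logm_bin1, logm_bin2, p_min=0.8, d_max=0.1):
    """Check that two log stellar-mass distributions satisfy the KS criteria above."""
    stat, pval = ks_2samp(logm_bin1, logm_bin2)
    return (pval >= p_min) and (stat <= d_max)

# e.g. is_matched(logm_z04_bin, logm_z13_bin) for a pair of redshift bins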
In the left panel of Fig. <ref> we plot the average [O II] SB, corrected for cosmological dimming, as a function of redshift, for both the matched samples. The average values are estimated in two regions: within a circular aperture of radius 20 kpc that probes the ISM and within an annular aperture between radii 20 and 30 kpc that likely probes the disk-halo interface. For the inner region, the average SB is found to increase by a factor of ≈20 from z≈0.4 to z≈1.3 in the case of the stellar mass-matched samples, while in the case of the stellar mass- and SFR-matched samples, the dependence of SB on redshift is weaker, with the average SB increasing by a factor of ≈4 from z≈0.4 to z≈1.3. For the outer region, the trend with redshift again becomes weaker for the stellar mass- and SFR-matched samples, although there is no significant difference between the two matched samples. This indicates that the higher SFR of galaxies at higher redshifts is likely a major factor behind the enhanced [O II] emission in the ISM at high redshifts. However, there are likely additional factors, such as gas density, that contribute to the enhanced extended emission at high redshifts. Indeed, studies have found that the density of the warm ionized gas is higher at z≈2-3 compared to local galaxies that are matched in stellar mass and star formation activities <cit.>.
Furthermore, we performed similar control experiments as above to investigate the dependence of [O II] emission on stellar mass using the combined sample of this work and <cit.>. Firstly, we formed four subsamples in the stellar mass bins, M_* = 10^6-8, 10^8-9, 10^9-10, and 10^10-12 M_⊙, which are matched in redshift within 0.3, such that the p-value from the Kolmogorov-Smirnov test is ≳0.8. This led to a redshift-matched sample of 51 galaxies in each of the stellar mass bins. Secondly, we formed four subsamples in the above stellar mass bins that are matched within a factor of two in SFR in addition to being matched in redshift as above, leading to samples of 25 galaxies in each mass bin. We then performed the [O II] line emission stacking for these subsamples. The average [O II] SB values in the inner 20 kpc and outer 20–30 kpc regions are shown in the right panel of Fig. <ref> for the matched samples. In the inner region, the SB increases by a factor of ≈8 from M_* = 10^6-8 to 10^10-12 M_⊙ for the redshift-matched samples, and by a factor of ≈2 when matched additionally in SFR. In the outer region, the increasing trend with stellar mass is consistent within the large uncertainties for both the matched samples. Therefore, the dependence of the [O II] emission on stellar mass is likely driven by a combination of dependence on star formation activity and other physical conditions in the gas. We note that in the above analysis, as mentioned in Sect. <ref>, we corrected the fluxes for dust before stacking using the stellar extinction obtained from SED fitting, assuming that the nebular and stellar extinction follow each other.
§.§ Line ratios around galaxies
Having detected emission from multiple lines in the stacked cube, we next investigated the average physical conditions in the ionized gas around z≈0.6 galaxies using different line ratios. In particular, the O32 line ratio, defined as [O III] λ5008/[O II] λλ3727,3729, is an indicator of the degree of ionization of the gas, while the R23 line ratio, defined as ([O III] λλ4960,5008 + [O II] λλ3727,3729)/Hβ, is sensitive to the gas-phase metallicity. The O32 versus R23 diagram has been used to study the ionization and metallicity in the ISM using integrated spectra of galaxies. Star-forming galaxies generally follow a trend in the O32–R23 diagram, going from low R23 and O32 values at high metallicity to high R23 and O32 values at low metallicity, although R23 begins to decrease at very low metallicity <cit.>. We note that according to the mass-metallicity relation, the median stellar mass of the sample is approximately at the turn-over metallicity of R23.
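For reference, the two ratios can be written compactly as below; the first-order error propagation for the logarithmic values is an illustrative assumption, not the exact procedure used.

import numpy as np

def o32(f_oiii_5008, f_oii):
    """O32 = [O III]5008 / [O II]3727,3729 (total doublet flux)."""
    return f_oiii_5008 / f_oii

def r23(f_oii, f_oiii_4960, f_oiii_5008, f_hbeta):
    """R23 = ([O II] + [O III]4960,5008) / Hbeta."""
    return (f_oii + f_oiii_4960 + f_oiii_5008) / f_hbeta

def log_ratio_err(frac_err_num, frac_err_den):
    """1-sigma error on log10 of a ratio from uncorrelated fractional flux errors."""
    return np.hypot(frac_err_num, frac_err_den) / np.log(10.0)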
Thanks to stacking of the MUSE 3D data, here we were able to spatially resolve the above line ratios and study how they vary on average around a statistical sample of star-forming galaxies. We note that physical conditions such as metallicity can vary with stellar mass and redshift <cit.>. Therefore, we created similar stacks in bins of stellar mass and redshift below and above the median values, and also by weighting with the stellar mass, to confirm that the trends with physical separation we find below remain similar over the stellar mass and redshift range probed here.
To compute the line ratios, we integrated the flux in the spectra extracted from the stacked cube over the velocity window marked in Fig. <ref>. For the [O II] doublet, we considered the total flux of both lines. The error in the integrated flux was obtained by propagating the error in the spectra. If the integrated flux was detected at less than 2σ, we considered it an upper limit. The left panel of Fig. <ref> shows the O32 versus R23 line ratios estimated from the stacked spectra in different annuli (Fig. <ref>). The O32 ratio is found to decrease going from the central 0–5 kpc bin (log O32 ≈-0.1) to the outer 20–30 kpc bin (log O32 ≈-0.5). The R23 ratio is found to increase from the central 0–5 kpc bin (log R23 ≈0.8) to the 10–20 kpc bin (log R23 ≈0.9), and then decrease in the 20–30 kpc bin (log R23 ≈0.75).
For comparison, we also plot the O32 versus R23 ratios from integrated spectra of lower- and higher-redshift galaxies. The median values of the ratios in the sample of z≈0.1 galaxies from the Sloan Digital Sky Survey <cit.> are obtained using the Max-Planck-Institute for Astrophysics – Johns Hopkins University (MPA–JHU) catalog[<https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/>]. The line ratios at z≈2.3 and z≈3.3 are obtained from the composite spectra of star-forming galaxies in different stellar mass bins (≈2×10^9 - 4×10^10 M_⊙) in the MOSFIRE Deep Evolution Field (MOSDEF) survey <cit.>, with the line ratios decreasing with increasing stellar mass. The stacked line ratios in the central region, within 20 kpc, are similar to the average line ratios found in the ISM of z≈2-3 galaxies with stellar mass log(M_*/M_⊙) ≈10.2-10.6. Furthermore, the stacked line ratios in the central region are consistent with the trend of R23 increasing at a fixed O32, and O32 decreasing at a fixed R23, from z≈0 to z≈3. On the other hand, the stacked line ratios at 20–30 kpc are more similar to those found in the ISM of local SDSS galaxies. This indicates a transition in the physical conditions or the ionization mechanism of the warm ionized gas from the inner to the outer regions (i.e., from the ISM to the disk-halo interface and the CGM).
To investigate this further, we compared our results with photoionization models. Strong line ratios – calibrated either theoretically via comparison with photoionization models or empirically via comparison with estimates from direct method in sources when possible – have been used extensively to infer the ionization parameter and chemical abundance of the photoionized gas in the ISM <cit.>. However, the O32 ratio, typically used as an estimator of the ionization parameter, is influenced by the gas-phase metallicity and gas pressure. Similarly, the R23 ratio, which is commonly used to determine chemical abundances, is sensitive to the ionization parameter and gas pressure, and is moreover double-valued, having both a low and a high abundance branch. On the other hand, Bayesian methods that use theoretical photoionization models to fit emission line fluxes can be used to simultaneously derive the ionization parameter (q) and gas-phase metallicity (Z), although they likely still suffer from some of the same limitations as the individual line ratio calibrations.
Here, we used the code izi <cit.> to simultaneously infer q and Z in a Bayesian method by comparing the emission line fluxes in the stacked spectra with photoionization models. We adopted the models of <cit.> that were calculated using the high mass-loss tracks, a constant star formation history, a Salpeter IMF with a 100 M_⊙ upper cutoff, an electron density of 100 cm^-3, and an age of 6 Myr. We verified that the radial trends of q and Z remain similar using the <cit.> models with different parameters or different photoionization models <cit.>. We note that the photoionization models assume that the density of the emitting gas is constant across the scales of interest here, which may not be valid. The central panel of Fig. <ref> shows the q and Z values estimated by izi, which also takes into account upper limits on the line fluxes. The ionization parameter is found to decrease from log q (cm s^-1) ≈7.7 to ≈7.3 on going outward from the central 0–5 kpc region to the outer 20–30 kpc region, most likely due to a weaker radiation field in the outskirts of galaxies. While there appears to be a transition in the line ratios from the inner 20 kpc to the outer 20–30 kpc region in the O32–R23 diagram, there is no significant variation in the gas-phase metallicity between the inner and outer regions based on photoionization models, with 12 + log(O/H) (where solar metallicity corresponds to 12 + log(O/H) = 8.69 <cit.>) going from ≈8.3 at ≤5 kpc to ≈8.4 at 20–30 kpc. Using the empirical calibrations of <cit.>, we get a similar trend of metallicity, with 12 + log(O/H) ≈8.4 at ≤5 kpc and ≈8.5 at 20–30 kpc.
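Schematically, the Bayesian comparison performed by codes such as izi amounts to evaluating a likelihood over a grid of photoionization models and marginalizing. The sketch below illustrates the idea only and is not the izi implementation; the grid layout, variable names, and the Gaussian likelihood are assumptions.

import numpy as np

def q_z_posterior(obs_flux, obs_err, grid):
    """
    obs_flux, obs_err : dicts of observed line fluxes and errors (e.g. Hbeta-normalised)
    grid : dict with axes 'logZ' (nZ,), 'logq' (nq,) and one (nZ, nq) array of
           predicted fluxes per line (layout assumed here for illustration).
    """
    lnlike = np.zeros((grid["logZ"].size, grid["logq"].size))
    for line, f_obs in obs_flux.items():
        lnlike += -0.5 * ((f_obs - grid[line]) / obs_err[line]) ** 2
    post = np.exp(lnlike - lnlike.max())
    post /= post.sum()
    # joint posterior and the marginal posteriors on log Z and log q
    return post, post.sum(axis=1), post.sum(axis=0)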
The above analysis assumes that the warm ionized gas is photoionized. While this assumption is likely valid in the ISM, it may not necessarily hold in the extended disk or CGM, where shocks and turbulence could also ionize the gas. To investigate alternate ionization mechanisms, we compared the line ratios from the stacked spectra with shock models. The right panel of Fig. <ref> compares the O32 and R23 line ratios from the stacked spectra with shock models assuming a Large Magellanic Cloud abundance and a pre-shock density of 1 cm^-3 for several magnetic parameters and shock velocities from <cit.>. The “shock only” models include the contribution from radiative shocks only, while the “shock+precursor” models include the contribution from the H ii region ahead of the shock front as well. The line ratios in the inner 5 kpc region are consistent with the “shock+precursor” models with magnetic parameter, Bn^-1/2 = 2–4 μG cm^3/2, and velocity 200 km s^-1. This suggests that photoionization is the dominant ionization mechanism in the central region. On the other hand, at 20–30 kpc, the line ratios are consistent with the “shock only” models with magnetic parameter, Bn^-1/2 = 2–4 μG cm^3/2, and velocity 200 km s^-1. This indicates that the outer regions could be affected by radiative shocks arising due to gas flows and interactions, although the observed velocity dispersions of the emission lines are less than the required shock velocities.
§ DISCUSSION AND SUMMARY
We have presented the average line emission around a statistical sample of 560 galaxies at z≈0.25-0.85 based on stacking of the MUSE 3D data of two large surveys, MAGG and MUDF. The redshift range was selected to have simultaneous coverage of the [O II] λλ3727,3729, Hβ λ4863, and [O III] λλ4960,5008 emission lines. The average [O II] line emission is detected out to ≈40 kpc, and the average Hβ and [O III] emission is detected out to ≈30 kpc. For comparison, the average stellar continuum emission is detected out to ≈20 kpc. Therefore, the extended emission likely probes the disk-halo interface or the transition region between the ISM and the CGM.
Combining our sample with that of <cit.>, who conducted a similar stacking analysis at higher redshifts, we find that the [O II] emission increases independently with both redshift over z≈ 0.4–1.3 and with stellar mass over M_* ≈10^6-12 M_⊙. Based on a control analysis, we find that the enhancement in the average [O II] SB in the inner 20 kpc region tracing the ISM is driven to a large extent by the higher SFRs at higher redshifts and stellar masses. On the other hand, additional factors, such as the evolution of physical conditions in the warm ionized gas, likely contribute to the observed enhancement in the [O II] SB in the outer 20–30 kpc region with redshift and stellar mass.
To investigate the average physical conditions in the extended disks or the disk-halo interface of galaxies, we analyzed the emission line ratios in the stacked spectra. Both the O32 and R23 line ratios are found to decrease from the center out to 30 kpc. The line ratios at <20 kpc occupy a similar region in the O32–R23 diagram as those obtained from integrated spectra probing the ISM of z≈2-3 galaxies. On the other hand, the line ratios at >20 kpc are shifted in the O32–R23 diagram, which indicates a transition in the physical conditions and/or ionization mechanisms of the warm ionized gas from the ISM to the CGM.
We simultaneously estimated the ionization parameter and gas-phase metallicity under the assumption that the gas is photoionized. For this purpose, we used the Bayesian code izi to compare the emission line fluxes in the stacked spectra with the photoionization models of <cit.>. The ionization parameter exhibits a declining trend from the central region out to 30 kpc. This is consistent with a picture in which the number of H ii regions decreases with distance from the galaxy center, resulting in a weaker radiation field in the outskirts. On the other hand, the metallicity does not show any significant trend, remaining more or less similar up to 30 kpc. As discussed in Sect. <ref>, several mechanisms acting simultaneously are likely influencing the distribution of metals within and around galaxies. For example, the mixing of metals due to the balance between the inflow of metal-poor gas, outflow of metal-rich gas, and gas transfer due to mergers could lead to a flat metallicity gradient. Several studies have found that the metallicity gradients in the ISM of z≳0.5 galaxies are generally flat with a large scatter <cit.>. Studies of the CGM metallicity have also indicated that the CGM is complex, with gas outflows, accretion, and recycling all at play <cit.>. Extending such studies into the disk-halo interface, we find that the average metallicity gradient is flat (≈0.003 dex kpc^-1) out to 30 kpc, indicating an efficient mixing of metals at these larger scales.
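As an illustration of how such a gradient is quantified, the sketch below fits a weighted straight line to 12 + log(O/H) versus radius; the bin values are placeholders rather than the actual stack measurements.

```python
# Minimal sketch: weighted least-squares metallicity gradient from binned values.
import numpy as np

radius_kpc = np.array([2.5, 12.5, 25.0])    # bin centres (placeholder)
oh = np.array([8.3, 8.35, 8.4])             # 12+log(O/H) per bin (placeholder)
oh_err = np.array([0.05, 0.07, 0.10])       # per-bin uncertainties (placeholder)

slope, intercept = np.polyfit(radius_kpc, oh, deg=1, w=1.0 / oh_err)
print(f"gradient = {slope:+.4f} dex/kpc, central value = {intercept:.2f}")
```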
We point out that other ionization mechanisms in addition to photoionization by H ii regions might be altering the observed line ratios and metallicity measurements in the outer regions. For example, the gas could be ionized by radiative shocks arising due to stellar outflows, supernovae, gas accretion from the CGM, or interactions <cit.>. By comparing the line ratios from the stacked spectra with those from the shock models of <cit.>, we find that the line ratios at 20–30 kpc can be explained by shock velocities of ≈200 km s^-1. However, it may be difficult to reconcile the observed narrow line widths (≲100 km s^-1) with the required shock velocities, although slow shock models with a velocity dispersion of ≈100 km s^-1 have been found to produce line ratios consistent with active galactic nucleus-like excitation due to shocks <cit.>. In addition, there could be a contribution from the diffuse ionized gas to the line emission in the outer regions, leading to line ratios significantly different from those produced by H ii regions <cit.>. The diffuse ionized gas could be ionized by photons leaking out of H ii regions, hot evolved stars, and shocks. In the future, additional multiwavelength observations, such as NIR spectroscopy of further rest-frame optical emission lines, would facilitate distinguishing between different ionization mechanisms.
We note that we stacked the emission of galaxies at random disk orientations, washing out any azimuthally dependent signatures of metallicity gradients if present, such as those due to biconical outflows of metal-rich gas from the center of galaxies or the accretion of metal-poor gas along the plane of the galaxy disks <cit.>. In <cit.>, the average emission at z≈1 was found to be ≈30% more extended along the major axis than along the minor axis, while the average emission at z≈1 was found to be enhanced up to 10 kpc along the minor axis in the stacking analysis of <cit.>. In the current sample, the number of galaxies with orientation measurements from high-spatial-resolution HST imaging is not sufficiently large to probe extended emission from multiple lines in stacking.
Nevertheless, the average metallicity gradient in a statistical sample can still place useful constraints on galaxy formation models. For example, comparing different feedback prescriptions in cosmological hydrodynamical simulations, <cit.> found that an enhanced feedback model gives rise to flat metallicity gradients in galaxies. Recently, using EAGLE simulations, <cit.> found that the median metallicity gradient is close to zero at all redshifts, with mergers and gas accretion both influencing the gradients, while <cit.> found predominantly negative metallicity gradients at all redshifts using TNG-50 simulations. This shows that the gradient can be a useful tool for discriminating between different feedback models. The flat metallicity gradient (≈0.003 dex kpc^-1) out to 30 kpc found in this work supports a model in which feedback and interactions regulate the gradients. Future detections of extended emission in multiple lines from the gas around a larger sample of individual galaxies, as well as an extension of the stacking analysis to higher redshifts using IFU data from the JWST and the Extremely Large Telescope, are expected to constrain galaxy formation models even further by characterizing the radial gradients of the physical, chemical, and ionization conditions of gas from the ISM to the CGM.
We thank the anonymous referee for their useful comments.
RD thanks Palle Møller for helpful comments.
This work has been supported by Fondazione Cariplo, grant No 2018-2329.
This work is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme IDs 197.A-0384 and 1100.A-0528.
Based on observations with the NASA/ESA Hubble Space Telescope obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS 5-26555. Support for Program numbers 15637 and 15968 was provided through grants from the STScI under NASA contract NAS 5-26555.
This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (<www.dirac.ac.uk>). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
|
http://arxiv.org/abs/2409.03682v1 | 20240905163726 | A New First-Order Meta-Learning Algorithm with Convergence Guarantees | [
"El Mahdi Chayti",
"Martin Jaggi"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
A New First-Order Meta-Learning Algorithm with Convergence Guarantees
El Mahdi Chayti
Machine Learning and Optimization Laboratory (MLO), EPFL
Martin Jaggi
Machine Learning and Optimization Laboratory (MLO), EPFL
September 9, 2024
================================================================================================================================================================
§ ABSTRACT
Learning new tasks by drawing on prior experience gathered from other (related) tasks is a core property of any intelligent system. Gradient-based meta-learning, especially MAML and its variants, has emerged as a viable solution to accomplish this goal. One problem MAML encounters is its computational and memory burdens needed to compute the meta-gradients. We propose a new first-order variant of MAML that we prove converges to a stationary point of the MAML objective, unlike other first-order variants. We also show that the MAML objective does not satisfy the smoothness assumption assumed in previous works; we show instead that its smoothness constant grows with the norm of the meta-gradient, which theoretically suggests the use of normalized or clipped-gradient methods compared to the plain gradient method used in previous works. We validate our theory on a synthetic experiment.
§ INTRODUCTION
One key aspect of intelligence involves the capability of swiftly grasping new tasks by leveraging past experiences from similar tasks. Recent research has delved into how meta-learning algorithms <cit.> can acquire such a capability by learning to efficiently learn a range of tasks. This mastery allows for learning a novel task with only minimal training data, sometimes just a single example, as demonstrated in <cit.>.
Meta-learning approaches can be generally categorized into three main types.
* Metric-learning approaches: These methods learn an embedding space where non-parametric nearest neighbors perform effectively <cit.>.
* Black-box approaches: These train a recurrent or recursive neural network to either take data points as input and produce weight updates <cit.> or to generate predictions for new inputs <cit.>, attention-based models <cit.> can also be used.
* Optimization-based approaches: These usually involve bi-level optimization to integrate learning procedures, such as gradient descent, into the meta-optimization problem <cit.>. The "inner" optimization involves task adaptation and the "outer" objective is the meta-training goal: the average test loss after adaptation over a set of tasks. This approach, exemplified by <cit.> and MAML<cit.>, learns the initial model parameters to facilitate swift adaptation and generalization during task optimization.
Additionally, hybrid approaches have been explored to combine the strengths of different methods <cit.>.
In this study, we concentrate on optimization-based methods, specifically MAML <cit.>, which has been demonstrated to possess the expressive power of black-box strategies <cit.>. Additionally, MAML is versatile across various scenarios <cit.> and ensures a consistent optimization process <cit.>.
Practical challenges. While meta-learning initialization holds promise, it necessitates backpropagation through the inner optimization algorithm, introducing challenges; this includes the requirement for higher-order derivatives, leading to significant computational and memory overheads and potential issues like vanishing gradients. Consequently, scaling optimization-based meta-learning to tasks with substantial datasets or numerous inner-loop optimization steps becomes arduous. We aim to devise an algorithm that mitigates these constraints.
These challenges can be partially mitigated by taking only a few gradient steps in the inner loop during meta-training <cit.>, truncating the backpropagation process <cit.>, using implicit gradients <cit.>, or omitting higher-order derivative terms <cit.>. However, these approximations may lead to sub-optimal performance <cit.>.
Theoretical challenges. An additional challenge faced by such methods is theoretical and concerns their convergence analysis. While it was shown in <cit.> that even the MAML objective with one inner gradient-descent step is not smooth, which entails the need for complicated learning-rate schedules, other works such as <cit.> simply assume that it is smooth, even with more than one inner step.
Contributions.
1) To address the practical challenges, we propose a new first-order MAML variant that, as its name suggests, avoids any use of second-order information. Unlike previous first-order algorithms such as FO-MAML and Reptile <cit.>, our algorithm has the advantage of having a bias (with respect to the true meta-gradient) that can be made as small as desired, which means that we can converge, in theory, to any given precision.
2) To address the challenges of theoretical analysis, we show that the general MAML objective satisfies a generalized smoothness assumption introduced in <cit.>. This result suggests that clipped gradient descent is better suited in this case as it was shown to outperform vanilla gradient descent under such an assumption, for example, this explains why meta-gradient clipping works in stabilizing the convergence of MAML in practice.
3) Finally, we provide convergence rates for our method.
§ PRELIMINARIES
§.§ Vanilla MAML
We assume we have a set of training tasks {𝒯_i }_i=1^M drawn from an unknown distribution of tasks P(𝒯), such that for each task 𝒯_i one can associate a training _i and test _i dataset—or equivalently a training f̂_i and test f_i objective. Then, the vanilla MAML objective is that of solving the following optimization problem:
θ^⋆ := argmin_{θ∈Θ} F(θ) := 1/M ∑_{i=1}^{M} [ F_i(θ) := f_i( φ_i(θ) = Alg(f̂_i, θ, ω) ) ],
where Alg(f̂_i, θ, ω) is an optimization algorithm that takes as input the objective f̂_i (i.e., dataset and loss), the initialization θ, and other hyperparameters denoted by ω (e.g., the learning rate and the number of steps), and then outputs an updated task-specific parameter φ_i(θ) that is hopefully a better "approximate" solution for the individual task objective f̂_i.
For example, Alg(f̂_i, θ, ·) may correspond to one or multiple steps of gradient descent on f̂_i initialized at θ. For instance, if we use one step of gradient descent with a learning rate α, then we have:
φ_i(θ) ≡ Alg( f̂_i, θ, ω := {α} ) = θ − α ∇f̂_i(θ).
To solve (<ref>) with gradient-based methods, we require a way to differentiate through Alg. In the case of multiple steps like (<ref>), this corresponds to backpropagating through the dynamics of gradient descent. This backpropagation through gradient-based optimization algorithms naturally involves higher-order derivatives and the need to save the whole trajectory to compute the meta-gradient (i.e., the gradient of F), which is a big drawback of vanilla MAML.
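To illustrate why differentiating through Alg brings in second-order terms, the sketch below evaluates the one-step meta-gradient in closed form on toy quadratic objectives: for φ_i(θ) = θ − α∇f̂_i(θ), the chain rule gives ∇F_i(θ) = (I − α∇²f̂_i(θ)) ∇f_i(φ_i(θ)). The quadratic tasks, dimensions and step size are synthetic placeholders, not the settings used in this paper.

```python
# Minimal sketch: one-step MAML meta-gradient versus the first-order shortcut.
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 3, 0.1
A_tr = np.diag(rng.uniform(0.5, 2.0, d)); b_tr = rng.normal(size=d)   # train task
A_te = np.diag(rng.uniform(0.5, 2.0, d)); b_te = rng.normal(size=d)   # test task

f_hat_grad = lambda w: A_tr @ w + b_tr    # gradient of 0.5 w^T A w + b^T w
f_grad = lambda w: A_te @ w + b_te

theta = rng.normal(size=d)
phi = theta - alpha * f_hat_grad(theta)                    # inner adaptation step
meta_grad = (np.eye(d) - alpha * A_tr) @ f_grad(phi)       # exact meta-gradient
fo_shortcut = f_grad(phi)                                  # Jacobian dropped
print(np.linalg.norm(meta_grad - fo_shortcut))             # irreducible gap
```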
Another (potential) drawback of MAML defined in (<ref>) is that it depends on the choice of the optimization algorithm (since we need its specific trajectory).
§.§ First-order MAML and Reptile
One option considered in the literature to address the computational and memory overheads encountered when differentiating through gradient-based optimization algorithms is to devise first-order meta-gradient approximations <cit.>. Two such approaches stand out: FO-MAML and Reptile.
FO-MAML simply ignores the Jacobian dφ_i(θ)/dθ, leading to the following approximation:
∇̂_FO-MAML(θ) = 1/M ∑_{i=1}^{M} ∇f_i( φ_i(θ) ).
The Reptile approximation is less straightforward, but the crux of it is using an average gradient over the inner optimization algorithm's trajectory.
∇̂_Reptile(θ) = 1/M ∑_{i=1}^{M} ( θ − φ_i(θ) ) / (Kα),
where in (<ref>), K is the number of steps of Alg and α is the learning rate.
These two approximations avoid the prohibitive computational and memory costs associated with vanilla MAML. However, both approximations introduce a bias with respect to the true meta-gradient that is irreducible, at least in the case of FO-MAML <cit.> (the bias of Reptile is not clear in general).
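A minimal sketch of the two estimators on synthetic quadratic tasks is given below; the tasks, step size α, and number of inner steps K are illustrative placeholders rather than the setup used in the experiments.

```python
# Minimal sketch of the FO-MAML and Reptile meta-gradient estimates.
import numpy as np

def inner_sgd(theta, grad_hat, alpha=0.1, K=5):
    """K steps of gradient descent on the task training objective."""
    phi = theta.copy()
    for _ in range(K):
        phi = phi - alpha * grad_hat(phi)
    return phi

rng = np.random.default_rng(1)
d, alpha, K = 3, 0.1, 5
tasks = []
for _ in range(4):                                      # M = 4 toy tasks
    A = np.diag(rng.uniform(0.5, 2.0, d)); b = rng.normal(size=d)
    tasks.append((lambda w, A=A, b=b: A @ w + b,        # train gradient
                  lambda w, A=A, b=b: A @ w + 0.5 * b)) # test gradient

theta = rng.normal(size=d)
fo_maml = np.mean([g_te(inner_sgd(theta, g_tr, alpha, K))
                   for g_tr, g_te in tasks], axis=0)
reptile = np.mean([(theta - inner_sgd(theta, g_tr, alpha, K)) / (K * alpha)
                   for g_tr, _ in tasks], axis=0)
```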
§.§ B-MAML : MAML as a fully Bi-Level Optimization Problem
Ignoring the computational overhead of MAML, the memory overhead is a natural consequence of the dependence of the MAML objective in (<ref>) on the choice of the inner optimization algorithm Alg, and thus on the trajectory of Alg. If one can break this dependence on Alg, one avoids this memory overhead. One idea to accomplish the latter is to make the MAML objective depend on the inner optimization problem rather than on the specific optimization algorithm used to solve it. This amounts to framing MAML as the following purely bi-level optimization problem:
θ^⋆ := argmin_{θ∈Θ} F(θ) := 1/M ∑_{i=1}^{M} [ F_i(θ) := f_i( φ^⋆_i(θ) ) ], (outer-level)
where for i∈{1,⋯,M} we define (recall that f and f̂ denote validation and training objectives respectively):
φ^⋆_i(θ) := argmin_{φ∈Θ} f̂_i(φ) + λ/2 ‖φ − θ‖^2. (inner-level)
where λ is a real hyperparameter that plays the role of the inverse of the learning rate α in MAML; it helps control the strength of the meta-parameter or prior θ relative to new data. We note that this hyperparameter can be a vector or a matrix, but we keep it as a scalar for simplicity.
We also note that the formulation (<ref>) is not new and was introduced, for example, in <cit.>.
To use gradient-based methods to solve (<ref>), we need to compute the gradient ∇F. Using the implicit function theorem and assuming that λI + ∇^2 f̂_i is invertible, it is easy to show that
∇F_i(θ) = ( I + 1/λ ∇^2 f̂_i( φ^⋆_i(θ) ) )^{-1} ∇f_i( φ^⋆_i(θ) ).
What is noteworthy about (<ref>) is that the meta-gradient ∇F_i(θ) only depends on φ^⋆_i(θ), which can be estimated using any optimization algorithm irrespective of the trajectory said algorithm takes.
A downside of (<ref>) is that it involves computing the Hessian and inverting it. To reduce this burden, one can equivalently treat ∇F_i(θ) as the solution of the following linear system (with the unknown v ∈ ℝ^d):
( I + 1/λ ∇^2 f̂_i( φ^⋆_i(θ) ) ) v = ∇f_i( φ^⋆_i(θ) ),
which only needs access to Hessian-Vector products instead of the full Hessian and can be approximately solved with the conjugate gradient algorithm, for example, which is exactly the idea of <cit.>.
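The sketch below illustrates this implicit approach: it solves the linear system with a hand-rolled conjugate-gradient routine that only queries matrix-vector products. The quadratic task (with an explicit Hessian H) is a placeholder; in practice the Hessian-vector products would come from automatic differentiation.

```python
# Minimal sketch of the implicit (iMAML-style) meta-gradient: solve
# (I + (1/lambda) H) v = grad f_i(phi*) with conjugate gradients.
import numpy as np

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    """Solve A v = b given only v -> A v (A symmetric positive definite)."""
    v = np.zeros_like(b)
    r = b - matvec(v)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        v += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

d, lam = 3, 2.0
rng = np.random.default_rng(2)
H = np.diag(rng.uniform(0.5, 2.0, d))        # Hessian of f_hat_i at phi* (placeholder)
g_test = rng.normal(size=d)                  # grad f_i(phi*_i(theta)) (placeholder)
meta_grad = conjugate_gradient(lambda v: v + H @ v / lam, g_test)
```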
In this work, we propose a different strategy that consists of writing the meta-gradient as the gradient of the solution of a perturbed optimization problem with respect to its perturbation parameter. This means we can approximate the meta-gradient using the solution of two optimization problems.
§ FIRST-ORDER B-MAML
We consider the B-MAML objective defined in (<ref>), (<ref>); our main idea relies on perturbing the inner optimization problem (<ref>). For a perturbation parameter ν ∈ ℝ and each training task 𝒯_i, we introduce the following perturbed inner optimization problem, which interpolates between validation and training objectives:
φ^⋆_{i,ν}(θ) := argmin_{φ∈Θ} ν f_i(φ) + f̂_i(φ) + λ/2 ‖φ − θ‖^2.
We assume that there is a neighbourhood 𝒱_0 of 0 such that φ^⋆_{i,ν}(θ) is well-defined for any ν ∈ 𝒱_0 and θ ∈ Θ.
Then we can show the following result:
For any training task 𝒯_i, if ∇F_i(θ) exists, then ν ↦ φ^⋆_{i,ν}(θ) is differentiable at ν = 0 and
∇F_i(θ) = −λ dφ^⋆_{i,ν}(θ)/dν |_{ν=0}.
Sketch of the proof. We use the fact that φ^⋆_{i,ν}(θ) is a stationary point of φ ↦ ν f_i(φ) + f̂_i(φ) + λ/2 ‖φ − θ‖^2; this gives a quantity that is null for all ν ∈ 𝒱_0; we then differentiate with respect to ν and set ν = 0.
Proposition <ref> writes the meta-gradient ∇F_i(θ) as the derivative of another function that is a solution to the perturbed optimization problem <ref>, thus presenting us with a way to approximate the meta-gradient using the finite difference method <cit.>. We consider mainly two approximations: the forward and symmetric approximations.
∇̂^For_{i,ν}(θ) = −λ ( φ^⋆_{i,ν}(θ) − φ^⋆_{i,0}(θ) ) / ν (forward approximation)
∇̂^Sym_{i,ν}(θ) = −λ ( φ^⋆_{i,ν}(θ) − φ^⋆_{i,−ν}(θ) ) / (2ν) (symmetric approximation)
We note that more involved approximations (that need solving more than two optimization problems) can be engineered, but we limit ourselves, in this work, to (<ref>) and (<ref>).
Assuming that ν ↦ φ^⋆_{i,ν}(θ) is regular enough near ν = 0 (for example, three times differentiable on 𝒱_0 with a bounded third derivative), we should expect that
‖∇F_i(θ) − ∇̂^For_{i,ν}(θ)‖ = 𝒪(λν) and ‖∇F_i(θ) − ∇̂^Sym_{i,ν}(θ)‖ = 𝒪(λν^2).
We will provide conditions under which we get the first identity in Equation <ref> in Section<ref> and support the second experimentally in Section <ref>.
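The expected scaling can already be checked on a toy quadratic problem where φ^⋆_{i,ν}(θ) and the exact meta-gradient have closed forms, as in the sketch below (all matrices and vectors are synthetic placeholders).

```python
# Minimal sketch: O(nu) bias of the forward estimate vs O(nu^2) of the symmetric one.
import numpy as np

d, lam = 3, 2.0
rng = np.random.default_rng(3)
A_hat = np.diag(rng.uniform(0.5, 2.0, d)); b_hat = rng.normal(size=d)  # f_hat_i
A = np.diag(rng.uniform(0.5, 2.0, d));     b = rng.normal(size=d)      # f_i
theta = rng.normal(size=d)

def phi_star(nu):
    # argmin_phi  nu*f_i(phi) + f_hat_i(phi) + lam/2 ||phi - theta||^2
    return np.linalg.solve(nu * A + A_hat + lam * np.eye(d),
                           lam * theta - nu * b - b_hat)

# Exact meta-gradient: lam * (A_hat + lam I)^{-1} grad f_i(phi*_0).
exact = lam * np.linalg.solve(A_hat + lam * np.eye(d), A @ phi_star(0.0) + b)
for nu in (1e-1, 1e-2, 1e-3):
    fwd = -lam * (phi_star(nu) - phi_star(0.0)) / nu
    sym = -lam * (phi_star(nu) - phi_star(-nu)) / (2 * nu)
    print(nu, np.linalg.norm(fwd - exact), np.linalg.norm(sym - exact))
```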
In practice, we cannot realistically solve the optimization problem (<ref>) exactly and have access to the true values of φ^⋆_{i,ν}(θ); instead, we solve problem (<ref>) approximately using any algorithm of our choice and assume that, for a given precision δ, we can get an approximate solution φ_{i,ν}(θ) of (<ref>) such that
‖ φ_{i,ν}(θ) − φ^⋆_{i,ν}(θ) ‖ ≤ δ.
We use g^method_{i,ν}, for method ∈ {For, Sym}, to denote the estimator resulting from the use of the approximate solutions; this introduces an additional bias (with respect to ∇F_i) in our estimators in (<ref>), (<ref>); it is easy to show that this bias is bounded by 2λδ/ν in the worst case. Notice that this bias term increases for small values of ν, which suggests a sweet spot for ν when including the bias terms in (<ref>).
The overall bias is then
‖∇F_i(θ) − g^For_{i,ν}(θ)‖ = 𝒪(λν + λδ/ν) and ‖∇F_i(θ) − g^Sym_{i,ν}(θ)‖ = 𝒪(λν^2 + λδ/ν);
minimizing over ν, we get that for ν^For ∼ √δ and ν^Sym ∼ δ^{1/3},
‖∇F_i(θ) − g^For_{i,ν}(θ)‖ = 𝒪(λ√δ) and ‖∇F_i(θ) − g^Sym_{i,ν}(θ)‖ = 𝒪(λ δ^{2/3}).
We summarize this in Algorithm <ref>.
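A compact sketch of one outer iteration of this procedure is given below, using plain gradient descent as the inner solver, the forward estimate, and a NormalizedGD meta-step; all task definitions, step sizes, and iteration counts are illustrative placeholders rather than tuned choices.

```python
# Minimal sketch of one FO-B-MAML outer step (forward finite-difference estimate).
import numpy as np

def solve_inner(theta, grad_f_hat, grad_f, nu, lam, steps=200, lr=0.05):
    """Gradient descent on  nu*f_i + f_hat_i + lam/2 ||phi - theta||^2."""
    phi = theta.copy()
    for _ in range(steps):
        phi -= lr * (nu * grad_f(phi) + grad_f_hat(phi) + lam * (phi - theta))
    return phi

def fo_b_maml_step(theta, tasks, nu=1e-2, lam=2.0, eta=0.1, beta=1.0):
    estimates = []
    for grad_f_hat, grad_f in tasks:                 # tasks: (train grad, test grad)
        phi_0 = solve_inner(theta, grad_f_hat, grad_f, 0.0, lam)
        phi_nu = solve_inner(theta, grad_f_hat, grad_f, nu, lam)
        estimates.append(-lam * (phi_nu - phi_0) / nu)   # forward estimate per task
    g = np.mean(estimates, axis=0)
    return theta - eta * g / (beta + np.linalg.norm(g))  # NormalizedGD meta-update
```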
In line 10 of Algorithm <ref>, we can use any optimization algorithm to update the meta-parameters θ, but our theoretical analysis in Section <ref> will only consider the Gradient Descent (GD) and Clipped Gradient (ClippedGD) algorithms (or Normalized GD, which is equivalent). We will show that the B-MAML objective (<ref>) has a smoothness parameter that grows with the norm of its gradient; under this type of smoothness, it is known that ClippedGD is well-suited <cit.>, and Adam <cit.> is also well-suited since it estimates the curvature and uses it to normalize the gradient. We also show that under stronger assumptions, the B-MAML objective is smooth in the classical sense (meaning its gradient is Lipschitz), and in this case, GD can be used but can have a worse complexity.
Now that we have presented our main algorithm, it is time to discuss its theoretical guarantees under common assumptions used in the Bi-Level optimization literature.
§ THEORETICAL ANALYSIS
In this Section, we provide theoretical guarantees of Algorithm <ref> using the forward approximation in equation (<ref>). We start by stating the assumptions that we make on the training tasks {_i}_i=1^M, then discuss the smoothness properties of the B-MAML objective defined in (<ref>) resulting from such assumptions. Finally, we discuss the convergence rate when using Gradient Descent (GD) or Clipped Gradient Descent as the meta-optimizer.
§.§ Assumptions
We will make use of the following assumptions
[training sets are well-behaved]
For all training tasks _i, the training objective f̂_i is twice differentiable, L̂_1-smooth and has L̂_2-Lipschitz Hessian.
[test sets are well-behaved]
For all training tasks 𝒯_i, the test objective f_i is differentiable, L_0-Lipschitz, and L_1-smooth (i.e., its gradient is L_1-Lipschitz).
We will also consider the following stronger assumption on the training objectives of our tasks
[Strong convexity]
There exists μ ≥ 0 such that for all training tasks 𝒯_i, the inner training objective f̂_i + (λ/2)‖·‖^2 is μ-strongly convex.
It is worth noting that for Assumption <ref> to hold, the functions f̂_i do not have to be strongly convex as well; in fact, we only need to choose the regularization parameter λ big enough. To be specific, under Assumption <ref>, f̂_i is L̂_1-smooth, which means that taking λ > L̂_1 will ensure Assumption <ref> holds with μ = λ − L̂_1.
We can also show that the perturbed problem (<ref>) will have a unique solution as long as ν∈𝒱_0=(-μ/L_1,μ/L_1).
Finally, we will need an additional assumption on the relationship between the individual tasks {_i} and their average used in the definition of B-MAML (<ref>).
[Bounded variance between given tasks]
There exists ζ ≥ 0 such that, for all θ ∈ Θ, we have:
1/M ∑_{i=1}^{M} ‖ ∇F_i(θ) − ∇F(θ) ‖^2 ≤ ζ^2.
Note that in Assumption <ref>, the quantity ζ^2 can be interpreted as the variance resulting from task sampling.
§.§ Properties of B-MAML
We will first study the bias of the forward approximation in (<ref>) compared to the true meta-gradient. We show the following proposition:
[Bias of the forward approximation]
Under Assumptions <ref> and <ref>, for λ ≥ 2L̂_1, for any training task 𝒯_i we have:
‖ ∇F_i(θ) − ∇̂^For_{i,ν}(θ) ‖ = 𝒪( L_0/λ ( L_1 + L̂_2 L_0/λ ) ν ).
From Proposition <ref>, the following corollary ensues:
[Bias of FO-B-MAML<ref>]
Under Assumptions <ref> and <ref>, for λ ≥ 2L̂_1, for a training precision δ defined as in (<ref>), choosing ν = √( λ^2 δ / ( L_0 ( L_1 λ + L̂_2 L_0 ) ) ), then for any training task 𝒯_i we have:
‖ ∇F_i(θ) − g^For_{i,ν}(θ) ‖ = 𝒪( √( L_0 ( L_1/λ + L̂_2 L_0/λ^2 ) δ ) ).
Discussion. As a reference, the bias of using the expression of the meta-gradient in (<ref>) while replacing φ^⋆_i with an approximate solution up to precision δ leads to a bias of 𝒪(δ). If, as in iMAML <cit.>, one solves the linear system in (<ref>) up to a precision δ^', then this leads to a bias of the order 𝒪(δ + δ^').
Our bias is 𝒪(√δ), which is worse (for small values of δ); this is to be expected since we do not use any second-order information, unlike the other methods. We note that using more advanced (and costly) finite-difference approximations of the gradient in Proposition <ref> should close this gap.
We will now discuss the smoothness of the B-MAML objective (<ref>). Proposition <ref> shows that this smoothness can grow with the norm of the gradient; functions satisfying such an assumption have been studied, for example, in <cit.> for classic optimization as opposed to meta-learning.
[Generalized Smoothness of individual meta-objectives]
Under Assumptions <ref> and <ref>, for λ ≥ 2L̂_1, for any training task 𝒯_i we have, for any θ, θ^' ∈ Θ:
‖ ∇F_i(θ) − ∇F_i(θ^') ‖ ≤ min( ψ(θ), ψ(θ^') ) ‖ θ − θ^' ‖,
where ψ(θ) = L_1 + (L̂_2/λ) ‖ ∇F_i(θ) ‖.
Combining Proposition <ref> and Assumption <ref>, we get:
[generalized smoothness of B-MAML]
Under Assumptions <ref>, <ref> and <ref>, for λ ≥ 2L̂_1, we have, for any θ, θ^' ∈ Θ:
‖ ∇F(θ) − ∇F(θ^') ‖ ≤ min( ℓ(θ), ℓ(θ^') ) ‖ θ − θ^' ‖,
where ℓ(θ) = ℓ_0 + ℓ_1 ‖∇F(θ)‖, with ℓ_0 = 4L_1 + (4L̂_2/λ) ζ and ℓ_1 = 2L̂_2/λ.
If we additionally use strong convexity ( Assumption <ref>), then we can prove that the meta-gradient ∇ F() is bounded; this will make the B-MAML objective smooth in the classical sense.
[Classical smoothness]
Under Assumptions <ref>, <ref>, <ref> and <ref>, for λ ≥ 2L̂_1, we have for any θ ∈ Θ: ‖∇F(θ)‖ ≤ λL_0/μ := G.
Hence, for any θ, θ^' ∈ Θ we have
‖ ∇F(θ) − ∇F(θ^') ‖ ≤ L ‖ θ − θ^' ‖,
where L := ℓ_0 + G ℓ_1, for ℓ_0, ℓ_1 defined in Corollary <ref>.
It is worth noting that the smoothness constant L shown in Corollary <ref> might be much larger than ℓ_0 and ℓ_1, as μ might be small.
§.§ Convergence
Now we have all the ingredients to discuss the convergence rate of Algorithm <ref>. For simplicity, we will assume that we can sample all training tasks and leave the case where this is not possible (i.e., the number of tasks M is big) to the Appendix.
Given a gradient (estimate) g, we consider three algorithms to update the meta-parameters:
GD. Which updates the parameter in the following way:
θ ← θ − η g.
ClippedGD. Which updates the parameter in the following way:
θ ← θ − η min( 1, c/‖g‖ ) g.
NormalizedGD. Which updates the parameter in the following way:
θ ← θ − η g / ( β + ‖g‖ ).
We note that ClippedGD and NormalizedGD are equivalent up to a change of the learning rate η as shown in <cit.>.
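For concreteness, the three updates read as follows in code (η, c, and β are placeholder hyperparameters):

```python
# Minimal sketch of the GD, ClippedGD and NormalizedGD meta-updates.
import numpy as np

def gd_step(theta, g, eta):
    return theta - eta * g

def clipped_gd_step(theta, g, eta, c):
    return theta - eta * min(1.0, c / np.linalg.norm(g)) * g

def normalized_gd_step(theta, g, eta, beta):
    return theta - eta * g / (beta + np.linalg.norm(g))
```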
Throughout this section, we denote by θ_0 the initialization, assume that the objective F in (<ref>) is lower bounded by F^⋆ > −∞, and denote Δ = F(θ_0) − F^⋆.
[Convergence of FO-B-MAML <ref> using NormalizedGD]
Under the generalized smoothness shown in Corollary <ref> and its assumptions, Algorithm <ref>, using NormalizedGD with η = 1/ℓ_1 and β = ℓ_0/ℓ_1, finds a meta-parameter θ satisfying ‖∇F(θ)‖ ≤ ε + 𝒪(√δ) in at most 𝒪( ℓ_0 Δ/ε^2 + ℓ_1^2 Δ/ℓ_0 ) outer steps.
[Convergence of FO-B-MAML using GD]
Under the smoothness assumptions as in Corollary <ref>, Algorithm <ref>, using GD with η = 1/(ℓ_0 + G ℓ_1) (G is defined in Corollary <ref>), finds a meta-parameter θ satisfying ‖∇F(θ)‖ ≤ ε + 𝒪(√δ) in at most 𝒪( ℓ_0 Δ/ε^2 + G ℓ_1 Δ/ε^2 ) outer steps.
Theorems <ref> and <ref> show that NormalizedGD has a better dependence on ℓ_1 than GD. We conclude that, judging from these upper bounds, it is (theoretically) better to use NormalizedGD than GD, even when strong convexity (Assumption <ref>) is satisfied.
The quantity 𝒪(√δ) in both theorems is the bias of FO-B-MAML using the forward approximation <ref>. To ensure convergence to an ε-stationary point (i.e., such that ‖∇F(θ)‖ ≤ ε), it suffices to take δ ∼ ε^2.
Overall Complexity. The total number of gradient calls needed by Algorithm <ref> is simply : M ×Outer-steps(ε) ×Inner-steps(δ). For example, when Assumptions <ref> to <ref> are satisfied, using Nesterov's accelerated gradient method as the inner optimization problem, we have
Inner-steps(δ) = 𝒪( √κ̂ log(1/δ) ),
where κ̂ = L̂_1/μ is the condition number of f̂_i, and 𝒪̃ hides logarithmic terms.
If we use NormalizedGD as the outer optimizer, then the total number of gradient calls is:
𝒪̃( M √κ̂ log(1/δ) ( ℓ_0 Δ/ε^2 + ℓ_1^2 Δ/ℓ_0 ) ).
We compare our method to other methods in the literature in Table <ref>.
Comparison to iMAML. iMAML uses Hessian vector products to approximately solve the linear system in (<ref>), thus needing 2Mem(∇ f_i) of memory which is the same as ours. Also, hessian-vector products have about five times the cost needed to compute the gradient of neural networks <cit.>.
§ EXPERIMENTS
We compare our Algorithm <ref>to other methods such as iMAML<cit.>, MAML <cit.>, Reptile and FO-MAML <cit.>, in terms of the quality of the meta-gradient approximation and convergence. We consider a synthetic linear regression problem (details are in Appendix <ref>) that has the advantage of having a simple closed-form expression of the meta-gradient; we use gradient descent as the inner optimizer for all algorithms. Figure <ref> shows the quality of the meta-gradient approximation and evolution of the loss for different algorithms. We see that FO-B-MAML meta-gradient approximation benefits continuously from the increased number of inner steps (equivalent to a small δ), which is not the case for the other first-order methods like FO-MAML and Reptile. FO-B-MAML also compares favorably to iMAML, which uses the more expensive Hessian-vector products. As noted before, iMAML uses a number cg of Hessian-vector products (about five times more expensive than a gradient) on top of the inner iterations; thus, subtracting 4*cg from the inner steps should make the curves of FO-B-MAML and iMAML much closer. The same conclusions can be said about the loss curves where FO-B-MAML outperforms other first-order methods and is competitive with second-order methods: MAML and iMAML (cg=10), which is natural as they use more information. We note that FO-B-MAML outperforms iMAML with a small number of conjugate gradient steps cg∈{2,5}.
§ GENERAL DISCUSSION
Extension. In this work, we stayed limited to the very simple and specific form in (<ref>),(<ref>) (a specific type of regularization and hyperparameters). We note that our approach is more general and can very easily be extended to a more general form and can be used to meta-learn the hyperparameters as well.
For example, we can assume the following general inner problem
φ_i^⋆( ϑ := {θ, θ_d, λ} ) = argmin_φ [ g(φ, ϑ) := f̂_i(θ_d, φ) + λ Reg(φ; θ) ],
where θ_d can denote other shared parameters like a common decoder. To get the perturbed problem, we simply add the term ν f_i(φ) to the function g. Then we can prove that:
∇F_i(ϑ) = d/dν [ ∂_ϑ g( φ^⋆_{i,ν}(ϑ), ϑ ) ] |_{ν=0},
and use finite differences to approximate the derivative in (<ref>).
Limitations. One of the main limitations of our work is that, by using finite differences, we need to solve the inner problem for each task at least twice. While this comes with the benefit of avoiding the use of any second-order information (and allows parallelization), it remains an important open question whether we can devise new approximations that avoid this slight overhead.
One possible solution to avoid the use of the finite difference method is to use automatic differentiation to compute the derivative of ^⋆_ν with respect to ν since ν is just a real number, which means this should still be cheaper than MAML; however, prima facie, this should incur the same memory burden as MAML. We leave this exploration for future work.
§ CONCLUSION
We proposed a new first-order variant of MAML based on its bi-level optimization formulation. We equipped our method with a convergence theory that shows that it has an advantage over previously known first-order methods and is comparable to second-order methods, although it does not use any second-order information. We experimentally showed that our method can approximate the true meta-gradient to a high precision. We also show how to generalize our method to encoder-decoder networks and learn hyperparameters.
§ MISSING PROOFS
§.§ Proof of Proposition <ref>
The fact that ν ↦ φ^⋆_{i,ν}(θ) is differentiable at ν = 0 will be proven later.
We have that
φ^⋆_{i,ν}(θ) ∈ Argmin_φ ν f_i(φ) + f̂_i(φ) + λ/2 ‖φ − θ‖^2.
This means
ν ∇f_i( φ^⋆_{i,ν}(θ) ) + ∇f̂_i( φ^⋆_{i,ν}(θ) ) + λ ( φ^⋆_{i,ν}(θ) − θ ) = 0.
Taking the derivative of the above equation with respect to ν gives
∇f_i( φ^⋆_{i,ν}(θ) ) + ν ∇^2 f_i( φ^⋆_{i,ν}(θ) ) dφ^⋆_{i,ν}(θ)/dν + ∇^2 f̂_i( φ^⋆_{i,ν}(θ) ) dφ^⋆_{i,ν}(θ)/dν + λ dφ^⋆_{i,ν}(θ)/dν = 0,
which yields the following expression:
dφ^⋆_{i,ν}(θ)/dν = −( ν ∇^2 f_i( φ^⋆_{i,ν}(θ) ) + ∇^2 f̂_i( φ^⋆_{i,ν}(θ) ) + λ I )^{-1} ∇f_i( φ^⋆_{i,ν}(θ) ).
We set ν = 0 and get:
dφ^⋆_{i,ν}(θ)/dν |_{ν=0} = −( ∇^2 f̂_i( φ^⋆_i(θ) ) + λ I )^{-1} ∇f_i( φ^⋆_i(θ) ) = −1/λ ∇F_i(θ).
§.§ Proof of Proposition <ref>
Using Equation (<ref>), we have:
_i,ν^⋆() = - 1/λ( ν∇ f_i(_i,ν^⋆()) + ∇f_i(_i,ν^⋆()) )
Thus
λ_i,0^⋆() - _i,ν^⋆()ν = ∇ f_i(_i,ν^⋆()) + ∇f_i(_i,ν^⋆()) - ∇f_i(_i,0^⋆())ν .
The form of the meta-gradient in Equation <ref> implies that:
∇ F_i() = ∇ f_i(_i,0^⋆()) - 1/λ∇^2f_i(_i,0^⋆())∇ F_i().
(<ref>)–(<ref>) gives :
∇ F_i() - _i,ν^For = ∇ f_i(_i,0^⋆()) - ∇ f_i(_i,ν^⋆())_(I)
+ ∇f_i(_i,ν^⋆()) - ∇f_i(_i,0^⋆())ν - 1/λ∇^2f_i(_i,0^⋆())∇ F_i()_(II).
Let's define _ν = ∇ F_i() - _i,ν^For the bias of the forward approximation.
The norm of the first term (I) can be easily bounded using the smoothness of f_i by:
(I)≤ L_1 _i,ν^⋆() - _i,0^⋆() .
The second term (II) can be simplified using Cauchy's theorem which guarantees the existence of such that
∇f_i(_i,ν^⋆()) - ∇f_i(_i,0^⋆())ν = ∇^2f_i(_i,0^⋆())[_i,ν^⋆() - _i,0^⋆()ν] + 12ν∇^3f_i(_i,0^⋆())[_i,ν^⋆() - _i,0^⋆()]^2,
Thus, using Assumptions <ref> and <ref>, we get:
(II)≤L_1λ_ν + L_2/2ν_i,ν^⋆() - _i,0^⋆()^2.
Overall,
_ν≤ L_1 _i,ν^⋆() - _i,0^⋆() + L_1λ_ν + L_2/2ν_i,ν^⋆() - _i,0^⋆()^2 ,
Which implies :
(1 - L_1λ)_ν≤_i,ν^⋆() - _i,0^⋆()( L_1 + L_2/2ν_i,ν^⋆() - _i,0^⋆()).
All that is left is to bound the term _i,ν^⋆() - _i,0^⋆().
If we go back to (<ref>), then we can write:
_i,ν^⋆() - _i,0^⋆() = 1/λ( ∇f_i(_i,0^⋆()) - ∇f_i(_i,ν^⋆()) + ν∇ f_i(_i,ν^⋆()))
≤L_1/λ_i,ν^⋆() - _i,0^⋆() + ν L_0/λ.
Thus:
(1 - L_1/λ) _i,ν^⋆() - _i,0^⋆()≤ν L_0/λ.
For simplicity we assume, λ≥ 2 L_1 which gives:
_i,ν^⋆() - _i,0^⋆()≤2ν L_0/λ .
Plugging the result in Eq (<ref>) and using λ≥ 2 L_1 gives:
_ν≤L_0/λ( L_1 + L_0 L_2/λ)ν .
The overall bias resulting from using approximations of _i,ν^⋆() instead of their exact values is bounded by:
L_0/λ( L_1 + L_0 L_2/λ)ν + 2δ/ν,
which is minimized for ν = √(2λ^2δL_0(L_1λ + L_0 L_2)), using this value of ν in the overall bias (<ref>), gives the result of Corrolary <ref>.
§.§ Proof of Proposition <ref>
Let ,^'∈Θ. Using Eq (<ref>), we have:
∇ F_i() = ∇ f_i(_i,0^⋆()) - 1/λ∇^2f_i(_i,0^⋆())∇ F_i()
∇ F_i(^') = ∇ f_i(_i,0^⋆(^')) - 1/λ∇^2f_i(_i,0^⋆(^'))∇ F_i(^')
∇ F_i() - ∇ F_i(^') = ∇ f_i(_i,0^⋆()) - ∇ f_i(_i,0^⋆(^'))
+ 1/λ( ∇^2f_i(_i,0^⋆(^'))∇ F_i(^') - ∇^2f_i(_i,0^⋆())∇ F_i())
= ∇ f_i(_i,0^⋆()) - ∇ f_i(_i,0^⋆(^'))
+ 1/λ(∇^2f_i(_i,0^⋆(^')) - ∇^2f_i(_i,0^⋆()))∇ F_i(^')
- 1/λ∇^2f_i(_i,0^⋆())(∇ F_i() - ∇ F_i(^'))
Thus:
∇ F_i() - ∇ F_i(^') ≤ L_1 _i,0^⋆() - _i,0^⋆(^') + L_2/λ_i,0^⋆() - _i,0^⋆(^')∇ F_i(^')
+L_1/λ∇ F_i() - ∇ F_i(^')
which implies:
(1 - L_1/λ) ∇ F_i() - ∇ F_i(^')≤(L_1 + L_2/λ∇ F_i(^')) _i,0^⋆() - _i,0^⋆(^')
Let's define, () := L_1/4 + L_2/4λ∇ F_i(). Exchanging and ^', and assuming λ≥ 2 L_1, we get:
∇ F_i() - ∇ F_i(^')≤min ((),(^')) _i,0^⋆() - _i,0^⋆(^')/2
To finish the proof, we need to bound the quantity: _i,0^⋆() - _i,0^⋆(^').
We have
_i,0^⋆() - _i,0^⋆(^') = - 1/λ∇f_i(_i,0^⋆()) - (^' - 1/λ∇f_i(_i,0^⋆(^')))
= - ^' - 1/λ(∇f_i(_i,0^⋆()) - ∇f_i(_i,0^⋆(^')))
≤ - ^' + L_1/λ_i,0^⋆() - _i,0^⋆(^')
Again, choosing λ≥ 2 L_1, we get:
_i,0^⋆() - _i,0^⋆(^')≤ 2 - ^'.
Plugging the last inequality in Eq (<ref>), we get:
∇ F_i() - ∇ F_i(^')≤min ((),(^')) - ^',
which finishes the proof.
§.§ Convergence of NormalizedGD and ClippedGD
Consider a differentiable function F satisfying the following property for all ,^'∈Θ:
∇ F() - ∇ F(^')≤min ((),(^')) - ^',
and for any function .
Then we have:
F(^') = F() + ∇ F()^⊤ (^' - ) + ∫_0^1[∇ F( + t(^' - )) - ∇ F()]^⊤(^' - )dt
Thus
|F(^') - F() - ∇ F()^⊤ (^' - )| ≤ |∫_0^1[∇ F( + t(^' - )) - ∇ F()]^⊤(^' - )dt|
≤∫_0^1() t ^' - ^2 dt
= ()2^' - ^2.
In particular,
F(^')≤ F() + ∇ F()^⊤ (^' - ) + ()2^' - ^2
Let's consider a general GD update: ^' = - η∇ F(), where η might depend on . For this update, we have
F(^') ≤ F() - η∇ F()^2 + ()η^22∇ F()^2
= F() - η (1 - ()η2) ∇ F()^2
It is easy to see that η = 1/() minimizes the right-hand side of the above inequality, which leads to:
F() - F(^') ≥∇ F()^2/2 ()
One important observation here is that the optimal step size is the inverse of the generalized smoothness, thus if the smoothness depends on the norm of the gradient or other quantities, the optimal step size depends on them too.
Let's now discuss the special case where () = _0 + _1 ∇ F(.
In this case, we have:
F(^t) - F(^t+1) ≥∇ F(^t)^2/2 (_0 + _1 ∇ F(^t).
For a given precision ε, the goal is to bound the number of steps t necessary to reach ∇ F(^t≤ε.
Before reaching the goal above, we naturally have two regimes, a first one for which ∇ F(^t≥_0 / _1 and another one where ε≤∇ F(^t≤_0 / _1.
If we are in the first regime, we have
F(^t) - F(^t+1) ≥(_0 / _1)^2/4_0 = _0/4_1^2.
Whereas if we were in the second regime, then we would have:
F(^t) - F(^t+1) ≥ε^2/4_0 .
All in all, as long as ∇ F(^t≥ε, we have
F(^t) - F(^t+1) ≥min(ε^2/4_0,_0/4_1^2). .
Let K be the number of steps necessary to reach the first index t such that ∇ F(^t≤ε. Assuming that the function F is lower bounded and denoting Δ = F(^0) - inf F, then
Δ≥ K min(ε^2/4_0,_0/4_1^2).
Thus K ≤Δ/min(ε^2/4_0,_0/4_1^2)≤4_0Δ/ε^2 + 4_1^2Δ/_0.
§ DETAILS OF THE EXPERIMENTS
We can equivalently write a quadratic objective that represents the task loss as:
f̂_i(φ) = f_i(φ) = 1/2 φ^⊤ A_i φ + φ^⊤ b_i.
In this case, it is easy to show that:
φ^⋆_{i,ν}(θ) = ( (1+ν) A_i + λ I )^{-1} ( λθ − (1+ν) b_i )
and
φ^⋆_i(θ) = φ^⋆_{i,0}(θ) = ( A_i + λ I )^{-1} ( λθ − b_i ).
The meta-gradient of task i has the expression:
∇F_i(θ) = λ ( A_i + λ I )^{-1} ∇f_i( φ^⋆_i(θ) )
= λ ( A_i + λ I )^{-1} ( A_i ( A_i + λ I )^{-1} ( λθ − b_i ) + b_i ),
and it is not difficult to verify that indeed ∇F_i(θ) = −λ dφ^⋆_{i,ν}(θ)/dν |_{ν=0}.
Because we have the exact expression of the meta-gradient, we can compute it exactly and compare the relative precision of different approximation methods. This is what we did in Figure <ref>.
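A sketch of this check is given below: with the closed forms above, one can measure how the forward estimate approaches the exact meta-gradient as the inner solver (plain gradient descent here) is run longer; dimensions, constants, and the number of inner steps are placeholders.

```python
# Minimal sketch of the synthetic experiment: forward estimate vs exact meta-gradient
# as a function of the number of inner gradient-descent steps.
import numpy as np

d, lam, nu = 5, 2.0, 1e-2
rng = np.random.default_rng(4)
A = np.diag(rng.uniform(0.5, 2.0, d)); b = rng.normal(size=d)   # quadratic task
theta = rng.normal(size=d)

exact_phi = lambda n: np.linalg.solve((1 + n) * A + lam * np.eye(d),
                                      lam * theta - (1 + n) * b)
exact_meta = lam * np.linalg.solve(A + lam * np.eye(d), A @ exact_phi(0.0) + b)

def gd_phi(n, steps, lr=0.05):
    """Approximate phi*_{i,n}(theta) with `steps` inner gradient-descent steps."""
    phi = theta.copy()
    for _ in range(steps):
        phi -= lr * ((1 + n) * (A @ phi + b) + lam * (phi - theta))
    return phi

for steps in (10, 100, 1000):
    est = -lam * (gd_phi(nu, steps) - gd_phi(0.0, steps)) / nu
    print(steps, np.linalg.norm(est - exact_meta) / np.linalg.norm(exact_meta))
```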
|
http://arxiv.org/abs/2409.02413v1 | 20240904033923 | Abstractive Text Summarization: State of the Art, Challenges, and Improvements | [
"Hassan Shakil",
"Ahmad Farooq",
"Jugal Kalita"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
Hassan Shakil (corresponding author), [email protected], Department of Computer Science, University of Colorado, Colorado Springs, CO 80918, USA
Ahmad Farooq, [email protected], Department of Electrical and Computer Engineering, University of Arkansas, Little Rock, AR 72204, USA
Jugal Kalita, [email protected], Department of Computer Science, University of Colorado, Colorado Springs, CO 80918, USA
§ ABSTRACT
Specifically focusing on the landscape of abstractive text summarization, as opposed to extractive techniques, this survey presents a comprehensive overview, delving into state-of-the-art techniques, prevailing challenges, and prospective research directions. We categorize the techniques into traditional sequence-to-sequence models, pre-trained large language models, reinforcement learning, hierarchical methods, and multi-modal summarization. Unlike prior works that did not examine complexities, scalability and comparisons of techniques in detail, this review takes a comprehensive approach encompassing state-of-the-art methods, challenges, solutions, comparisons, limitations and charts out future improvements - providing researchers an extensive overview to advance abstractive summarization research. We provide vital comparison tables across techniques categorized - offering insights into model complexity, scalability and appropriate applications.
The paper highlights challenges such as inadequate meaning representation, factual consistency, controllable text summarization, cross-lingual summarization, and evaluation metrics, among others. Solutions leveraging knowledge incorporation and other innovative strategies are proposed to address these challenges. The paper concludes by highlighting emerging research areas like factual inconsistency, domain-specific, cross-lingual, multilingual, and long-document summarization, as well as handling noisy data. Our objective is to provide researchers and practitioners with a structured overview of the domain, enabling them to better understand the current landscape and identify potential areas for further research and improvement.
Automatic Summarization Abstractive Summarization Extractive Summarization Knowledge Representation Text Generation
§ INTRODUCTION
The need for automatic summarization has increased substantially with the exponential growth of textual data. Automatic summarization generates a concise document that contains key concepts and relevant information from the original document <cit.>. Based on the texts of the generated summaries, we can characterize summarization into two types: extractive and abstractive. In extractive text summarization, the generated summary is made up of content directly extracted from the source text <cit.>, whereas in abstractive text summarization, the concise summary contains the source text's salient ideas in the newly generated text. The generated summary potentially contains different phrases and sentences that are not present in the original text <cit.>.
Although the extractive method has long been used for summary generation, the abstractive approach has recently gained popularity because of its ability to generate new sentences that better capture the main concepts of the original text, mimicking how humans write summaries. This change in emphasis is due to the maturity of extractive summarization techniques and the desire to push boundaries and address capability limitations, which leaves the dynamic and largely uncharted field of abstractive summarization open to further research and advancement <cit.>.
To set the stage for continued progress in this emerging field, it is crucial to outline the characteristics that make an automatic summary not just functional but exceptional. A high-quality automatically generated summary should possess the following properties <cit.>:
* Concise: A high-quality summary should effectively convey the most important information from the original source while keeping the length brief.
* Relevant: The information presented in the summary should be relevant to the main topic.
* Coherent: A good summary should have a clear structure and flow of ideas that make it easy to understand and follow.
* Accurate: The summary's information should be factually correct and should not contain false or misleading information.
* Non-redundant: Summary sentences should not repeat information.
* Readable: The sentences used in the summary should be easily understood by the target audience.
* Fair: The summary should present the information objectively and without bias, maintaining an impartial perspective and avoiding any tonal or ideological leanings.
* Consistent: The summary should be consistent with the original source in terms of style, tone, and format.
* Resilience to input noise: The summary should be accurate and coherent despite noisy or poorly structured input text.
* Multilingual capability: The summary should be able to be generated in various languages to meet the demands of a worldwide audience.
* Adaptability to different output formats: The summary should be adaptable enough to be output in a variety of formats, including bullet points, paragraph summaries, and even visual infographics.
A generated abstractive summary provides an accurate, concise, and easy-to-understand representation of the source text by fulfilling these properties and doing so in its own words.
Abstractive summarization encounters various challenges in its quest to generate new text for a condensed and cohesive summary of the original text. The resulting summary maintains the same style and tone as the original text while ensuring factual and grammatical accuracy, fluency, and coherence. In addition, it can be difficult to summarize text that contains varying opinions and perspectives while still being concise and informative. This survey provides a comprehensive overview of the current state of the art in abstractive summarization, including the latest techniques, recent improvements that have been accomplished, issues, and challenges that need to be addressed. We categorize the state-of-the-art techniques into five groups based on the underlying methodologies and techniques used in text summarization approaches. Each category represents a distinct group of methods that share common features or principles. Traditional Sequence-to-Sequence (Seq2Seq) models <cit.> leverage encoder-decoder architectures to map input texts to summarized outputs. A fruitful approach involves using pre-trained Large Language Models (pre-trained LLMs) <cit.>, which capitalize on large-scale (unsupervised) training to capture general contextual and linguistic information, and then further specialized training to generate effective summaries. Reinforcement Learning (RL) approaches <cit.> also play a significant role, with models learning to optimize summary quality based on human-like preferences. Hierarchical approaches <cit.>, on the other hand, focus on exploiting the inherent structure of input texts to generate more coherent and informative summaries. Finally, Multi-modal Summarization <cit.> methods combine different data modalities, such as text and images, to generate comprehensive and context-rich summaries. By categorizing the state-of-the-art techniques in this way, we hope to provide a clear and structured overview of the current research landscape in abstractive summarization.
The integration of structured knowledge bases has significantly improved the accuracy and coherence of content generated by natural language models <cit.>. Despite this, challenges in capturing intricate meanings remain, leading to innovations like attention mechanisms <cit.> and advanced knowledge representation <cit.>. Traditional metrics often miss the semantic depth and have encouraged the adoption of BERTScore <cit.> and MoverScore <cit.>. Summarizing long documents is addressed using hierarchical models <cit.> and memory-augmented networks <cit.>. Emphasis on factual consistency has spurred strategies like knowledge integration <cit.> and reinforcement learning <cit.>. The emergence of models like CTRL <cit.> highlights the trend toward controllable summarization. With the digital realm diversifying, there is a push for multimodal summarization. As AI becomes more influential, transparency and interpretability are paramount. Overall, abstractive text summarization is continuously evolving, meeting challenges with innovative solutions.
§.§ Prior Surveys on Abstractive Summarization
Several prior surveys have explored the developments in automatic summarization methods. These survey papers offer vital insights into the methods, limitations, and potential future research directions in automatic summarization. A significant portion of these surveys, such as those conducted by Nazari et al. <cit.>, and Moratanch et al. <cit.>, primarily focused on extractive summarization methods. This focus can be attributed to the complexity inherent in abstractive summarization.
In recent years, a growing body of work has concentrated on the state of the art in abstractive summarization. For instance, Suleiman et al. <cit.>, Zhang et al.<cit.>, and Gupta et al. <cit.> have exclusively focused on abstractive text summarization. These studies delve into deep learning-based abstractive summarization methods and compare performance on widely used datasets. Lin et al. <cit.> explored existing neural approaches to abstractive summarization, while Gupta et al. <cit.> characterized abstractive summarization strategies, highlighting the difficulties, tools, benefits, and drawbacks of various approaches. Syed et al. <cit.> evaluated various abstractive summarization strategies, including encoder-decoder, transformer-based, and hybrid models, and also discussed the challenges and future research prospects in the field.
There are studies that cover both extractive and abstractive methods, providing a more comprehensive view of the field. Examples of such works include Gupta et al. <cit.> and Mahajani et al. <cit.>. These studies offer an overview of both families of methods as well as a comparative examination of their effectiveness. Ermakova et al. <cit.> presented a study of the evaluation techniques used in summarization, which is fundamental for understanding the effectiveness of, and potential improvements in, both extractive and abstractive summarization methods. These works act as a bridge between the two summarization approaches, showcasing their individual benefits and potential synergies.
Previous research on automatic summarization has often focused on specific topics, for example, abstractive summarization using neural approaches, deep learning-based models, sequence-to-sequence models, and extractive summarization. Some studies have covered the challenges and limitations of abstractive summarization, while others have focused on how to improve and evaluate it.
Nonetheless, there is a need for a more comprehensive analysis that covers state-of-the-art methods, challenges, solutions, comparisons, limitations, and future improvements - providing a structured overview to advance abstractive summarization research.
Table <ref> gives an overall comparison of our review with various survey papers that are available in the literature. Unlike prior studies, our work examines state-of-the-art methods, challenges, solutions, comparisons, limitations and charts out future improvements - providing researchers an extensive overview to advance abstractive summarization research. The following are the main contributions of this study:
* Overview of the state-of-the-art techniques in abstractive text summarization: This study incorporates information from a plethora of relevant studies to give an overview of the ongoing methodologies utilized in abstractive summarization. This can help researchers and practitioners familiarize themselves quickly with the most recent developments and patterns.
* Comparative Analysis of Models: This study includes a comparative analysis of models in abstractive summarization, focusing on scales, training time, and resource consumption, among other categories. This unique dimension offers practical insights, aiding in the selection of efficient models and enriching the field with valuable, often overlooked information.
* Identification of challenges in abstractive summarization: This study presents current issues and challenges in abstractive summarization by consolidating information from various research papers, for instance, the challenge of generating coherent and grammatically accurate summaries. Our work can help specialists focus on these areas and develop innovative solutions by highlighting such issues.
* Discussion of potential improvements in abstractive summarization: This study explores strategies to enhance abstractive summarization, for example, incorporating knowledge and various techniques to generate factually accurate and coherent summaries. This can aid researchers in finding better approaches to generate high-quality abstractive summarization frameworks.
* Exploration of future research directions: This study highlights emerging frontiers like personalized summarization, long-document summarization, multi-document summarization, multilingual capabilities, and improved evaluation metrics along with leveraging recent advances in Large Language Models (LLMs). It also highlights future directions to overcome limitations like inadequate representation of meaning, maintaining factual consistency, explainability and interpretability, ethical considerations and bias, and further related concepts to help advance the field.
* Holistic survey of abstractive summarization: Unlike prior works focused solely on extractive summarization, this review takes a comprehensive approach, encompassing state-of-the-art abstractive summarization methods, along with comparisons and analyses of complexities, challenges, and solutions. It provides researchers with a structured overview to advance abstractive summarization research.
§.§ Organization
In this paper, we present a comprehensive survey of abstractive summarization, encompassing the state of the art, challenges, and advancements. Section II delves into automatic summarization, detailing its various types along with examples. Section III reviews the literature on the state of the art in abstractive summarization. Section IV explores model scalability and computational complexity in abstractive summarization. Section V addresses issues, challenges, and future directions for abstractive summarization. The concluding remarks are offered in Section VI.
§ AUTOMATIC SUMMMARIZATION
Automatic summarization is a technique used in Natural Language Processing (NLP) to generate a condensed version of a longer text document while retaining the most important information <cit.>. The aim of automatic summarization is to reduce the length of the text without losing the essence of the source content. The primary purpose of summarization is to help people get a quick understanding of the main topics and ideas covered in a large text document without having to read the entire document. As mentioned at the beginning of the paper, there are two main types of automatic summarization: extractive and abstractive summarization <cit.>. A fundamental comparison between extractive and abstractive summarization techniques is presented in Table <ref>.
§.§ Extractive Summarization
Extractive summarization entails selecting the most salient sentences or phrases from the source text and fusing them into a summary <cit.>. The chosen sentences or phrases typically include important details pertaining to the subject being discussed. Extractive summarization refrains from any form of paraphrasing or rewriting of the source text. Instead, it highlights and consolidates the text's most crucial information in a literal way. For instance, Google News utilizes an extractive summarization tool that sifts through news articles and generates a summary by pulling out the most relevant sentences <cit.>. Table <ref> presents an extractive summary of a source text, which was obtained from the United Nations Climate Action website[https://www.un.org/en/climatechange/cop26]. This summary was generated by the ChatGPT-4 model, a product of OpenAI[https://openai.com/gpt-4].
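As a toy illustration of the extractive idea (not any particular production system), the sketch below scores sentences by normalized word frequency and returns the top-scoring sentences verbatim:

```python
# Minimal sketch of a frequency-based extractive summarizer.
import re
from collections import Counter

def extractive_summary(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    top = max(freq.values())
    scores = []
    for idx, sent in enumerate(sentences):
        tokens = re.findall(r"[a-z']+", sent.lower())
        score = sum(freq[t] / top for t in tokens) / (len(tokens) + 1e-9)
        scores.append((score, idx, sent))
    # Keep the k highest-scoring sentences, restored to their original order.
    chosen = sorted(sorted(scores, reverse=True)[:k], key=lambda x: x[1])
    return " ".join(sent for _, _, sent in chosen)
```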
§.§ Abstractive Summarization
Abstractive summarization involves creating a summary that is not just a selection of sentences or phrases from the source text, but is composed of newly generated sentences that capture the essence of the original text <cit.>. The model generates new sentences that maintain the original text's meaning but are usually shorter and more to the point in order to achieve the abstraction of ideas. Abstractive summarization is more complex than extractive summarization because it necessitates that the NLP model comprehend the text's meaning and generate new sentences. The New York Times summary generator, which generates summaries that are very similar to those written by humans, is a great example of abstractive summarization. Table <ref> showcases an abstractive summary of the source text, sourced from the United Nations Climate Action website. This summary was synthesized using the ChatGPT-4 model, mentioned earlier.
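In practice, abstractive summaries are commonly produced with pre-trained sequence-to-sequence models. The sketch below shows one typical way to do this, assuming the Hugging Face transformers package and the public facebook/bart-large-cnn checkpoint are available in the environment; the input text and length limits are placeholders.

```python
# Minimal sketch: abstractive summarization with a pre-trained seq2seq model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
source_text = "Long input document to be summarized goes here ..."  # placeholder
summary = summarizer(source_text, max_length=80, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```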
§ STATE OF THE ART IN ABSTRACTIVE TEXT SUMMARIZATION
We present a comprehensive taxonomy of state-of-the-art abstractive text summarization based on the underlying methods and structures found in the literature; see Figure <ref>. At a fundamental conceptual level, summarization consists of transforming a long sequence of sentences or paragraphs into a concise sequence of sentences. Thus, all machine learning models that learn to perform summarization can be characterized as Sequence-to-Sequence (Seq2Seq) models. However, Seq2Seq models encompass a wide variety of approaches, one of which we call Traditional Seq2Seq models in the taxonomy. We classify state-of-the-art abstractive text summarization into five distinct categories: Traditional Sequence-to-Sequence (Seq2Seq) based Models <cit.>, Pre-trained Large Language Models <cit.>, Reinforcement Learning (RL) Approaches <cit.>, Hierarchical Approaches <cit.>, and Multi-modal Summarization <cit.>. Although distinguishing between approaches and systems in abstractive text summarization can be difficult, our taxonomy strives to provide a clear and well-defined division. In addition, most state-of-the-art methods possess subclasses, as depicted in Figure <ref>. This organized perspective on state-of-the-art abstractive text summarization methods enables researchers and practitioners to better understand the current landscape of the field and identify potential areas for further research and improvement.
§.§ Traditional Sequence-to-Sequence (Seq2Seq) Models
Traditional Seq2Seq models are a class of neural network architectures developed to map input sequences to output sequences. They are frequently employed in natural language processing tasks such as summarization and machine translation. The fundamental concept is to first use an encoder to turn the input sequence into a fixed-length vector representation and then to use a decoder to generate the output sequence from the vector representation <cit.>; a minimal sketch of this encoder-decoder setup is given below. The sequence diagram in Figure <ref> illustrates the flow of traditional Seq2Seq models, emphasizing their significance and applications in the realm of abstractive text summarization. To provide a more comprehensive understanding of Seq2Seq models in the context of abstractive text summarization, we have further divided them into four sub-classes: Basic Seq2Seq Models, Attention Mechanisms, Copy Mechanisms, and Pointer Networks. This classification makes it possible to give a clearer analysis of the various methods used in Seq2Seq models for abstractive text summarization. Table <ref> shows a comparison of various sub-classes of Traditional Sequence-to-Sequence (Seq2Seq) Models.
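As a concrete illustration of the encoder-decoder idea, the following minimal PyTorch sketch compresses the source sequence into a fixed-length state with a GRU encoder and lets a GRU decoder generate output tokens from that state. Vocabulary construction, batching, and training are omitted, and the layer sizes are arbitrary illustrative choices rather than values used by any cited system.

# Minimal Seq2Seq sketch in PyTorch: a GRU encoder compresses the source into a
# fixed-length vector, and a GRU decoder generates the summary token by token.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.embed(src_ids))   # fixed-length context vector
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)                       # logits over the output vocabulary

model = Seq2Seq(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 40)), torch.randint(0, 10000, (2, 12)))
print(logits.shape)   # (batch, target length, vocabulary size)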
§.§.§ Basic Seq2Seq models
Basic Seq2Seq models, first introduced by Sutskever et al. <cit.>, utilize the encoder-decoder architecture to generate summaries. Although machine translation was the model's primary application, the Seq2Seq framework has since been used for abstractive text summarization. A convolutional neural network (CNN)-based Seq2Seq model for natural language phrases was proposed by Hu et al. <cit.>. Although the paper's primary focus was on matching phrases, the basic Seq2Seq model presented can be adapted for abstractive text summarization. However, these basic models face limitations in capturing long-range dependencies, leading to the development of attention mechanisms <cit.>.
§.§.§ Attention Mechanisms
Attention mechanisms enable models to selectively focus on relevant parts of the input during the decoding phase, improving the quality of the generated summaries. A novel neural attention model for abstractive sentence summarization was proposed by Rush et al. <cit.>. The study focused on single-sentence summarization, i.e., creating a condensed version of an input sentence while maintaining its core meaning. This research is considered pioneering since it was among the first to use attention mechanisms for abstractive text summarization. The proposed model was based on an encoder-decoder framework. The encoder is a CNN that processes the input sentence, while the decoder is a feed-forward Neural Network Language Model (NNLM) that generates the summary. The decoder has an attention mechanism that allows it to selectively concentrate on various sections of the input text while generating the summary. As a result, the model may learn which words or phrases of the input text are crucial for generating the summary. The authors tested two variations of the attention model: a “hard" attention model and a “soft" attention model. The soft attention model computes a weighted average of the input words, with the weights representing the relevance of each word to the summary, whereas the hard attention model stochastically chooses a restricted group of input words to be included in the summary. The soft attention model performed better in the experiments because it makes it easier for gradients to flow during training; a minimal sketch of this computation is given after this paragraph. The proposed model was evaluated on the Gigaword dataset <cit.>, a large-scale corpus of news articles with associated headlines. The results showed that the attention-based model outperformed a number of baselines, including the basic Seq2Seq model and a state-of-the-art extractive summarization system. The experiments also showed that the model could generate concise and coherent summaries that capture the core idea of the input text.
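The following short sketch shows the soft attention computation described above with illustrative dot-product scores: the decoder state is compared against every encoder state, the scores are normalized with a softmax, and the resulting weights form a weighted average of the encoder states (the context vector).

# Sketch of "soft" attention as a weighted average of encoder states, where the
# weights reflect each source position's relevance to the current decoder state.
import torch
import torch.nn.functional as F

def soft_attention(decoder_state, encoder_states):
    # decoder_state: (batch, hid); encoder_states: (batch, src_len, hid)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # dot-product scores
    weights = F.softmax(scores, dim=1)                                         # attention distribution
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # weighted average
    return context, weights

context, weights = soft_attention(torch.randn(2, 256), torch.randn(2, 30, 256))
print(context.shape, weights.shape)   # (2, 256) and (2, 30)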
An attentive encoder-decoder architecture for abstractive sentence summarization was proposed by Chopra et al. <cit.>, focusing on generating abstractive summaries for single sentences. By using Recurrent Neural Networks (RNNs) as the foundation for both the encoder and decoder components in the Seq2Seq model, the paper extended the earlier work by Rush et al. <cit.>. The bidirectional RNN encoder in the attentive encoder-decoder architecture analyzes the input sentence, and the RNN decoder with an attention mechanism generates the summary. The forward and backward contexts of the input sentence are both captured by the bidirectional RNN encoder, leading to a more thorough grasp of the sentence structure. While generating each word in the summary, the model may dynamically focus on various portions of the input sentence because of the attention mechanism in the decoder. This selective focus allows the model to generate coherent and meaningful summaries. The Gigaword dataset was used to evaluate the performance of the proposed model in comparison to various baselines, such as the basic Seq2Seq model <cit.> and the attention-based model <cit.>. The results demonstrated that the attentive RNN-based encoder-decoder design generated more accurate and informative abstractive summaries and outperformed the baselines.
Nallapati et al. <cit.> investigated various techniques to improve the basic Seq2Seq model to advance abstractive text summarization. The authors addressed multiple aspects of the model, such as the encoder, attention mechanisms, and the decoder, to improve the model's overall performance in generating abstractive summaries. They proposed several modifications to the encoder, including the incorporation of a bidirectional RNN encoder to capture both forward and backward contexts in the input text. They also used a hybrid word-character encoder to handle out-of-vocabulary (OOV) words and improve generalization. The authors explored both local and global strategies for the attention mechanism that allowed the model to concentrate on relevant input while generating the summary. Additionally, the paper introduced a switch mechanism for the decoder that enabled the model to choose between generating words based on the context vector and directly copying words from the input text. This method strengthened the model's capacity to generate coherent summaries and helped tackle the issue of OOV words. On the Gigaword and DUC-2004 <cit.> datasets, the authors evaluated the performance of the model in comparison to a number of benchmark models. The results demonstrated that the suggested modifications improved the functionality of the basic Seq2Seq model, leading to more precise and insightful abstractive summaries.
A graph-based attention mechanism for abstractive document summarization was introduced by Tan et al. <cit.> that takes into account the relationships between sentences in a document. Traditional Seq2Seq models with attention mechanisms frequently concentrate on the words within a sentence but are unable to recognize the inter-sentence dependencies. The authors aimed to overcome this restriction by embedding the structural information of the document into the attention mechanism. The suggested approach first creates a sentence graph that represents the document, where the nodes are the sentences and the edges are the relationships between them. The attention mechanism then works on this graph, permitting the model to focus on both local and global sentence-level information. The model consists of a bidirectional RNN encoder to capture a representation of the input sentence and a decoder with a graph-based attention mechanism for generating the summary. On the CNN/Daily Mail and DUC-2004 datasets, the authors assessed the performance of the model in comparison with various state-of-the-art abstractive and extractive summarization models. The results showed that by precisely capturing the connections between sentences in a document, the graph-based attention mechanism improved the quality of generated summaries.
§.§.§ Copy Mechanism
Gu et al. <cit.> introduced the copy mechanism, a novel approach for abstractive text summarization that addresses the challenge of handling rare and OOV words. OOV words, which are often included in real-world text data, make it difficult for conventional Seq2Seq models to generate accurate summaries. The authors provided a technique to address this problem that enables the neural network to selectively copy words from the input text directly into the generated summary. The copying mechanism was incorporated into the existing Seq2Seq framework, specifically into the attention mechanism. By removing the difficulties brought on by rare and OOV words, this method improves the model's capacity to generate more precise and coherent summaries. According to the experimental findings, adding a copy mechanism to Seq2Seq models considerably improves their summarization performance compared to conventional Seq2Seq models without one.
For abstractive text summarization, Song et al. <cit.> suggested a unique structure-infused copy mechanism that uses both syntactic and semantic information from the input text to help the copying process. The primary motivation behind this approach is to improve the coherence and accuracy of the generated summaries by leveraging the structural information inherent in the input text. The structure-infused copy mechanism incorporates semantic information such as named entities and key phrases along with a graph-based representation of the input text's syntactic structure, notably the dependency parse tree. The model can more accurately detect salient information and thus generate summaries that more accurately capture the major concepts of the original text by including these structural features in the copying mechanism. The authors employed a multi-task learning framework that jointly learns to generate abstractive summaries and forecast the structural characteristics of the original text. This method enables the model to capture the interdependencies between the goal of generating summaries and the task of structure prediction, generating summaries that are more accurate and coherent. On the CNN/Daily Mail dataset, the authors evaluated their structure-infused copy mechanism and contrasted its effectiveness with various state-of-the-art summarization models. The experimental findings showed that, in terms of ROUGE <cit.> scores, their strategy outperforms the baseline models.
§.§.§ Pointer Networks
Vinyals et al. <cit.> proposed Pointer Networks, an enhancement of Seq2Seq models in which the attention distribution is used to point at positions of the input sequence. Building on this idea, the Pointer-Generator Network is useful for summarization tasks because it can either generate words from a preset vocabulary or copy them directly from the source, effectively handling rare or OOV words and leading to better abstractive summarization. Additionally, a coverage mechanism is integrated to track the attention history, encouraging diverse attention and reducing redundancy in the summaries. The model's effectiveness was evaluated on the CNN/Daily Mail dataset, with notable improvements in ROUGE scores. A sketch of the generate-versus-copy interpolation is given below.
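The following sketch illustrates the core pointer-generator computation under simplified assumptions: a generation probability interpolates between the decoder's vocabulary distribution and a copy distribution obtained by scattering the attention weights onto the source token ids, with extra vocabulary slots reserved for OOV source words. Tensor shapes and the example values are illustrative only.

# Sketch of the pointer-generator idea: p_gen mixes the vocabulary distribution
# with a copy distribution built from the attention weights over the source tokens.
import torch

def pointer_generator_dist(vocab_dist, attn_weights, src_ids, p_gen, extended_vocab_size):
    # vocab_dist: (batch, vocab); attn_weights, src_ids: (batch, src_len)
    dist = torch.zeros(vocab_dist.size(0), extended_vocab_size)
    dist[:, : vocab_dist.size(1)] = p_gen * vocab_dist             # generate from the vocabulary
    dist.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)    # copy from the source
    return dist

vocab = torch.softmax(torch.randn(2, 50), dim=1)
attn = torch.softmax(torch.randn(2, 7), dim=1)
src = torch.randint(0, 55, (2, 7))
print(pointer_generator_dist(vocab, attn, src, p_gen=0.7, extended_vocab_size=55).shape)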
SummaRuNNer, introduced by Nallapati et al. <cit.>, uses Pointer Networks for abstractive tasks in addition to being designed for extractive summarization. It extracts the top-ranked sentences for the summary by evaluating the input documents at the word and sentence levels using a hierarchical RNN. By allowing direct copying from the source text in its abstractive variant, the pointer mechanism overcomes the limitations of conventional sequence-to-sequence models. The pointer mechanism and RNN-based hierarchy work together to improve the model's summarization performance. The models were evaluated on the DUC 2002[https://www-nlpir.nist.gov/projects/duc/guidelines/2002.html] dataset using several variants of the ROUGE metric and compared with state-of-the-art models.
Kryscinski et al. <cit.> integrated Pointer Networks with reinforcement learning for abstractive text summarization. Their model is trained using both supervised and reinforcement learning, and it is based on the architecture of the Seq2Seq model with attention. The reinforcement component encourages conformity with human preferences. In order to ensure accuracy and proper handling of OOV words, the Pointer Networks incorporate direct copying from the input. Their approach performed better in terms of ROUGE scores when evaluated on the CNN/Daily Mail dataset.
Chen et al. <cit.> combined reinforcement learning with Pointer Networks for abstractive summarization. Their model uses reinforcement learning to optimize the process of selecting and rewriting sentences from the input. Using the CNN/Daily Mail and DUC-2002 datasets, the model was assessed using METEOR <cit.>, standard ROUGE metrics, and human evaluations of readability and relevance.
Using a unified model, Hsu et al. <cit.> presented extractive and abstractive summarization techniques. They introduced an inconsistency loss function to ensure alignment between the extractive and abstractive outputs. The abstractive component employs pointer networks for direct copying from the input, improving the quality of the summaries. Evaluated on the CNN/Daily Mail dataset, the model demonstrated strong performance through ROUGE scores and human evaluations on Amazon Mechanical Turk[https://www.mturk.com/], assessing informativity, conciseness, and readability.
§.§ Pre-trained Large Language Models
Pre-trained Large Language Models are large-scale neural networks that have learned contextualized representations of words, phrases, and sentences through training on enormous volumes of text data. The sequence diagram illustrated in Figure <ref> shows the interaction between a researcher and a pre-trained Large Language Model (LLM), showcasing the model's ability to process queries and generate contextually relevant responses based on its extensive training. In a variety of natural language processing tasks, including abstractive text summarization, these models achieve state-of-the-art results. To provide a more comprehensive understanding of Pre-trained Large Language models in the context of abstractive text summarization, we have further divided them into four sub-classes based on their use, as shown in Figure <ref>: BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pre-trained Transformer), T5 (Text-to-Text Transfer Transformer), and BART (Bidirectional and Auto-Regressive Transformers).
The classification makes it possible to give a clearer analysis of the various methods used in Pre-trained Large Language models for abstractive text summarization. Table <ref> shows a comparison of various Pre-trained Large Language Models (the selected versions have the highest number of parameters available).
§.§.§ BERT
BERT (Bidirectional Encoder Representations from Transformers), introduced by Devlin et al. <cit.>, is a pre-trained language model that has achieved state-of-the-art results in various natural language processing tasks. A key aspect of BERT's training process is the use of a Masked Language Model (MLM) objective. In this procedure, a predetermined portion of the sentence's input tokens is chosen to be masked or hidden from the model during training. Using the context that the other (non-masked words in the sentence) provide, the model is then trained to predict the original value of the masked words. BERT differs significantly from traditional unidirectional language models because it has a bidirectional understanding of context. BERT's strong contextual understanding and transfer learning capabilities give it a strong foundation for adapting to the abstractive text summarization task even though it was not designed specifically for it. In abstractive text summarization, BERT is used as an encoder to extract contextual information from the input text. The model is able to comprehend the intricate relationships between words and their meanings, which is essential to generate summaries that are accurate and coherent. This is made achievable by pre-training bidirectional transformers using the MLM training regimen. Its ability to capture both local contexts and long-range dependencies gives BERT a benefit over traditional sequence-to-sequence models.
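The masked-language-model objective that underlies BERT can be illustrated in a few lines. The sketch below uses the Hugging Face transformers library and the bert-base-uncased checkpoint (both assumed to be available) to predict a masked token from its bidirectional context; it illustrates the pre-training objective itself rather than a summarization system, and the example sentence is invented.

# Illustration of BERT's masked language model objective with the transformers library.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The summary should preserve the [MASK] ideas of the document."):
    print(prediction["token_str"], round(prediction["score"], 3))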
Rothe et al. <cit.> proposed a two-step strategy for abstractive text summarization by harnessing BERT's capacity to understand contextual details. Leveraging the contextual embeddings of a pre-trained BERT, they applied extractive techniques on a large corpus, allowing the model to grasp the structure and semantics of the source text and extract salient information. This initial extractive step resulted in more accurate and coherent intermediate summaries. The results of this step are then fed into the abstractive summarization model, which generates summaries that convey the main ideas while maintaining textual coherence. This strategy integrates the benefits of extractive and abstractive summarization, producing high-quality summaries with improved readability and informativeness.
Dong et al. <cit.> presented a study on a unified language model, UniLM, for Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks. Utilizing shared knowledge between the two types of tasks is made possible by this unified approach, potentially enhancing performance in a variety of applications. Similar to BERT, UniLM is pre-trained using a variety of training objectives and is based on the transformer architecture. These objectives include masked language modeling (as in BERT), unidirectional (left-to-right or right-to-left) language modeling, and a novel Seq2Seq language modeling objective. UniLM can learn to comprehend and generate text by combining these objectives, making it suitable for NLU and NLG tasks. The CNN/Daily Mail dataset was one of the benchmark datasets used by the authors to evaluate UniLM for abstractive text summarization tasks. The experiment's findings demonstrated that the unified pre-training method significantly boosts performance when compared to other cutting-edge approaches.
Song et al. <cit.> presented the Masked Sequence to Sequence Pre-training (MASS) method, a useful technique to enhance the ability of models to generate abstractive summaries. The MASS technique, inspired by BERT's masked language model objective, applies masking within an encoder-decoder (Seq2Seq) framework, which allows the model to learn contextual information from both input and output sequences. The authors developed the MASS approach primarily for tasks that require language generation, like abstractive summarization. By pre-training the model with the masked Seq2Seq method, they hoped to increase its ability to generate coherent, semantically meaningful summaries. In this technique, the input sequence is partially masked, and the model is trained to predict the masked tokens based on the unmasked ones. With the help of this pre-training technique, the model is better able to capture the context and structure of the input text, which is essential for generating accurate abstractive summaries.
§.§.§ GPT
Radford et al. <cit.> introduced the concept of Generative Pre-trained Transformers (GPT), a series of powerful language models designed for natural language understanding and generation. Unlike BERT, which is trained in a masked language model fashion where certain words in a sentence are hidden and predicted, GPT is trained using a generative approach. Specifically, it predicts the next word in a sequence given all previous words. Based on the transformer architecture, GPT and its successor, GPT-2, use unsupervised learning with a generative pre-training phase and fine-tuning on particular tasks. GPT-3, released in 2020, further scaled up the GPT approach to achieve strong performance on natural language tasks <cit.>. OpenAI released GPT-3.5[https://en.wikipedia.org/wiki/GPT-3#GPT-3.5] in 2022 by increasing the model size and training it on a larger dataset. Most recently, OpenAI released GPT-4[https://openai.com/research/gpt-4] in 2023, which offers a significant improvement over GPT-3.5 and also places greater emphasis on safety and ethics. Although the GPT models have shown excellent potential in a number of natural language processing tasks, including summarization, these works do not specifically focus on abstractive text summarization. Researchers have improved GPT models for abstractive text summarization tasks by leveraging their generative nature, demonstrating their ability to generate coherent and contextually relevant summaries; a simple prompting sketch is shown below.
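The generative, next-token view of GPT can be illustrated with a rough zero-shot prompting example: GPT-2 is asked to continue an article after a "TL;DR:" cue, a trick popularized by the GPT-2 work. The sketch assumes the transformers library and the gpt2 checkpoint; the article text is invented, and fine-tuned GPT models produce far better summaries than this crude zero-shot continuation.

# Zero-shot "TL;DR:" summarization sketch with GPT-2 (illustrative only).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
article = "Climate negotiators agreed on new emission targets after two weeks of talks..."
prompt = article + "\nTL;DR:"
output = generator(prompt, max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"][len(prompt):])   # the continuation serves as the summary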
Zhu et al. <cit.> fine-tuned GPT-2 for abstractive text summarization in Chinese, a language that had not been extensively studied in relation to GPT-2's performance. They used a dataset of Chinese news articles and their summaries, leveraging the self-attention mechanisms and token-based representations of the transformer architecture to modify the GPT-2 model. Its performance was compared to baseline models, including Seq2Seq and Pointer-Generator Networks on a Chinese news summarization task. The authors found that the improved GPT-2 model surpassed the baselines in terms of ROUGE scores. This research underscores the importance of fine-tuning for language and domain-specific tasks, advancing the understanding of GPT-2's capacity for abstractive text summarization in non-English languages and enhancing its application in multilingual contexts.
The effectiveness of utilizing BERT and GPT-2 models for abstractive text summarization in the area of COVID-19 medical research papers was examined by Kieuvongngam et al. <cit.>, who used a two-stage approach to generate abstractive summaries. First, they employed a BERT-based extractive summarization model to select the most relevant sentences from a research article. In the second stage, the authors used the extracted sentences as input for the GPT-2 model, which generated abstractive summaries. The authors sought to generate summaries that are more informative and coherent by combining the benefits of extractive summarization provided by BERT and abstractive summarization provided by GPT-2. By contrasting the automatically generated summaries with human-written summaries, the researchers evaluated their method using a collection of research papers from the COVID-19 dataset. To evaluate the effectiveness of the generated summaries, they employed ROUGE metrics and human evaluation. The results show the potential of integrating BERT and GPT-2, as their two-stage strategy surpassed other cutting-edge techniques for automatic text summarization.
Alexandr et al. <cit.> fine-tuned GPT-3 for abstractive text summarization in the Russian language. They created a corpus of Russian news stories and accompanying summaries using a saliency-based summarization technique, which was subsequently used as input for GPT-3. The fine-tuned GPT-3 model's performance was compared to baseline models such as BERT, GPT-2, and the original GPT-3. They found that, as per the ROUGE metric, the improved GPT-3 model outperformed the baselines, emphasizing the significance of considering the unique challenges of different languages when adapting state-of-the-art models.
Bhaskar et al. <cit.> showcased GPT-3.5's ability in opinion summarization by summarizing a large collection of user reviews using methods like recursive summarization, supervised clustering, and extraction. They tested on two datasets: SPACE <cit.> (hotel reviews) and FewSum <cit.> (Amazon and Yelp reviews), with GPT-3.5 receiving high marks in human evaluations. However, standard metrics like ROUGE were found lacking in capturing summary nuances. The GPT-3.5 abstractive summaries were fluent but sometimes deviated from the original content or were over-generalized. To counter this, new metrics for faithfulness, factuality, and genericity were introduced. The study also examined the effects of pre-summarization and found that while GPT-3.5 was effective for shorter inputs, its accuracy decreased for longer reviews. Techniques like QFSumm <cit.> helped in brevity but made summaries more generic. The team proposed topic clustering to enhance relevance, albeit with minor trade-offs.
Due to the recent public availability of GPT-3.5/4 and their productization as ChatGPT[https://chat.openai.com/], their application in text summarization has become extensive. However, a significant limitation is that LLMs inherently cannot verify the accuracy of the information they generate. Addressing this, Chen et al. <cit.> introduced a novel approach to abstractive summarization that aims to overcome the truth comprehension challenges of LLMs. This method integrates extracted knowledge graph data and structured semantics to guide summarization. Building on BART, a leading sequence-to-sequence pre-trained LLM, the study developed multi-source transformer modules as encoders, adept at handling both textual and graphical data. Decoding leverages this enriched encoding, aiming to improve summary quality. For evaluation, the Wiki-Sum dataset[https://paperswithcode.com/dataset/wikisum] is utilized. When compared to baseline models, results underscore the effectiveness of this approach in generating concise and relevant summaries.
Zhang et al. <cit.> introduced a novel framework named SummIt, which is grounded on LLMs, particularly ChatGPT. Unlike conventional abstractive summarization techniques, SummIt adopts an iterative approach, refining summaries based on self-evaluation and feedback, a process reminiscent of human drafting and revising techniques. Notably, this framework circumvents the need for supervised training or reinforcement learning feedback. An added innovation is the integration of knowledge and topic extractors, aiming to augment the faithfulness and controllability of the abstracted summaries. Evaluative studies on benchmark datasets indicate superior performance of this iterative method over conventional one-shot LLM systems in abstractive tasks. However, human evaluations have pointed out a potential bias in the model, favoring its internal evaluation criteria over human judgment. This limitation suggests a potential avenue for improvement, possibly through incorporating human-in-the-loop feedback. Such insights are pivotal for future research focused on enhancing the efficacy of abstractive summarization using LLMs.
§.§.§ T5
Text-to-Text Transfer Transformer (T5) is a unified text-to-text transformer model that was developed by Raffel et al. <cit.> to handle a variety of NLP tasks, including abstractive text summarization. Like BERT and GPT, the T5 model is based on the transformer architecture and aims to simplify the process of adapting pre-trained models to various NLP tasks by casting all tasks as text-to-text problems. T5's training protocol differs from that of BERT and GPT. The authors trained T5 in two steps. First, a de-noising autoencoder framework is used to pre-train the model using a large unsupervised text corpus. Reconstructing the original text from corrupted input is a pre-training task that helps the model learn the structure, context, and semantics of natural language. Second, by transforming each task into a text-to-text format, the pre-trained model is fine-tuned on task-specific supervised datasets, such as summarization datasets. The authors utilized two NLP benchmarks: CNN/Daily Mail and XSum <cit.>, to demonstrate T5's efficacy. On these benchmarks, T5 achieved state-of-the-art results, showcasing its capabilities in the abstractive summarization domain. The study also investigated the impact of model size, pre-training data, and fine-tuning strategies on transfer learning performance, offering insightful information about the T5 model's scalability and adaptability. A short usage sketch of T5's text-to-text interface for summarization is given below.
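As a usage illustration of the text-to-text interface, the sketch below prepends the "summarize:" task prefix to the input and decodes with beam search. The transformers library and the t5-small checkpoint are assumed; larger checkpoints yield noticeably better summaries, and the example text is invented.

# T5 summarization via the text-to-text interface: the task is expressed as a prefix.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = "summarize: " + "Delegates from nearly 200 countries met to finalize the climate agreement..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
ids = model.generate(**inputs, max_length=60, num_beams=4, early_stopping=True)
print(tokenizer.decode(ids[0], skip_special_tokens=True))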
The effectiveness of the T5 model for abstractive text summarization in the Turkish language is examined by Ay et al. <cit.>, who fine-tuned the T5 model on a dataset of news articles and corresponding summaries. They customized the model to generate abstractive summaries in Turkish by making use of the T5 architecture's capabilities. The researchers evaluated the fine-tuned T5 model and compared its performance with baseline models, such as BERT, GPT-2, and PEGASUS <cit.>. The findings showed that the fine-tuned T5 model outperforms the baseline models and achieves high ROUGE scores, proving its efficiency in Turkish text summarization. This study contributed to the understanding of state-of-the-art models like T5 for abstractive text summarization in languages other than English.
Garg et al. <cit.> compared the performance of T5 and BART alongside a custom-built encoder-decoder model and another model developed through transfer learning from T5, using a dataset comprising over 80,000 news articles and their corresponding summaries. The findings from the study corroborate their hypothesis, demonstrating that T5 indeed outperforms BART, the transfer learning model, and the custom encoder-decoder model. This research enhanced the comprehension of the effective application of pre-trained transformer models, such as T5, for both abstractive and extractive text summarization tasks, particularly within the domain of news articles.
Guo et al. <cit.> introduced LongT5, a model that analyzes the effects of simultaneously adjusting input length and model size. LongT5 integrates attention ideas from long-input transformers (Extended Transformer Construction <cit.>) and adopts pre-training strategies from summarization pre-training (Pre-training with Extracted Gap-sentences for Abstractive Summarization Sequence-to-sequence - PEGASUS) into the scalable T5 architecture. LongT5's main contributions feature a novel scalable attention mechanism known as Transient Global (TGlobal) attention. TGlobal attention emulates Extended Transformer Construction's local/global attention mechanism without the need for extra inputs. LongT5 also adopts a PEGASUS-style Principle Sentences Generation pre-training objective. This new attention mechanism is more effective and flexible because it can be used without requiring alterations to the model inputs. On a number of summarization and question-answering tasks, LongT5 outperformed the original T5 models and achieved cutting-edge results. To promote additional study and development, the authors have made their architecture, training code, and pre-trained model checkpoints publicly available.
Elmadany et al. <cit.> showcased the effectiveness of T5-style models for the Arabic language. The authors presented three robust Arabic-specific T5-style models and evaluated their performance using a novel benchmark for ARabic language GENeration (ARGEN), which covers a range of tasks, including abstractive text summarization. While there are numerous tasks in the ARGEN benchmark, the emphasis on abstractive text summarization demonstrates the model's capacity to generate concise and coherent summaries of Arabic text sources. On all ARGEN tasks, including abstractive summarization, the authors discovered that their models significantly outperformed the multilingual T5 model (mT5), setting new state-of-the-art results. The ability of T5-style models to cope effectively with languages with various dialects and intricate structures is demonstrated by the effectiveness of the AraT5 model in abstractive text summarization for the Arabic language. The research contributed to the development of more powerful and efficient models for abstractive text summarization in Arabic and other languages, highlighting the importance of creating language-specific models and benchmarks in natural language processing tasks, such as abstractive text summarization.
Zolotareva et al. <cit.> compared the performance of the T5 model with attention-based Seq2Seq models for abstractive text summarization. The authors concluded that the T5 model is effective in abstractive document summarization. They suggested that future research should explore the application of the Transformer method for multi-document summarization and test the T5 approach on other benchmark datasets.
§.§.§ BART
Lewis et al. <cit.> presented BART (Bidirectional and Auto-Regressive Transformers), a denoising Seq2Seq pre-training approach suitable for tasks such as natural language generation, translation, comprehension, and abstractive text summarization. Using the transformer architecture, BART is trained by reconstructing original texts from their corrupted versions. This corruption is introduced through strategies like token masking, token deletion, and text shuffling. Unlike T5, which views every NLP task as a text-to-text problem and pre-trains with a “fill-in-the-blank" task, BART adopts a denoising objective, aiming to restore corrupted text. This approach equips BART to handle tasks that demand understanding and reconstructing sentence structures. After this pre-training phase, BART can be fine-tuned on task-specific datasets, demonstrating its prowess in domains like abstractive text summarization. Notably, on the CNN/Daily Mail and XSum summarization benchmarks, BART surpassed prior models, underscoring its efficacy in the abstractive summarization domain.
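As a usage illustration, the following minimal example runs a BART checkpoint fine-tuned on CNN/Daily Mail through the Hugging Face summarization pipeline. The transformers library, the facebook/bart-large-cnn weights, and the example article text are assumptions made for illustration rather than part of the cited work.

# Minimal BART summarization example with a CNN/Daily Mail fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = "Scientists reported that global renewable capacity grew at a record pace last year..."
summary = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])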
Venkataramana et al. <cit.> addressed the problem of abstractive text summarization and aimed to generate a concise and fluent summary of a longer document that preserves its meaning and salient points. The authors used BART, which is fine-tuned on various summarization datasets to adapt to different domains and styles of input texts. They also introduced an attention mechanism in BART’s layers, which allows the model to focus on the most relevant parts of the input text and avoid repetition and redundancy in the output summary. The authors evaluated BART on several benchmark datasets and compared it with other state-of-the-art models such as RoBERTa <cit.>, T5, and BERT in terms of ROUGE scores, human ratings, and qualitative analysis. The paper demonstrated that BART is a powerful and versatile model for abstractive text summarization tasks, capable of generating high-quality summaries that are coherent, informative, and faithful to the original text.
Yadav et al. <cit.> discussed enhancing abstractive summarization by fine-tuning the BART architecture, which resulted in a marked improvement in overall summarization quality. Notably, the adoption of Sortish sampling made training smoother and faster, while the incorporation of weight decay augmented performance by introducing model regularization. BartTokenizerFast, employed for tokenization, further refined the input data quality. Comparative analyses with prior models underscore the efficacy of the proposed optimization strategy, with evaluations rooted in the ROUGE score.
La Quatra et al. <cit.> introduced BART-IT, a Seq2Seq model grounded in the BART architecture, meticulously tailored for the Italian language. BART-IT is pre-trained on an expansive corpus of Italian texts, enabling it to capture language-specific nuances, and subsequently fine-tuned on benchmark datasets for abstractive summarization. The paper also discussed the ethical considerations surrounding abstractive summarization models, emphasizing the importance of responsible application.
Vivek et al. <cit.> presented SumBART, an improved BART model for abstractive text summarization. Addressing BART's factual inconsistencies, the authors introduced three modifications into SumBART, resulting in better ROUGE scores and more accurate summaries. Evaluations on the CNN/Daily Mail and XSum datasets showed SumBART's summaries were more human-like than BART's.
§.§ Reinforcement Learning (RL) Approaches
Reinforcement learning (RL) is a type of machine learning where an agent interacts with the environment and learns to make optimal decisions by receiving rewards or penalties for its actions <cit.>. RL methods can be used for abstractive text summarization, where the model learns to generate concise summaries of documents by being rewarded for coherence, accuracy, and brevity <cit.>. The sequence diagram in Figure <ref> illustrates a researcher utilizing reinforcement learning approaches in abstractive text summarization, where the model interacts with an environment to receive rewards or penalties, guiding it to generate optimal summaries. To offer an in-depth examination of the diverse techniques deployed in RL for abstractive text summarization, we have categorized them into two sub-classes: Policy Optimization Methods and Reinforcement Learning with Semantic and Transfer Learning Techniques. Policy Optimization Methods focus on acquiring the most effective strategy that directs the agent in making optimal choices while interacting with its environment, usually with the purpose of generating concise, accurate, and coherent summaries. Reinforcement Learning with Semantic and Transfer Learning Techniques combines RL with semantic analysis and transfer learning, giving the model the ability to understand contextual significance and apply knowledge from one domain to another in addition to making optimal decisions. Table <ref> shows a comparison of Reinforcement Learning (RL) Approaches for Abstractive Text Summarization.
§.§.§ Policy Optimization Methods
Li et al. <cit.> presented an actor-critic <cit.> RL training framework for enhancing neural abstractive summarization. The authors proposed a maximum likelihood estimator (MLE) and a global summary quality estimator in the critic part, and an attention-based Seq2Seq network in the actor component. The main contribution was an alternating training strategy to jointly train the actor and critic models. The actor generated summaries using the attention-based Seq2Seq network, and the critic assessed their quality using Critic I (MLE) and Critic II (a global summary quality estimator). Using the ROUGE metrics, the paper evaluated the proposed framework on three benchmark datasets: Gigaword, DUC-2004, and LCSTS <cit.>, and achieved state-of-the-art results.
Paulus et al. <cit.> presented an abstractive text summarization model that used RL to enhance performance and readability, especially for long input sequences. The model employed an intra-attention mechanism to avoid redundancy and utilized a hybrid learning objective combining maximum likelihood loss with RL. The model additionally included a softmax layer for token generation or a pointer mechanism to copy rare or unseen input sequence words. The approach optimized the model for the ROUGE scores without compromising clarity and showed cutting-edge results on the CNN/Daily Mail dataset, improving summary readability. The paper also highlights the limitations of relying solely on ROUGE scores for evaluation.
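To make this kind of hybrid objective concrete, the following sketch combines a self-critical policy-gradient term with a maximum-likelihood term, using ROUGE-L as the reward, in the spirit of the mixed losses described above. It is a simplified illustration rather than a faithful reproduction of any single paper: the rouge_score package is assumed to be available, the decoding routines that produce the sampled and greedy summaries are left abstract, and the mixing weight gamma is an illustrative hyperparameter.

# Sketch of a self-critical policy-gradient loss mixed with the MLE loss.
# Reward: ROUGE-L of the sampled summary minus ROUGE-L of the greedy baseline summary.
import torch
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_reward(candidate: str, reference: str) -> float:
    return scorer.score(reference, candidate)["rougeL"].fmeasure

def mixed_loss(sample_logprob, sampled_summary, greedy_summary, reference, mle_loss, gamma=0.98):
    # sample_logprob: summed log-probability of the sampled summary (scalar tensor)
    reward = rouge_reward(sampled_summary, reference) - rouge_reward(greedy_summary, reference)
    rl_loss = -reward * sample_logprob          # encourage samples that beat the greedy baseline
    return gamma * rl_loss + (1.0 - gamma) * mle_loss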
Chen et al. <cit.> proposed a hybrid extractive-abstractive model that adopts a coarse-to-fine approach inspired by humans, first selecting critical sentences using an extractor agent and then abstractly rewriting them through an abstractor network. The model also used an actor-critic policy gradient with sentence-level metric rewards to bridge the non-differentiable computation between the two neural networks while maintaining linguistic fluency. This integration was achieved using policy gradient techniques and RL. The system was optimized to reduce redundancy. On the CNN/Daily Mail dataset and the DUC2002 dataset used only for testing, the model generated state-of-the-art outcomes with significantly higher scores for abstractiveness. The model significantly improved training and decoding speed over earlier models and yielded summaries with a higher proportion of novel n-grams, a measure of greater abstractiveness. The model outperformed earlier available extractive and abstractive summarization algorithms in terms of ROUGE scores, human evaluation, and abstractiveness scores.
Celikyilmaz et al. <cit.> generated abstractive summaries for lengthy documents by utilizing deep communicating agents within an encoder-decoder architecture. Multiple working agents, each responsible for a subsection of the input text, collaborate to complete the encoding operation. For the purpose of generating a targeted and comprehensive summary, these encoders are coupled with a single decoder that has been end-to-end trained using RL. In comparison to a single encoder or multiple non-communicating encoders, the results showed that multiple communicating encoders generate summaries of higher quality. Maximum likelihood (MLE), semantic cohesion, and RL losses were optimized during training. Intermediate rewards, based on differential ROUGE measures, were incorporated to encourage unique sentence creation. Experiments conducted on the CNN/DailyMail and New York Times (NYT) <cit.> datasets showed better ROUGE scores compared to MLE-trained baselines. Human evaluations favored the summaries generated by deep communicating agents.
Hyun et al. <cit.> introduced an unsupervised method for generating abstractive text summaries of varying lengths using RL. Their approach, Multi-Summary based Reinforcement Learning with Pre-training (MSRP), tackled the issue of generating summaries of arbitrary lengths while ensuring content consistency and fluency. MSRP's reward function comprises three components: content preservation, fluency, and length. The content preservation reward was determined using sentence similarity through SentenceBERT embeddings <cit.>. Fluency was gauged using a pre-trained GPT-2 model, and the length reward was based on the comparison of the generated summary's length to the target length. They employed a training method using Proximal Policy Optimization (PPO) <cit.> with an adaptive clipping algorithm, enabling quicker convergence and stability. The T5 transformer model served as the policy for training MSRP. When evaluated on the Gigaword dataset <cit.>, MSRP consistently outperformed other unsupervised summarization models in terms of ROUGE scores. Despite using a larger model and an autoregressive approach, MSRP's inference time remained competitive due to its reward-based training and beam search implementation during summary creation.
§.§.§ Reinforcement Learning with Semantic and Transfer Learning Techniques
Jang et al. <cit.> addressed the limitations of traditional ROUGE-L based methods, which often generate repetitive summaries, by introducing two new RL reward functions: ROUGE-SIM and ROUGE-WMD. These functions integrate Cosine Similarity and Word Mover's Distance <cit.> respectively, with the ROUGE-L score, ensuring summaries are semantically close to the source text while promoting novelty and reducing direct copying. To generate accurate, innovative, and grammatically sound summaries, a decoding method based on these reward functions was proposed. The Gigaword dataset was employed to evaluate the new methods. The authors used three metrics: ROUGE-PACKAGE, novel n-grams, and Grammarly [https://www.grammarly.com] to measure summary quality, originality, and grammatical integrity. The proposed models surpassed various baselines, offering consistent performance, higher semantic value, improved abstractiveness, and reduced grammatical errors.
Keneshloo et al. <cit.> introduced an RL framework for abstractive text summarization to address exposure bias and improve generalization to unfamiliar datasets. Exposure bias means that during the training phase, the model is provided with an accurate input at every decoder step, while in the testing phase, it must generate the next token based on its own output <cit.>. They used a Pointer-Generator <cit.> model combined with an RL objective and a transfer reinforcement learning approach. Traditional methods relying on cross-entropy (CE) loss often suffer from exposure bias and poor generalization. In contrast, the proposed RL objective minimizes the negative expected reward, enabling the model to train based on its own output, in line with metrics like ROUGE. This self-critical policy gradient approach emphasizes better-performing samples during training. However, the model faced challenges in transfer learning due to training on the source dataset distribution. To counter this, they introduced a transfer RL approach using a shared encoder and a trade-off parameter to balance source and target dataset training. The authors used four datasets: Newsroom <cit.>, CNN/Daily Mail, DUC 2003[https://www-nlpir.nist.gov/projects/duc/guidelines/2003.html], and DUC 2004[https://www-nlpir.nist.gov/projects/duc/guidelines/2004.html], for training and evaluating text summarization models. They evaluated the models using ROUGE-1, ROUGE-2, and ROUGE-L F1 scores. The proposed model achieved the best performance, outperforming baselines and state-of-the-art methods in generalizing to unseen datasets.
Wang et al. <cit.> presented a topic-aware Convolutional Seq2Seq (ConvS2S) model for abstractive text summarization, enhanced with RL. They introduced a new topic-aware attention mechanism that incorporates high-level contextual information, improving summarization. Their main contribution was using Self-Critical Sequence Training (SCST) <cit.> to address exposure bias, which arises when models train on ground-truth data distribution rather than their own. This bias often degrades test performance due to accumulated errors and inflexibility in generating summaries. To combat these issues, the authors applied SCST, a policy gradient algorithm that maximizes the non-differentiable ROUGE metric. This method allowed the model to learn its own distribution and optimize the evaluation measure. Using SCST, the model was incentivized to generate sequences with high ROUGE scores, mitigating exposure bias and enhancing test results. Three datasets, Gigaword, DUC-2004 and LCSTS, were used for evaluation. The proposed model achieved the best performance, getting the highest ROUGE-1, ROUGE-2 and ROUGE-L scores on Gigaword and LCSTS. It also had the top ROUGE-1 and ROUGE-L scores on DUC-2004.
§.§ Hierarchical Approaches
Hierarchical approaches involve breaking down tasks into sub-problems and combining solutions to these sub-problems. Abstractive text summarization poses several challenges, such as handling long and complex documents, preserving the semantic and syntactic structure of the summary, and avoiding repetition and redundancy. To address these challenges, researchers have proposed hierarchical approaches <cit.> that exploit the hierarchical structure of natural language and model the source text at different levels of granularity. The sequence diagram in Figure <ref> depicts the process of a researcher employing hierarchical models in abstractive text summarization, where the document undergoes multiple hierarchical encoding and decoding layers to generate a summarized output. To provide a more comprehensive understanding of Hierarchical Approaches in the context of abstractive text summarization, we have further divided them into three sub-classes: hierarchical LSTM-based approaches, hierarchical graph and network-based methods, and hierarchical attention and human-like methods. This classification makes it possible to better analyze the various techniques utilized in Hierarchical Approaches for abstractive text summarization. Table <ref> presents a comparison of Hierarchical Models/Frameworks for Abstractive Text Summarization.
§.§.§ Hierarchical LSTM-based Approaches
Nguyen et al. <cit.> presented a method for abstractive text summarization that used a hierarchical Long Short-Term Memory (LSTM) encoder-decoder model. The key contribution is the development of a two-level LSTM architecture, which successfully captures the hierarchical structure of documents and improves sentence and paragraph representation and understanding. The hierarchical encoder consists of two LSTM layers that collaborate to process and represent textual information. The first LSTM layer operates at the token level and captures the relationships among words within a sentence. This layer processes the individual words in the context of the words around them to generate sentence-level representations. The second LSTM layer, which acts at the document level, uses these sentence-level representations as inputs. This layer effectively captures the relationships among sentences within a document by processing the sentence-level representations sequentially to obtain document-level representations. The ROUGE-1, ROUGE-2, and ROUGE-L scores showed that the hierarchical LSTM encoder-decoder model outperformed strong baseline models on two benchmark datasets: Gigaword and Amazon reviews from Stanford Network Analysis Project (SNAP) [https://snap.stanford.edu/data/web-Amazon.html].
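The two-level encoding idea can be sketched as follows: a word-level bidirectional LSTM produces a representation for each sentence, and a sentence-level LSTM composes these into document-level representations. This is a simplified PyTorch illustration with mean-pooled sentence vectors and arbitrary layer sizes, not the exact architecture of the cited work.

# Sketch of a two-level hierarchical encoder: word-level LSTM -> sentence vectors,
# sentence-level LSTM -> document-level sentence representations.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, doc_ids):
        # doc_ids: (num_sentences, words_per_sentence) for a single document
        word_out, _ = self.word_lstm(self.embed(doc_ids))
        sent_repr = word_out.mean(dim=1)                      # one vector per sentence
        doc_out, _ = self.sent_lstm(sent_repr.unsqueeze(0))   # sentence-level context
        return doc_out.squeeze(0)                             # (num_sentences, 2 * hid_dim)

encoder = HierarchicalEncoder(vocab_size=10000)
doc = torch.randint(0, 10000, (5, 20))   # 5 sentences of 20 tokens each
print(encoder(doc).shape)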
Song et al. <cit.> proposed a novel LSTM-CNN based abstractive text summarization framework called ATSDL that generated summaries by exploring fine-grained semantic phrases rather than just sentences. The ATSDL framework had two main stages – phrase extraction from source sentences using a technique called Multiple Order Semantic Parsing (MOSP), and summary generation using a LSTM-CNN model that learned phrase collocations. MOSP extracted subject, relational and object phrases by scattering sentences into fragments and restructuring them into a binary tree, providing richer semantics than keywords. The LSTM-CNN model took phrase sequences as input, learning phrase collocations and capturing both semantics and syntactic structure, overcoming limitations of extractive and abstractive models. The model had a convolutional phrase encoder and recurrent decoder that could generate or copy phrases, handling the problem of rare words. Refining and combining similar phrases before LSTM-CNN training reduced phrase redundancy and sparsity, improving learning. Experiments on CNN/DailyMail datasets showed that the model outperformed state-of-the-art abstractive and extractive models in ROUGE scores. The generated summaries were composed of natural sentences meeting syntactic requirements, while capturing semantics – demonstrating benefits of the hierarchical phrase-based approach.
§.§.§ Hierarchical Attention and Human-like Approaches
Zhang et al. <cit.> employed the CopyNet <cit.> mechanism and the hierarchical attention model for abstractive summarization. The authors developed an approach based on handling rare phrases and deep semantic understanding. This Seq2Seq model is enhanced with a multi-step attention mechanism that enables it to generate summaries that are more relevant and coherent and reflect the compositional structure of the document. The model can also deal with OOV and rare words by copying them directly from the source text using the CopyNet approach. The authors used two datasets for the experimental evaluation: the Gigaword Corpus and the Large-Scale Chinese Short Text Summarization (LCSTS) Corpus. The experiments were conducted on three different models, namely Words-lvt2k-1sent (basic attention encoder-decoder model), Words-lvt2k-2sent (trained on two sentences at the source), and Words-lvt2k-2sent-hieratt (improves performance by learning the relative importance of the first two sentences). ROUGE-1, ROUGE-2, and ROUGE-L were used for evaluation. Results showed that the proposed hierarchical attention model demonstrated effectiveness and efficiency in generating summaries for different types of text, outperforming the baseline models on the Gigaword Corpus. However, on the LCSTS Corpus, the hierarchical attention model did not outperform the baseline attentional decoder.
Yang et al. <cit.> developed a method for abstractive text summarization named Hierarchical Human-like Abstractive Text Summarization (HH-ATS). By incorporating three key elements, namely a knowledge-based hierarchical attention module, a multitask module, and a Dual Discriminative Generative Adversarial Networks (DD-GAN) framework, the model emulates the process of human reading cognition. These components correspond to the three phases of the human reading cognition process: rough reading, active reading, and post-editing. The knowledge-based hierarchical attention module in the HH-ATS model captures the global and local structures of the source document to focus on critical information. By simultaneously training the model on text categorization and syntax annotation tasks, the multitask module enhances the model’s performance. The DD-GAN framework refines the summary quality by introducing a generative model and two discriminative models that evaluate the informativeness and fluency of generated summaries. The authors experimented on the CNN/Daily Mail and Gigaword datasets. ROUGE-1, ROUGE-2, and ROUGE-L were used for evaluation. A human evaluation was also conducted, in which the summaries generated by the HH-ATS model were compared to the best-performing baseline method. The results demonstrated that the strategy generated summaries that were more informative and fluent. The performance of the HH-ATS model was examined using an ablation study, which revealed that all three of the components were essential to the model’s overall success. Additionally, the convergence analysis showed that the HH-ATS model converges more rapidly and stably than the baseline approaches, demonstrating the efficacy of the training strategy for the model.
§.§.§ Hierarchical Graph and Network-based Approaches
Using a hierarchical adaptive segmental network learning framework, Zhao et al. <cit.> developed a method for addressing the issue of abstractive meeting summarization. The two key elements of the method are adaptive segmental encoder networks for learning the semantic representation of the conversation contents and reinforced decoder networks for generating the natural language summaries. By adaptively segmenting the input text depending on conversation segmentation cues, the adaptive segmental encoder networks are created to take advantage of the structure of meeting conversations. A conversation segmentation network that recognizes segment boundaries and provides this information to the encoder is used to identify these cues. This approach enables the encoder to learn the semantic representation of meeting conversations while considering their inherent structure, which is crucial for generating accurate summaries. The reinforced decoder network is based on segment-level LSTM networks. The authors used the AMI meeting corpus <cit.>, containing 142 meeting records, and evaluated the results using ROUGE metrics. The authors identified that the maximum likelihood estimation used for training the decoder network can lead to suboptimal performance, and used an RL framework to train the decoder network. The methods, HAS-ML (trained with maximum likelihood estimation) and HAS-RL (trained with reinforcement learning), achieve high performance, indicating the effectiveness of the hierarchical adaptive segmental network learning framework. The HAS-RL method performs better than HAS-ML, which shows the effectiveness of RL in generating summaries in the problem of abstractive meeting summarization.
Qiu et al. <cit.> presented a Hierarchical Graph Neural Network (HierGNN) approach to improve abstractive text summarization by exploiting the hierarchical structure of input documents using three steps: 1) learning a hierarchical document structure by a latent structure tree learned via sparse matrix-tree computation; 2) propagating sentence information over this structure via a message-passing node propagation mechanism called Layer-Independent Reasoning (LIR) to identify salient information; and 3) concentrating the decoder on salient information via a graph-selection attention mechanism (GSA). HierGNN is used in two architectures: HierGNN-Pointer-Generator Network (HierGNN-PGN) and HierGNN-BART. To improve sentence representations, both models include a two-layer HierGNN on top of the sentence encoder. Experiments show that the HierGNN model outperforms strong LLMs like BART in average ROUGE-1/2/L for CNN/DailyMail and XSum. Furthermore, human evaluations show that the summaries generated by HierGNN are more relevant and less redundant than the baselines. The authors also identified that HierGNN more effectively analyzes long inputs and synthesizes summaries by fusing many source sentences instead of compressing a single source sentence. However, the HierGNN model is based on an inverted pyramid writing style, which may not be applicable to other sorts of input texts. In addition, the model’s complexity rises as a result of the HierGNN encoder.
§.§ Multi-modal Summarization
In many real-world scenarios, text documents are often accompanied by other modalities, such as images or videos, that provide complementary or supplementary information. Multi-modal summarization aims to leverage the information from multiple modalities to generate a coherent and comprehensive summary that reflects the salient aspects of the entirety of the input. The sequence diagram in Figure <ref> depicts a researcher employing multi-modal approaches in abstractive text summarization, integrating both textual and non-textual data sources to generate comprehensive and context-rich summaries. To provide a more comprehensive understanding of multi-modal approaches in the context of abstractive text summarization, we have further divided them into two sub-classes: Text-Image Summarization and Text-Video Summarization. Table <ref> presents a comparison of Multi-modal Models/Frameworks for Abstractive Text Summarization.
§.§.§ Text-Image Summarization
Chen et al. <cit.> developed a multi-modal attentional mechanism that pays attention to original sentences, images, and captions within a hierarchical encoder-decoder architecture. They extended the DailyMail dataset and introduced the E-DailyMail corpus by extracting images and captions from HTML-formatted texts. During the encoding phase, a hierarchical bi-directional RNN using GRU is employed to encode the sentences and the text documents, while an RNN and a CNN are used to encode the image set. In the decoding stage, text and image encodings are combined as the initial state, and an attentional hierarchical decoder is used to generate the text summary while focusing on the original phrases, photos, and captions. To generate summaries, the authors propose a multi-modal beam search method. Beam scores are based on the bigram overlaps of the generated sentences and captions. Additionally, they created an OOV replacement mechanism, enhancing the effectiveness of summarization. The main evaluation metrics used were ROUGE scores. When compared to existing neural abstractive models, extractive models, and models without multi-modal attention, the model that attends to images performs significantly better. Furthermore, the experiments demonstrate that their model is capable of producing informative summaries of images.
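To make the idea of attending jointly over sentence and image encodings more concrete, the following sketch implements a single decoding step with additive attention computed separately over text and image features, after which the two context vectors are concatenated. It is an illustrative toy module in PyTorch, not the exact architecture of Chen et al.; all dimensions, names, and the fusion-by-concatenation choice are assumptions made for the example.

import torch
import torch.nn as nn

class MultiModalAttention(nn.Module):
    """Toy additive attention over text and image encodings (illustrative only)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.text_score = nn.Linear(2 * hidden, 1)  # scores [decoder state; text encoding]
        self.img_score = nn.Linear(2 * hidden, 1)   # scores [decoder state; image encoding]

    def forward(self, dec_state, text_enc, img_enc):
        # dec_state: (B, H); text_enc: (B, S, H); img_enc: (B, N, H)
        q_text = dec_state.unsqueeze(1).expand_as(text_enc)
        q_img = dec_state.unsqueeze(1).expand_as(img_enc)
        a_text = torch.softmax(self.text_score(torch.cat([q_text, text_enc], -1)).squeeze(-1), -1)
        a_img = torch.softmax(self.img_score(torch.cat([q_img, img_enc], -1)).squeeze(-1), -1)
        ctx_text = torch.bmm(a_text.unsqueeze(1), text_enc).squeeze(1)  # (B, H) text context
        ctx_img = torch.bmm(a_img.unsqueeze(1), img_enc).squeeze(1)     # (B, H) image context
        return torch.cat([ctx_text, ctx_img], dim=-1)                   # fused context (B, 2H)

attn = MultiModalAttention(hidden=256)
fused = attn(torch.randn(4, 256), torch.randn(4, 30, 256), torch.randn(4, 9, 256))
print(fused.shape)  # torch.Size([4, 512])

In a full model, the fused context would condition the next decoder step and the output word distribution.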
Li et al. <cit.> introduced a multi-modal sentence summarization task using a sentence-image pair. The authors constructed a multi-modal sentence summarization corpus that consists of 66,000 summary triples (sentence, image, summary). To effectively integrate visual elements, the authors developed image filters and proposed a modality-based attention mechanism to focus on image patches and text units separately. They designed a Seq2Seq model with hierarchical attention mechanisms that concentrated on both image and text details. Visual elements were incorporated into the model to initiate the target language decoder, and image filter modules were used to reduce visual noises. ROUGE metrics were used for evaluation. They discovered that initializing the decoder using images improved performance, indicating that image features effectively captured key points of source texts. The multi-modal model was found to be more abstractive than the text-only model. Moreover, the multi-modal coverage technique effectively reduced word repetition.
Zhu et al. <cit.> introduced a novel task in multimodal summarization whose objective is to choose the image most pertinent to the abstractive summary from the multimodal input. The authors developed this task in response to their finding that multimodal outputs considerably increase user satisfaction, by up to 12.4% in terms of informativeness. The authors constructed a multi-modal dataset [http://www.nlpr.ia.ac.cn/cip/jjzhang.htm] from the DailyMail corpus. They developed a multimodal attention model, which can concurrently generate text summaries and choose the input’s most suitable image. The Multimodal Automatic Evaluation (MMAE) method considers both intra-modality salience and inter-modality relevance in order to evaluate the multimodal outputs. A text encoder, an image encoder, a multimodal attention layer, and a summary decoder are the four main parts of the model. During the decoding process, the multimodal attention layer combines textual and visual information. A unidirectional LSTM used in the summary decoder generates the text summary and selects the most relevant image based on the visual coverage vector. Their multimodal attention model achieved better MMAE scores compared to extractive methods.
For Multimodal Summarization with Multimodal Output (MSMO), Zhu et al. <cit.> used a multimodal objective function that combines the Cross-Entropy loss (CE) for image selection and the Negative Log-Likelihood loss (NLL) for summary generation. The MSMO dataset, which consists of online news stories combined with several image-caption pairs and multi-sentence summaries, is used by the authors in their experiments. ROUGE-ranking and Order-ranking are two additional techniques the authors presented for converting a text reference into a multimodal reference. They also developed an evaluation metric built on joint multimodal representation, which projects the model output and multimodal reference into a joint semantic space. Experiment results show that the model performs well in terms of both automatic and manual evaluation metrics, and has a better correlation with human judgments.
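The joint objective described above can be written compactly: the summary decoder is trained with token-level negative log-likelihood while image selection is trained with cross-entropy, and the two terms are combined with a weighting factor. The snippet below is a schematic rendering of that combination in PyTorch; the weight lam, the padding id, and the tensor shapes are placeholder assumptions rather than the authors' exact formulation.

import torch
import torch.nn.functional as F

def msmo_loss(token_logits, target_tokens, image_logits, target_image, lam=0.5, pad_id=0):
    """Schematic joint loss: NLL for summary generation plus CE for image selection.
    token_logits: (B, T, V) decoder outputs; target_tokens: (B, T) gold token ids;
    image_logits: (B, N) relevance scores over N candidate images; target_image: (B,) gold index.
    """
    nll = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                          target_tokens.reshape(-1), ignore_index=pad_id)
    ce_img = F.cross_entropy(image_logits, target_image)
    return nll + lam * ce_img

loss = msmo_loss(torch.randn(2, 7, 100), torch.randint(1, 100, (2, 7)),
                 torch.randn(2, 5), torch.tensor([3, 0]))
print(float(loss))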
Li et al. <cit.> generated abstractive summaries for Chinese e-commerce products that incorporate both visual and textual information. The aspect-aware multimodal summarization model efficiently combines visual data from product photos and highlights the most crucial features of a product that are valuable for potential consumers. The model, based on pointer-generator networks, integrates visual data using three different methods: initializing the encoder with the global visual feature, initializing the decoder with the global visual feature, and producing context representations with the local visual features. The authors created CEPSUM, a large-scale dataset for summarizing Chinese e-commerce products, with over 1.4 million summaries of products that were manually written together with comprehensive product data that includes a picture, a title, and additional textual descriptions. The Aspect Segmentation algorithm, a bootstrapping method that automatically expands aspect keywords, is used to mine aspect keywords. A number of text-based extractive and abstractive summarization techniques, LexRank <cit.>, Seq2seq, Pointer-Generator, and MASS <cit.>, were compared to the model using the ROUGE score and manual evaluations, with the finding that the aspect-aware multimodal pointer-generator model outperformed the compared techniques.
§.§.§ Text-Video Summarization
Liu et al. <cit.> introduced two models for multi-modal abstractive text summarization of open-domain videos: Multistage Fusion Network with Forget Gate (MFFG) and Single-Stage Fusion Network with Forget Gate (SFFG). MFFG integrated multi-source modalities such as video and text by utilizing a multistage fusion schema and a fusion forget gate module, enhancing the model's ability to generate coherent summaries. SFFG, a simplified version of MFFG, reduced model complexity by sharing features across stages and used the source input text to improve summary representation. The authors used ROUGE (1,2,L), BLEU(1,2,3,4) <cit.>, CIDEr <cit.>, and METEOR metrics to evaluate model performance. Results revealed that MFFG and SFFG outperformed other methods in terms of Informativeness and Fluency. Specifically, SFFG excelled with ground truth transcript data in the How2 and How2-300h datasets <cit.>, while MFFG demonstrated superior anti-noise capabilities using automatic speech recognition-output transcript data.
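The fusion forget gate used in MFFG and SFFG can be pictured as a learned sigmoid gate that decides how much of the video stream to let into the fused representation, suppressing noisy frames. The module below is a minimal PyTorch illustration of that gating idea under the simplifying assumption that text and video features are already aligned to the same length and dimension; it is not the published network.

import torch
import torch.nn as nn

class FusionForgetGate(nn.Module):
    """Gate video features against text features before fusing them (illustrative)."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, text_feat, video_feat):
        # text_feat, video_feat: (B, T, D) aligned feature sequences
        g = torch.sigmoid(self.gate(torch.cat([text_feat, video_feat], dim=-1)))
        gated_video = g * video_feat  # the forget gate attenuates noisy video content
        return torch.tanh(self.proj(torch.cat([text_feat, gated_video], dim=-1)))

fuse = FusionForgetGate(dim=256)
print(fuse(torch.randn(2, 40, 256), torch.randn(2, 40, 256)).shape)  # torch.Size([2, 40, 256])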
Using data from three different modalities—audio, text, and video—Khullar et al. <cit.> introduced a Seq2Seq model for multimodal abstractive text summarization. Earlier studies concentrated on textual and visual modes, overlooking the potential of audio data to help generate more accurate summaries. A Trimodal Hierarchical Attention layer is used to fully leverage all three modalities. The model’s capacity to generate coherent and comprehensive summaries is improved by this layer, which enables the model to selectively attend to the most pertinent information from each modality. This specialized attention layer integrates audio, text, and video data from independent encoders. The output derived from the attention layer is utilized as the input for the decoder, which generates the abstractive summary. The authors used ROUGE scores and a Content F1 metric <cit.> for evaluation and showed that their model outperforms baselines. The study demonstrates that the inclusion of audio modality and the attention layer's ability to effectively extract relevant information from multiple modalities significantly improves the performance of multimodal abstractive text summarization.
Raji et al. <cit.> developed an LSTM attention encoder-decoder model to generate abstract summaries from text, image, and video data. To process image data and extract text, the system leverages the Tesseract-OCR engine <cit.>. It extracts audio from video files before converting them to text. It employs an RNN encoder-decoder architecture with an attention mechanism and LSTM cells to improve its handling of long-range dependencies. By extracting multiple encoded vectors from the source data and generating abstractive summaries from them, this methodology enhances the summarization procedure. For training and validation, the authors use the Amazon fine food dataset [https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews], which consists of reviews of foods, with the description serving as the input variable and the title serving as the target variable. The authors calculated F1, precision, and recall scores for Rouge-1, Rouge-2, and Rouge-L. This approach performs well in a variety of applications, such as generating abstracts for lengthy documents or research papers and enhancing the overall accessibility and comprehension of information in various formats.
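The preprocessing side of such a system is easy to prototype: text is pulled from images with the Tesseract engine and then handed to whatever summarizer is available. The snippet below sketches that flow with the pytesseract wrapper and a placeholder summarize() function; the file name is hypothetical, and audio transcription, model choice, and error handling are omitted.

from PIL import Image
import pytesseract  # requires the Tesseract-OCR engine to be installed on the system

def text_from_image(path: str) -> str:
    """Extract raw text from an image file via Tesseract OCR."""
    return pytesseract.image_to_string(Image.open(path))

def summarize(text: str) -> str:
    """Placeholder: plug in any sequence-to-sequence summarizer here."""
    raise NotImplementedError

if __name__ == "__main__":
    document = text_from_image("scanned_review.png")  # hypothetical input file
    print(summarize(document))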
Fu et al. <cit.> developed a model that incorporates four modules: feature extraction, alignment, fusion, and bi-stream summarizing. They utilized a bi-hop attention mechanism to align features and an advanced late fusion method to integrate multi-modal data. A bi-stream summarization technique enabled simultaneous summarizing of text and video. The authors introduced the MM-AVS dataset, derived from Daily Mail and CNN websites, containing articles, videos, and reference summaries. They evaluated using ROUGE scores for text summarization and cosine image similarity for video summarization. The proposed model surpassed existing methods in multi-modal summarization tasks, emphasizing the effectiveness of the bi-hop attention, improved late fusion, and bi-stream summarization approaches.
Li et al. <cit.> introduced Video-based Multimodal Summarization with Multimodal Output (VMSMO) to automatically select a video cover frame and generate a text summary from multimedia news articles. Their model, the Dual-Interaction-based Multimodal Summarizer (DIMS), comprises three primary components: Dual Interaction Module, Multi-Generator, and Feature Encoder. This model conducts deep interactions between video segments and articles and subsequently generates both a written summary and a video cover decision. For this work, the authors compiled the first large-scale VMSMO dataset from Weibo[https://us.weibo.com/index], China’s largest social network website, including videos with cover images and articles with text summaries. The model was evaluated using standard Rouge metrics for text summary and mean average precision (MAP) <cit.> and recall at position (Rn@k) <cit.> for video cover frame selection. Rn@k measures whether the positive sample is ranked in the first k positions of n candidates. The DIMS model's performance was benchmarked against several baselines. Experimental results, based on both automatic evaluations and human judgments, indicated that DIMS outperformed state-of-the-art methods.
§ MODEL SCALABITY AND COMPUTATIONAL COMPLEXITY IN ABSTRACTIVE SUMMARIZATION
Table <ref> provides a concise comparative analysis of various models in abstractive text summarization, highlighting their parameters, computational demands, and performance metrics. This comparison offers valuable insights into the strengths and limitations of each model type.
§.§ Traditional Seq2Seq based Models
* Model Scales and Parameters: Traditional Seq2Seq models, which primarily use LSTM or GRU units, typically contain parameters ranging from tens to hundreds of millions. These models marked the early advancements in neural network-based text summarization by learning to map input sequences to output sequences <cit.>. A minimal sketch of such an encoder-decoder is given after this list.
* Computational Complexity and Resource Consumption: Seq2Seq models require a substantial amount of computational power for both training and inference, particularly when processing longer text sequences. They are less resource-intensive compared to LLMs but offer limited capabilities in handling complex contextual relationships in text <cit.>.
* Comparative Analysis of Performance and Resource Efficiency: While offering moderate performance in summarization tasks, these models are generally faster and more resource-efficient, making them suitable for scenarios with computational constraints <cit.>.
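As a point of reference for the bullet points above, the following is a minimal GRU encoder-decoder in PyTorch of the kind these parameter counts refer to. It omits attention, beam search, and training code, and the vocabulary and hidden sizes are arbitrary illustrative values.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal GRU encoder-decoder with teacher forcing (illustrative, no attention)."""
    def __init__(self, vocab=5000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        # src: (B, S) source token ids; tgt_in: (B, T) shifted gold summary ids
        _, h = self.encoder(self.embed(src))      # h: (1, B, H) fixed-size source summary
        dec_out, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec_out)                  # (B, T, vocab) next-token logits

model = TinySeq2Seq()
logits = model(torch.randint(0, 5000, (4, 60)), torch.randint(0, 5000, (4, 15)))
print(logits.shape)  # torch.Size([4, 15, 5000])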
§.§ Pre-trained Large Language Models
* Model Scales and Parameters: Pre-trained models such as BERT, GPT-3, and GPT-4 have significantly redefined the scale of model parameters in the field. BERT's base model, for instance, contains around 110 million parameters, setting a new standard for deep learning models. Following this, GPT-3 pushed the boundaries further with an unprecedented 175 billion parameters. Most recently, GPT-4 is reported to have surpassed its predecessors with roughly 1760 billion parameters, reflecting the continuous and rapid growth in the size and complexity of these models <cit.>. A brief usage sketch of a pre-trained summarizer is given after this list.
* Computational Complexity and Resource Consumption: The training of these models requires extensive computational resources, including high-performance GPUs and significant amounts of memory. Inference, while providing high-quality outputs, is resource-intensive, particularly for real-time applications <cit.>.
* Comparative Analysis of Performance and Resource Efficiency: These models deliver state-of-the-art results in summarization, achieving high levels of accuracy, coherence, and fluency. However, their training and operational costs are substantial, making them less accessible for smaller-scale applications <cit.>.
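As referenced in the list above, running one of these pre-trained summarizers takes only a few lines once the model weights are available. The sketch below uses the Hugging Face pipeline API with a BART checkpoint fine-tuned on CNN/DailyMail; the checkpoint name, the toy article, and the generation lengths are illustrative choices, and inference cost grows quickly with input length.

from transformers import pipeline  # assumes the transformers package and a model download step

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = ("Scientists report that a new battery chemistry retains 90 percent of its capacity "
           "after 10,000 charge cycles, far exceeding what current lithium-ion cells achieve. "
           "The team now plans independent replication and large-scale manufacturing trials.")
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])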
§.§ Reinforcement Learning (RL) Approaches
* Model Scales and Parameters: RL-based models in text summarization vary in size, but their complexity lies more in the training process, which involves learning to optimize a reward function, often based on human feedback or specific performance metrics <cit.>.
* Computational Complexity and Resource Consumption: RL models are computationally demanding, primarily during the training phase where they require numerous iterations to converge to an optimal policy. This makes them resource-intensive, both in terms of computational power and time <cit.>.
* Comparative Analysis of Performance and Resource Efficiency: RL approaches can tailor summaries more closely to specific user preferences or criteria <cit.> but at the cost of increased computational resources and training complexity <cit.>.
§.§ Hierarchical Approaches
* Model Scales and Parameters: Hierarchical models, which process text at various levels such as sentence and paragraph levels, are characterized by a higher number of parameters due to their multi-layered nature. For instance, the Hierarchical Attention Network (HAN) <cit.> model proposed by Yang et al. integrates two levels of attention mechanisms — one at the word level and another at the sentence level — which significantly increases the total parameter count of the model. This layered approach allows for a deeper understanding of the text structure but also results in larger model size, often encompassing millions of parameters to account for the complexity of processing and synthesizing information at different textual hierarchies <cit.>.
* Computational Complexity and Resource Consumption: These models necessitate substantial computational power for processing multiple layers of text information, often leading to increased training and inference times <cit.>.
* Comparative Analysis of Performance and Resource Efficiency: They are effective in capturing the overall structure and meaning of large documents but require more computational resources compared to simpler models <cit.>.
§.§ Multi-modal Summarization
* Model Scales and Parameters: Multi-modal summarization models integrate various data types (text, images, audio) and therefore have complex and large architectures to process and synthesize information from different modalities <cit.>.
* Computational Complexity and Resource Consumption: Integrating multiple data types in multi-modal summarization models, like the one by Li et al. <cit.>, significantly increases computational demands. These models require advanced processing power and extensive memory to analyze and synthesize text, images, and audio. This complexity in processing different modalities leads to higher resource requirements during both training and inference stages, making them more resource-intensive than unimodal models <cit.>.
* Comparative Analysis of Performance and Resource Efficiency: Multi-modal summarization models enhance the richness of summaries by combining various data types, but this advantage comes with increased computational costs. As exemplified by Sun et al. <cit.>, while these models achieve higher accuracy and coherence by integrating text and visual information, they demand greater computational resources, including longer training periods and more memory. Balancing enhanced performance with resource efficiency is a critical aspect of developing multi-modal summarization models <cit.>.
§ ISSUES, CHALLENGES, AND FUTURE DIRECTIONS FOR ABSTRACTIVE SUMMARIZATION
Abstractive text summarization has witnessed substantial progress, but challenges persist, and new research directions are emerging. This section provides a comprehensive overview of the current challenges, strategies to overcome them, and potential future research directions. See Figure <ref> for the taxonomy associated with this section and its subclasses.
§.§ Fundamental Challenges
This section examines three fundamental challenges in abstractive summarization: inadequate representation of meaning, maintaining factual consistency, and temporal and causal reasoning, as shown in Figure <ref>. Despite advances, models often struggle to capture semantic nuances and to balance completeness with conciseness. Integrating external knowledge structures and attention mechanisms shows promise in improving meaning representation. Regarding factual consistency, condensation risks producing inaccurate or misleading summaries with severe implications; techniques leveraging knowledge bases, attribute control, and reinforcement learning are therefore being examined to safeguard factual accuracy.
§.§.§ Inadequate Representation of Meaning
Despite advances, many summarization models struggle to adequately represent the meaning of the source text <cit.>. This inadequacy is not merely a result of model design but is deeply rooted in the inherent limitations of the underlying language models they are built upon. These foundational models, while powerful, sometimes struggle to grasp intricate semantic relationships, especially when the summarization task involves complex narratives or multifaceted arguments <cit.>. Among these foundational models, Large Language Models (LLMs) such as GPT have demonstrated remarkable capabilities in capturing semantic nuances <cit.>. However, even these advanced models encounter challenges in maintaining the delicate balance between the original meaning of a text and the conciseness required for effective summarization. While LLMs can process large volumes of data and understand diverse linguistic patterns, they may still fail to accurately represent nuanced meanings or complex argumentative structures in the text <cit.>. This challenge is further exacerbated by the complexity of the summarization task itself, which demands that conciseness be balanced against completeness.
To navigate these challenges, researchers have been exploring more advanced knowledge representation techniques. One promising approach is proposed by Banarescu et al. <cit.>, which explores the use of leveraging external knowledge structures to aid in the summarization process. By integrating such advanced knowledge representations and the profound learning capabilities of LLMs, models can be better equipped to understand and reproduce the deeper semantic relationships present in the source text, leading to summaries that are not only concise but also rich in meaning and context.
Furthermore, attention mechanisms <cit.> have been instrumental in helping models focus on relevant parts of the source text, thereby improving the representation of meaning. By weighing different parts of the input text based on their relevance, attention mechanisms allow models to capture more nuanced semantic relationships. Enhancing the capability of models to understand and incorporate complex semantic relationships and external knowledge structures will be crucial for the development of summarization models that produce summaries with greater depth, accuracy, and context awareness.
§.§.§ Maintaining Factual Consistency
Ensuring factual consistency in summaries is paramount <cit.>. As models condense vast amounts of information into concise summaries, there is a significant risk of distorting or misrepresenting the original content. Such distortions can generate summaries that, while grammatically correct, might convey misleading or factually incorrect information. The implications of these inconsistencies can be especially severe in areas like news dissemination, medical reporting, or legal documentation, where accuracy is crucial.
Reinforcement learning (RL) techniques offer a potential solution to this challenge <cit.>. In RL approaches, models are rewarded for generating factually consistent summaries. By establishing a reward mechanism that penalizes factual inconsistencies, models can be trained to prioritize factual accuracy over other aspects of summary generation.
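The reward idea can be made concrete with a REINFORCE-style loss in which sampled summaries that score higher on a factual-consistency metric are reinforced. The sketch below leaves the consistency scorer abstract (it could be an entailment model or a question-answering-based checker) and uses a toy string-matching scorer only to make the example runnable; it is a schematic illustration, not a specific published training recipe.

import torch

def rl_factuality_loss(sample_log_probs, sampled_summaries, sources, consistency_fn, baseline=0.0):
    """REINFORCE-style loss: raise the likelihood of summaries with high factual reward.
    sample_log_probs: (B,) summed token log-probs of each sampled summary;
    consistency_fn: callable(source, summary) -> reward in [0, 1].
    """
    rewards = torch.tensor([consistency_fn(src, summ)
                            for src, summ in zip(sources, sampled_summaries)])
    advantage = rewards - baseline  # subtracting a baseline reduces gradient variance
    return -(advantage * sample_log_probs).mean()

# toy usage: the dummy scorer rewards keeping a key fact (the year) from the source
dummy_scorer = lambda src, summ: 1.0 if ("2019" in src and "2019" in summ) else 0.0
loss = rl_factuality_loss(torch.tensor([-12.3, -9.8], requires_grad=True),
                          ["The merger closed in 2019.", "The merger closed in 2020."],
                          ["... the merger was completed in 2019 ..."] * 2,
                          dummy_scorer, baseline=0.5)
print(float(loss))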
As the field of abstractive summarization progresses, the focus on factual consistency will remain at the forefront, spurring innovations that emphasize truthfulness and accuracy in generated content. The development of more sophisticated RL techniques, which better incentivize factual consistency, is likely to be a key area of future research. Furthermore, integration of external knowledge sources, such as databases or knowledge graphs, and advanced reasoning capabilities will also play a crucial role in enhancing the factual accuracy of generated summaries. By cross-referencing generated summaries with these trusted sources, models can validate the factual accuracy of their outputs. Another technique involves the use of attribute control in text generation <cit.>. Models can be steered to generate content that aligns with the ground truth by explicitly defining specific attributes or facts that the summary must adhere to. These advancements will contribute to the creation of models that provide factually accurate, coherent, and contextually appropriate summaries.
§.§.§ Temporal and Causal Reasoning
Incorporating temporal and causal reasoning in models is essential for coherent summaries <cit.>. Recognizing this, there is a growing emphasis on integrating temporal and causal reasoning capabilities into modern summarization models. Techniques such as temporal logic offer a structured approach to capture and represent time-based relationships and sequences in texts <cit.>. By understanding the chronological order of events and their interdependencies, models can generate summaries that respect the natural progression of the narrative. On the other hand, causal models provide frameworks to discern cause-and-effect relationships within content <cit.>. However, there are inherent challenges in this domain. One of the primary issues is the ambiguity in natural language, where temporal and causal relationships might be implied rather than explicitly stated <cit.>. This makes it challenging for models to consistently identify and represent these relationships. Additionally, the vastness and variability of real-world events, especially in domains like clinical narratives, mean that models often need to deal with incomplete or conflicting information, which can further complicate temporal and causal reasoning <cit.>.
A significant future direction involves the development of Hybrid Models. Combining rule-based approaches with deep learning can help better capture explicit and implicit temporal and causal cues <cit.>. Another promising avenue is Knowledge Integration. Integrating external knowledge bases or ontologies that provide structured temporal and causal information can enhance the model's reasoning capabilities <cit.>. By identifying and representing these causal chains, summarization models can ensure that the underlying reasons and consequences of events are accurately reflected in the summaries. As the demand for high-quality, context-aware summaries grows, addressing these challenges and focusing on the aforementioned future directions will be pivotal in advancing the field and generating summaries that truly capture the essence and intricacies of the original content.
§.§ Technological Challenges
This section examines key technological challenges, including handling long documents, OOV words and rare phrases, and developing enhanced evaluation metrics, as depicted in Figure <ref>. Standard Seq2Seq models have trouble handling lengthy texts because of memory constraints and the difficulty of capturing long-range dependencies; hierarchical models and memory-augmented networks show potential in extracting the essence of large documents, although models tailored specifically to long inputs may ultimately prove necessary. Pre-trained models provide rich embeddings encompassing broad vocabularies that help process rare or uncommon terms, and new semantic similarity metrics are being explored to complement surface-level ROUGE evaluation.
§.§.§ Handling Long Documents
Models, especially Seq2Seq architectures, find it challenging to summarize long documents <cit.>. The inherent limitations of these models, such as memory constraints and the difficulty in capturing long-range dependencies, make them less adept at preserving the core essence of lengthy texts. Recognizing these challenges, researchers have been exploring alternative architectures and techniques. Hierarchical models, for instance, introduce a multi-level approach, where sentences are first encoded into sentence representations, which are then further encoded to generate the final summary <cit.>. This layered approach aims to better manage the intricacies of lengthy texts. Similarly, memory-augmented neural networks enhance the model's capacity to remember and utilize information from earlier parts of the text, ensuring that crucial details are not lost in the summarization process <cit.>.
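A common stopgap while dedicated long-document architectures mature is to split the document into overlapping chunks, summarize each chunk, and then summarize the concatenation of the partial summaries. The helper below sketches that two-pass strategy around an arbitrary summarize() callable; the chunk size and overlap are arbitrary illustrative values and no claim is made that this matches any particular published system.

def chunk_words(words, chunk_size=400, overlap=50):
    """Split a word list into overlapping chunks so boundary sentences are not lost entirely."""
    step = chunk_size - overlap
    return [words[i:i + chunk_size] for i in range(0, max(len(words) - overlap, 1), step)]

def hierarchical_summarize(document, summarize, chunk_size=400, overlap=50):
    """Two-pass summarization: summarize chunks, then summarize the joined partial summaries."""
    words = document.split()
    if len(words) <= chunk_size:
        return summarize(document)
    partials = [summarize(" ".join(c)) for c in chunk_words(words, chunk_size, overlap)]
    return summarize(" ".join(partials))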
While these methods have shown promise, there is a growing consensus in the research community about the need for models specifically tailored for long-document summarization. Such models would not just adapt existing techniques but would be fundamentally designed to handle the complexities and nuances of extensive texts <cit.>. As the digital world continues to produce vast amounts of lengthy content, from research papers to detailed reports, the demand for effective long-document summarization models will only intensify, making it a pivotal area for future research.
§.§.§ Out-of-Vocabulary Words and Rare Phrases
One of the persistent hurdles in natural language processing and machine learning is the effective handling of OOV words and rare phrases <cit.>. These terms, which may not be present in the training vocabulary of a model, pose a significant challenge, especially when they carry critical information or context. The inability to process or generate such terms can lead to summaries that either omit essential details or resort to approximations, potentially compromising the accuracy and fidelity of the generated content. Managing OOV words and rare phrases involves several issues. First, conventional word embedding techniques such as Word2Vec <cit.> or GloVe <cit.> fail to provide representations for words not seen during training, which can result in a loss of information when these words are encountered in real-world scenarios. Second, rare phrases, which consist of multiple words, can be contextually rich, and their omission can strip nuanced meaning from the generated text <cit.>.
Advances in powerful pre-trained models such as BERT provide a pathway to address this issue. These models, trained on vast corpora, offer rich contextualized embeddings that capture a wide range of vocabulary, including many rare terms. By leveraging such pre-trained models, summarization systems can benefit from their extensive knowledge, ensuring that even less common words and phrases are handled with precision and context <cit.>. As the demand for high-quality, comprehensive summaries grows, addressing the challenge of OOV words and rare phrases will remain central to the advancement of the field. The continued development and integration of advanced language models capable of understanding and representing a broader vocabulary are crucial for improving the accuracy and richness of generated summaries.
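Subword tokenization is the main mechanism by which these pre-trained models avoid a hard OOV wall: an unseen word is decomposed into known pieces rather than being mapped to a single unknown token. The snippet below illustrates this with the standard BERT WordPiece tokenizer from the Hugging Face transformers package; the example word is arbitrary and the exact pieces depend on the vocabulary.

from transformers import AutoTokenizer  # downloads the tokenizer files on first use

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pieces = tokenizer.tokenize("electroencephalography")
print(pieces)  # several known WordPiece units (prefixed with ##) instead of a single [UNK]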
§.§.§ Evaluation Metrics
Evaluating the quality of generated summaries remains a challenge <cit.>. Traditional metrics, such as ROUGE, have been the cornerstone for evaluation due to their simplicity and ease of use. However, while ROUGE excels at surface-level comparisons between the generated summary and the reference, it often falls short of capturing deeper semantic similarities and nuances. This limitation becomes particularly evident when summaries, though semantically accurate, use phrasings or structures different from those in the reference text. Recognizing these shortcomings, the research community has been exploring alternative metrics. BERTScore <cit.>, for instance, leverages the power of pre-trained LLMs to evaluate summaries based on contextual embeddings, offering a more nuanced measure of semantic similarity. Similarly, MoverScore <cit.> measures how far the words of the generated summary must "move" in embedding space to match the reference, providing a different perspective on evaluating coherence and relevance.
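In practice, both families of metrics take only a few lines to compute. The example below uses the pip-installable rouge-score and bert-score packages on a toy reference/candidate pair; the package choices and the sentences are illustrative, and reported scores should of course be computed over full test sets.

from rouge_score import rouge_scorer  # pip install rouge-score
from bert_score import score as bertscore  # pip install bert-score

reference = "The committee approved the budget after a short debate."
candidate = "After a brief debate, the committee passed the budget."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(scorer.score(reference, candidate))  # n-gram and LCS overlap with the reference

P, R, F1 = bertscore([candidate], [reference], lang="en")
print(float(F1[0]))  # embedding-based semantic similarity, less sensitive to paraphrase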
Although these newer metrics show considerable promise, the quest for the perfect evaluation metric is far from over. The dynamic and multifaceted nature of the summarization task demands continuous innovation in evaluation methodologies, urging researchers to delve deeper into the intricacies of summary quality and develop metrics that can holistically capture both form and essence <cit.>. Concurrently, human evaluation remains an invaluable tool for assessing the quality of summaries <cit.>. By incorporating human judgments, researchers can gain insights into aspects of summary quality that automated metrics might overlook, such as fluency, coherence, and overall informativeness. This multifaceted approach combining advanced, context-aware computational metrics with nuanced human evaluation will likely be the cornerstone of future research efforts in the quest to develop more accurate and meaningful summary evaluation methods.
§.§ Modalities of Summarization
Modalities of summarization are addressed in this section: controllable, multi-document, and personalized, as illustrated in Figure <ref>. Controllable summarization allows customization to user requirements but maintaining coherence given rigid controls poses challenges. Multi-document synthesis requires aligning sources, eliminating redundancy, and representing perspectives in a balanced way - made difficult by large data volumes and conflicts. Personalized summarization aims to tailor summaries to individuals but handling dynamic user preferences is challenging. For all three domains, knowledge graphs and structured knowledge show the ability to enhance coherence, resolve conflicts, and anticipate user needs.
§.§.§ Controllable Text Summarization
There is a growing interest in controllable summarization, driven by the need for summaries tailored to diverse user needs and contexts <cit.>. The CTRL model, a pioneering effort in controllable text generation, demonstrated the efficacy of control codes in directing content creation <cit.>. However, it encountered challenges in achieving uniform performance across different controls and ran into potential overfitting issues. Controllable summarization meets this need by granting users the power to shape multiple facets of the summary, from its length and focus to its style and tone. This user-oriented methodology ensures summaries are not just succinct and logical but also pertinent to the context and tailored to individual preferences. One of the inherent challenges is to ensure that while a summary aligns with user-defined controls, it retains its coherence and fluency. See et al. <cit.> delved into this delicate balance, revealing that rigid control parameters might sometimes yield summaries that, despite complying with the controls, compromise on coherence or overlook essential details.
To realize this degree of personalization, future models must exhibit greater adaptability and greater sensitivity to data. Although controllable summarization aims to be user-centric, obtaining real-time feedback from users and incorporating it into the model can be challenging. This iterative feedback loop is crucial for refining and improving model outputs <cit.>. As the field progresses, focusing on the development of models that can better handle the balance between user controls and content coherence will be key. Enhancements in understanding user preferences and integrating real-time feedback will likely play a significant role in creating more sophisticated and user-responsive summarization systems.
§.§.§ Multi-Document Summarization
Although much research focuses on single-document summarization, multi-document summarization presents unique challenges <cit.>. Unlike its single-document counterpart, multi-document summarization involves synthesizing information from multiple sources, often necessitating the alignment of documents and the identification and resolution of redundancies, contradictions, and varying perspectives. These complexities introduce unique challenges such as ensuring coherence in the face of diverse inputs and maintaining a balanced representation of all source documents. The enormous amount of information that needs to be processed is one of the main challenges: the volume of data grows rapidly with the number of documents, causing computational difficulties and extending processing times <cit.>. The possibility of conflicting information across documents presents another challenge, since finding the most precise or relevant information can be difficult, particularly if the sources reflect different authors or viewpoints <cit.>. Furthermore, the temporal dimension of the information can present difficulties; for instance, when summarizing news articles, recent data may be more pertinent than older information, requiring models to have a sense of temporality <cit.>. Large Language Models (LLMs) can play a crucial role here. Their ability to process large volumes of text and understand complex linguistic patterns makes them well-suited for tackling the challenges of multi-document summarization <cit.>. However, their application also introduces new dimensions to these challenges. For example, the computational resources required to process multiple lengthy documents using LLMs are significant, and the risk of perpetuating biases present in training data is heightened due to the models' extensive scope.
To address these issues, integrating knowledge graphs and structured knowledge representations has arisen as a promising strategy <cit.>. Knowledge graphs, with their interconnected nodes and connections, provide an organized system that can assist models, including LLMs, in understanding the connections between various text documents, recognizing key subjects, and generating summaries that capture the essence of the entire text document set <cit.>. Essentially, structured knowledge representations offer a deliberate method for coordinating and processing multi-document content, ensuring that the resulting summaries are comprehensive and well-structured. Recent advancements in attention mechanisms, particularly in transformer-based models, have also shown promise in effectively handling the intricacies of multi-document summarization <cit.>. As the demand for tools capable of distilling insights from vast and diverse data sets grows, multi-document summarization, strengthened by advanced knowledge structures and modern modeling techniques, including the use of LLMs, will undoubtedly play a pivotal role in shaping the future of information consumption.
§.§.§ Personalized Summarization
Traditional summarization techniques aim to distill the essence of texts for a general audience. However, with the increasing volume and diversity of information available, there is a growing recognition of the need for a more tailored approach: personalized summarization <cit.>. This approach acknowledges that every reader comes with a unique background, preferences, and information needs. Instead of providing a one-size-fits-all summary, personalized summarization seeks to generate content summaries that resonate with individual users, emphasizing aspects most relevant to them. One primary issue is the dynamic nature of user preferences, which can evolve over time based on their experiences, interactions, and changing needs <cit.>. This dynamic nature requires models to be adaptive and responsive to ongoing user feedback. Incorporating user-specific knowledge, such as their reading history, preferences, or feedback, can provide valuable insights into what they value in a summary <cit.>. Furthermore, leveraging knowledge graphs offers another layer of personalization <cit.>. These graphs can map out intricate relationships between different pieces of information, allowing the generation of summaries that not only cater to a user’s current interests but also anticipate their future queries or areas of interest.
LLMs like GPT and BERT greatly improve personalized summarization due to their advanced natural language understanding <cit.>. They can adapt effectively to user preferences by analyzing interactions and feedback, enabling them to generate summaries that are increasingly tailored to individual user profiles over time <cit.>. Moreover, a significant future direction in this realm is Adaptive Learning. Building models that can learn and adapt from continuous user feedback, especially those based on LLMs, will be crucial in ensuring that the summaries remain aligned with evolving user preferences <cit.>. As the digital landscape becomes increasingly user-centric, addressing these challenges and focusing on adaptive learning, augmented by the capabilities of LLMs, will be instrumental in ensuring that readers receive concise, relevant, and engaging summaries tailored just for them. These models will need to not only understand and reflect individual user preferences but also continually adapt to changing user needs and preferences over time. The integration of advanced techniques such as knowledge graphs to provide a deeper, more context-aware level of personalization will also be a key area of future research.
§.§ Language and Cross-Domain Considerations
As shown in Figure <ref>, this section examines cross-lingual, low-resource language, and domain-specific challenges. Cross-lingual translation risks losing meaning and must bridge differences in linguistic structure; advanced neural translation therefore shows potential. For low-resource languages, limited datasets hinder supervised learning and language-specific nuances may be neglected; data scarcity is addressed by techniques such as transfer learning, data augmentation, and multilingual models. Domain adaptation poses challenges due to specialized lexicons and data limitations, for which domain knowledge graphs and transfer learning show potential.
§.§.§ Cross-Lingual Summarization
Cross-lingual summarization, which involves generating concise summaries in a target language different from the source, is becoming increasingly vital due to the globalization of information <cit.>. This task faces challenges such as potential loss of meaning during translation and the intricacies of different linguistic structures <cit.>. The quality of the source document is important because ambiguities or poor structure can make the summarization process more difficult <cit.>. One common solution to this problem is to summarize the machine-translated content after the source documents have been translated into the target language <cit.>. However, this approach occasionally introduces mistakes because errors in the translation stage can trickle down to the summarization stage. A more direct approach is provided by advanced neural machine translation models combined with summarization methods, which guarantee that the essence of the original content is retained in the summarized output <cit.>. To train and refine cross-lingual summarization models, parallel corpora—datasets that combine text in one language with its translation in another—have become increasingly popular. By utilizing such corpora, models are able to learn complex language mappings and generate summaries that are both precise and fluent <cit.>. Another promising path is the creation of multilingual models, like those trained using the MASK-PREDICT method <cit.>. These models, trained on data from multiple languages, possess the capability to understand and generate text across a wide linguistic spectrum.
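The translate-then-summarize baseline mentioned above can be assembled directly from off-the-shelf models. The sketch below chains a German-to-English Marian translation model with an English summarizer through the Hugging Face pipeline API; the specific checkpoints and generation lengths are illustrative assumptions, and, as noted, any translation errors propagate into the summary.

from transformers import pipeline  # assumes the transformers package and downloadable checkpoints

translate_de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
summarize_en = pipeline("summarization", model="facebook/bart-large-cnn")

def cross_lingual_summary(german_text: str) -> str:
    english = translate_de_en(german_text)[0]["translation_text"]
    return summarize_en(english, max_length=60, min_length=15)[0]["summary_text"]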
In an era of increasing globalization and a rising need for accessible information in multiple languages, the progress made in cross-lingual summarization, supported by machine translation methods, will have a crucial impact on overcoming language barriers and promoting global comprehension. The development of advanced neural translation models that can effectively combine with summarization methods to retain the essence of the original content will be a key area of future research. Additionally, the utilization of parallel corpora and the creation of robust multilingual models will play a significant role in advancing the field. This will ensure that the generated summaries are not only linguistically accurate but also maintain the semantic integrity and fluency of the original content across different languages.
§.§.§ Low-Resource Language Summarization
In our linguistically diverse world, major languages like English, Chinese, and Spanish dominate the digital realm, leaving many low-resource languages underrepresented in computational models <cit.>. This imbalance challenges natural language processing tasks, such as text summarization. The limited availability of parallel corpora hinders supervised training <cit.>, and nuances unique to these languages can be overlooked, resulting in potentially inaccurate summaries <cit.>. Pre-training models on related languages, like using Spanish to aid Catalan summarization, can be effective <cit.>. Data augmentation, including back-translation, improves training data <cit.>, while transfer learning bridges the data gap by adapting models from high-resource to low-resource languages <cit.>. Unsupervised methods, like Google's “zero-shot translation," use high-resource languages as intermediaries <cit.>.
The future of summarization for low-resource languages will likely involve cross-lingual pre-trained models, such as multilingual BERT <cit.>, and few-shot learning techniques that maximize limited data <cit.>. Collaborations among communities, linguists, and technologists can produce more annotated datasets <cit.>, and models that respect cultural nuances will be essential <cit.>. As the digital age strives for inclusivity, these strategies will ensure that every language and culture is represented. The continuous development of models trained on a broader range of languages and the incorporation of cultural and linguistic nuances will be crucial in making the benefits of advanced summarization technologies accessible to all language communities.
§.§.§ Domain-Specific Summarization
In the vast landscape of information, content varies not only in style but also in substance, depending on the domain from which it originates. Whether it is the intricate jargon of medical literature, the precise terminology of legal documents, or the nuanced language of academic research, each domain has its unique lexicon and conventions <cit.>. Generic summarization models can struggle with these domain-specific nuances. A significant challenge is the scarcity of specialized training data, with limited datasets in areas like bioinformatics compared to general news <cit.>. Additionally, the dynamic nature of many fields means that domain knowledge can quickly become outdated, requiring models to be continually updated with the latest information <cit.>. To address this, there is a shift towards domain-specific knowledge graphs that capture field-specific relationships and terminologies <cit.>. By incorporating these into summarization models, summaries become more domain-aware and resonate with experts.
A significant future direction involves the application of Transfer Learning. Leveraging models pre-trained on general domains and fine-tuning them on domain-specific datasets can help bridge the data gap in specialized fields <cit.>. As the demand for specialized content summarization rises, these strategies will ensure summaries remain accurate and contextually apt. The continual adaptation of summarization models to incorporate up-to-date domain knowledge and terminologies will be crucial in maintaining the relevance and accuracy of the summaries. The development of advanced domain-specific knowledge graphs and the enhancement of model adaptability to dynamic domain changes will play a pivotal role in refining the capability of summarization technologies to cater to specific field requirements, ensuring that the generated summaries are both informative and expertly aligned with domain-specific needs.
§.§ Emerging Frontiers
This section examines the emerging frontier of multimodal summarization (see Figure <ref>), which aims to summarize both textual and visual data to provide comprehensive representations. Standard techniques often fall short with such multifaceted inputs. The challenges encompass aligning textual and visual modalities effectively and ensuring coherence. Promising directions involve techniques designed to integrate cross-modal data through unified textual-visual algorithms.
§.§.§ Multimodal Summarization
With the rise of multimedia content, summarizing visual and textual data is becoming increasingly crucial <cit.>. Traditional text-based summarization techniques, while effective for pure textual content, may fall short when faced with the task of distilling information from both visual and textual sources. This is due to the inherent complexity of visual data, which often contains rich, non-linear information that does not always have a direct textual counterpart <cit.>. Challenges arise in aligning visual and textual modalities, handling the vast variability in visual content, and ensuring that the generated summaries maintain coherence across both modalities <cit.>.
Recognizing this gap, there is an emerging focus on multimodal summarization, which seeks to generate concise representations that capture the essence of both visual and textual elements. Techniques that integrate visual and textual content are at the forefront of this endeavor <cit.>. By developing algorithms that can understand and interrelate images, diagrams, videos, and text, it becomes possible to generate summaries that truly reflect the composite nature of multimedia content. As the digital landscape continues to evolve, with visuals playing an ever-increasing role, addressing these challenges and focusing on the aforementioned future directions will be paramount in ensuring that users receive a holistic understanding of the content they consume. Advancements in algorithms capable of effectively integrating and summarizing cross-modal data through unified textual-visual representations will be crucial in meeting the growing need for comprehensive, coherent multimodal summaries.
§.§ Ethical and Interpretability Issues
This section examines ethics and interpretability in abstractive summarization as shown in Figure <ref>. Since models run the risk of perpetuating data biases, tools for mitigating bias and transparent knowledge graphs are promising. Interpretability is also paramount for trust in high-stakes applications. By mapping model reasoning, knowledge graphs facilitate interpretability, and integrated interpretability methods further enhance transparency.
§.§.§ Explainability and Interpretability
The black-box nature of many advanced models, including Large Language Models (LLMs), has raised significant concerns, especially when these models are used in high-stakes domains such as healthcare, finance, or legal decision-making <cit.>. In these contexts, a model's outputs must be not only accurate but also understandable to stakeholders. The need for understanding how and why specific decisions are made is crucial for establishing trust, ensuring accountability, and, in some cases, meeting regulatory compliance requirements. The interpretability of LLMs, with their complex and often opaque internal mechanisms, presents unique challenges in abstractive summarization <cit.>. Discerning how these models arrive at specific summarization decisions can be difficult, particularly when the summaries need to be explainable and justifiable, as in legal or medical contexts. To address these challenges, recent research has focused on integrating explainability features such as “attention highlights" into these models. These highlights, which closely mirror the model’s decision-making process and align with the user’s mental model of the task, have been shown to significantly increase user trust and efficiency <cit.>. However, ensuring the explainability and interpretability of models, including LLMs, becomes a paramount concern. While simpler models are often more interpretable, they may not achieve the performance levels of more complex models, presenting a trade-off between complexity and interpretability <cit.>. Many current interpretability methods provide post-hoc explanations, which might not accurately reflect the actual decision-making process of the model <cit.>. Researchers are exploring more transparent knowledge representation methods and utilizing knowledge graphs to demystify the inner workings of these models and provide tangible and visual representations of their decision-making processes <cit.>. Furthermore, the development of methods like Layer-wise Relevance Propagation (LRP) and other deep learning visualization techniques is underway to provide clearer insights into the intricate patterns used by LLMs in generating summaries <cit.>.
A significant future direction is Integrated Interpretability, where future models, including LLMs, might be designed with interpretability as an intrinsic feature of their architecture, rather than relying solely on post-hoc methods <cit.>. As we continue to integrate AI-driven solutions into critical sectors of society, addressing these challenges and focusing on integrated interpretability will be instrumental in ensuring these technologies, particularly LLMs, are adopted responsibly and ethically. The development of models with in-built interpretability will not only enhance trust and transparency but also ensure that these models are aligned with the needs and values of society, contributing to the responsible and ethical use of AI in decision-making processes.
§.§.§ Ethical Considerations and Bias
The generation of unbiased and objective summaries is of paramount importance. As models often learn from large amounts of data, they can inadvertently adopt and perpetuate the biases present within these data sets <cit.>. Such biases, whether they are related to gender, race, culture, or other social factors, can skew the content of summaries, leading to misrepresentations and reinforcing harmful stereotypes. Addressing this challenge requires a multifaceted approach. The use of transparent knowledge representation methods can help in bias mitigation <cit.>, by making the decision-making processes of models more interpretable, allowing researchers and users to identify, understand, and rectify potential biases in the generated summaries.
A significant future direction in this realm is the development of Bias Detection and Correction Tools. The creation of tools that can automatically detect and correct biases in summaries will be crucial. These tools can provide real-time feedback to models, allowing them to adjust their outputs accordingly <cit.>. As society becomes increasingly aware of the ethical implications of artificial intelligence, ensuring fairness and eliminating biases in text summarization will be crucial in building trustworthy and equitable systems. The advancement of these tools, along with the continuous improvement of transparent knowledge representation methods, will play a pivotal role in addressing ethical considerations and reducing bias in AI-driven text summarization technologies, contributing to the creation of more just and fair AI systems.
In conclusion, although considerable progress has been made, challenges persist in abstractive text summarization, and the field offers many opportunities for innovation. By addressing these challenges and exploring the highlighted research directions, we can pave the way for more effective and reliable summarization systems.
§ CONCLUDING REMARKS
In conclusion, this survey paper has provided a comprehensive and well-structured analysis of the field of abstractive text summarization. We discussed state-of-the-art methods, highlighting the variety of approaches and the rapid advances being made. Furthermore, we have identified the areas that require improvement for more efficient summarization models by identifying and analyzing the current issues and challenges in the field. Our examination of strategies for overcoming these challenges has highlighted the significance of integrating knowledge and other techniques in the creation of abstractive summarization models.
Additionally, we have identified areas of interest that have the potential to significantly advance the field and have explored promising avenues of future research. Enhancing factual consistency, developing cross-lingual and multilingual summarization systems, concentrating on domain-specific summarization, dealing with noisy data, and enhancing long-document summarization are a few of these research directions.
We have also provided vital comparison tables across techniques in each summarization category, offering insights into model complexity, scalability and appropriate applications.
We hope that by providing this well-organized overview of abstractive text summarization, researchers and practitioners will be motivated to tackle the challenges and pursue the future research directions presented in this paper. It is essential to keep focusing on the development of more efficient, trustworthy, and beneficial summarization models that can be used for a variety of purposes as the field grows.
§ ACKNOWLEDGEMENT
During the preparation of this work, the author(s) used ChatGPT in order to improve the flow of writing in approximately 10% of the document. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
|
http://arxiv.org/abs/2409.02568v1 | 20240904093751 | Fully polarized Fermi systems at finite temperature | [
"Krzysztof Myśliwy",
"Marek Napiórkowski"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"cond-mat.stat-mech"
] |
§ ABSTRACT
We propose a simple model of an interacting, fully spin–polarized Fermi gas in dimensions d=2 and d=3, and derive the approximate expression for the energy spectrum and the corresponding formula for the Helmholtz free energy. We analyze the thermodynamics of the system and find the lines of first–order phase transitions between the low and high density phases terminating at critical points. The properties of the corresponding phase diagrams are qualitatively different for d=2 and 3, and sensitively depend on the interparticle attraction, which marks a departure from the standard van der Waals theory. The differences originate from the Pauli exclusion principle and are embedded in the fermionic nature of the system under study.
Fully polarized Fermi systems at finite temperature
Krzysztof Myśliwy and Marek Napiórkowski
Institute of Theoretical Physics, Faculty of Physics, University of Warsaw
Pasteura 5, 02-093 Warsaw, Poland
September 9, 2024
=============================================================================================================================================================
§ INTRODUCTION
The properties of matter are largely determined by the mutual interactions between the constituent particles. If, however, the system in question is composed of fermions the Pauli exclusion principle <cit.> is, under certain conditions, of no minor importance. Examples cover systems within a broad range of energy and length scales and pertain to such fundamental questions as the stability of matter, the structure of atoms and nuclei or the existence of thermodynamic limit <cit.>. This provides a strong motivation for the study of simple fermionic systems whose fundamental properties are induced by the Pauli principle <cit.>.
One interesting situation arises when the fermions become fully spin—polarized, i.e., all particles are in the same spin state. The exclusion principle is then manifested entirely in the position space, and the corresponding many—body wave function is necessarily antisymmetric in the spatial coordinates of the particles. Under such circumstances, the Pauli principle has a direct effect on the actual interactions in the system. For instance, the short—range interaction of fermions is at low temperatures determined by the p—wave scattering processes, in contrast to the case of mixtures composed of fermions in different spin states which are governed by the much stronger s—wave interactions <cit.>.
In this work, we analyze the thermodynamics of a model of interacting fully polarized dilute Fermi gas, which we propose on the basis of a Hartree—Fock type microscopic calculation. When the diluteness assumption is exploited, the resulting model is fairly simple and allows for the finite temperature treatment, which appears to have been less explored in the existing literature. In particular, the resulting equation of state agrees in the zero temperature limit with recent rigorous results obtained for the ground state of the repulsive gas <cit.>.
The model incorporates both the short—range repulsion of the p—wave type and the long—range attraction, and the interplay between the two interactions leads to the emergence of a first—order transition between a highly degenerate liquid phase and a gas phase. While the resulting thermodynamics bears certain resemblance to the standard van der Waals theory, it also displays remarkable differences in comparison to it. In particular, the phase transition turns out to exist only for strong enough attractions and, more importantly, it is sensitive to the embedding dimension of the gas. This latter aspect points at the crucial role played by the Pauli principle, as the dimensionality dependence is rooted in the different behavior of the Fermi pressure as function of the density in different dimensions.
These aspects make the model interesting from the point of view of the theory of phase transitions and critical phenomena, while the analysis might be of relevance for a variety of physical systems where spin polarization is present, ranging from adequately prepared cold Fermi gases <cit.> to modelling the interior of neutron stars exposed to strong magnetic fields <cit.> and similar problems arising in nuclear matter <cit.> to which phenomenological van der Waals equations of state modified by Fermi statistics have also been applied <cit.>.
The article is organized in the following way. Sec. <ref> contains the necessary definitions and the microscopic derivation of the approximate energy levels and the Helmholtz free energy. The latter is further closely studied in Sec. <ref>, where we discuss the resulting thermodynamics of the model, beginning with a short analysis of the purely repulsive case and subsequently the study of the phase transition which results if the attraction is included, wherein the cases of d=2 and d=3 are subject to a separate treatment. The results are then summarized in Sec. <ref>. The main text is accompanied by an Appendix where we include some calculations relevant for the two–dimensional problem.
§ THE MODEL
§.§ Preliminaries
We consider a one–component, fully polarized, interacting Fermi gas composed of N fermions of mass m enclosed in a box of volume V in d dimensions, with d=2,3. As mentioned, fully polarized means here that all fermions occupy the same spin state, which in particular means that one can ignore spin altogether in their description. We assume that the fermions interact with each other via an isotropic two–body potential v which generically can be written as
v(x)=v_s(x)+γ^d v_l(γ x) ,
where v_s is a compact-support function modelling the short–range repulsion of the particles, while v_l is integrable and models the attraction at longer distances. γ is a suitable scaling parameter, and the particular choice of scaling in Eq.(<ref>) is known as the Kac model <cit.>, wherein γ is sent to zero, corresponding to a very weak and long–ranged interaction. The Kac scaling ensures that the γ-independent integral
-a:=∫γ^d v_l(γ x) d^dx < 0
remains finite in the limit γ→ 0. Its absolute value is denoted by a, which we call the Kac parameter. The negative sign of the above integral reflects the interparticle attraction at long distances. The choice of the Kac scaling leads to a substantial simplification of the treatment of long-range attraction while still producing non trivial effects, as we shall see.
The fact that the fermions are assumed to be fully polarized affects here mostly the short–range repulsive interactions. The polarization imposes the antisymmetry condition on the positional part of the wave function and precludes the s-wave scattering processes. The effects of the short-range interparticle repulsion are quantified by a single parameter called the p–wave or odd wave scattering length, which we denote by b and which is conveniently defined as <cit.>
c_d b^d := inf_ψ{∫(ħ^2/m|∇ψ|^2 +v_s|ψ|^2)|x|^2 d^d x } ,
where the infimum is taken with respect to functions ψ(x) such that lim_|x|→∞ψ(x) = 1. The coefficients c_d are defined as
c_d=
12πħ^2/m, d=3
4πħ^2/m, d=2.
This variational definition is equivalent to the standard one using the zero–energy scattering equation <cit.>, and is very useful in the present context. For instance, in analogy to the Born approximation for the s–wave scattering length <cit.>, one finds
c_d b^d ≤∫ v_s(x) |x|^2 d^dx .
We note a slight abuse in the notation when denoting the two– and three–dimensional p–wave scattering lengths by the same symbol b; this should not lead to any confusion in the remainder of the analysis.
The parameters a and b thus encode certain information contained in the long–range attractive and short–range repulsive parts, respectively, of a generic interparticle potential v, according to the splitting in Eq.(<ref>).
Below we perform a Hartree–Fock calculation whose result is that in the dilute limit the parameters a and b account for the interaction effects completely. In particular, since the short–range part of v is taken to be purely repulsive in our model, we do not cover effects such as p–wave Cooper pairing of fermions <cit.>.
§.§ Approximation of the energy levels and the free energy
Equipped with the representation of both parts of the interaction by means of a and b, we now introduce the approximate expression for of the energy levels of the system which we then employ in the statistical mechanical analysis. We rely on the following observation: since the p–wave interaction effects are weak, the interacting gas can be treated as consisting of free fermions occupying their one–particle energy levels whose exact form is modified by the p–wave interactions. Accordingly, we identify the microstate of the system with the set of the single-particle energy-level occupancies. The energy of any such microstate can be found by computing the expected value of the hamiltonian H on the appropriate plane–wave Slater determinants Ψ, i.e. by performing the Hartree–Fock approximation. Note that Ψ is characterized completely by the occupation numbers
{ n_k }, where n_k=1 if the mode k is occupied and zero otherwise.
The Hartree–Fock energy of the Slater determinant Ψ constructed from N orthonormal functions |j⟩ equals
⟨Ψ| H|Ψ⟩ = ∑_j n_j ⟨ j |-ħ^2/2mΔ| j⟩
+ 1/2∑_j≠ k n_j n_k(⟨ jk|v|jk⟩-⟨ jk |v|kj⟩) ,
where -ħ^2/2mΔ stands for the kinetic energy operator and v is the interparticle interaction potential energy in Eq.(<ref>). We employ the periodic boundary conditions and the plane–wave basis |k⟩=e^ikx/√(V). Then one obtains
⟨ jk|v|jk⟩=1/V∫ v(x) d^dx=1/V(-a+∫ v_s(x)d^dx) ,
where we have employed Eq.(<ref>). The above direct term leads to the mean-field expression for the energy. However, it is the exchange term discussed below that induces the energy renormalization beyond the above mean–field shift. First, we observe that in the Kac limit the contribution from the attractive part of the interaction to the exchange term vanishes
⟨ jk |v_l |kj⟩ =1/V^2∫ d^dx d^dy γ^d v_l(γ (x-y)) e^i(k-j)(x-y)
= 1/V v̂_l((k-j)/γ) → 0 in the limit γ→ 0,
because k≠ j and the Fourier transform v̂_l(k) vanishes at infinity.
In the case of the repulsion part v_s the situation is different. In order to proceed with our approximation scheme, we use the fact that the potential v_s is short–ranged and assume that the gas is dilute and at low temperatures. Accordingly, the typical momenta contributing to the integral can be estimated to lie in the range set by the Fermi energy ∼ n^1/d, where n=N/V, while the interparticle distances are at most of the order of R, the range of the potential v_s, and thus |k-j||x|∼ n^1/dR≪ 1. One may thus expand the exponential factor to second order (which actually is one of the crucial steps behind the rigorous bounds developed in <cit.>) and obtain
∑_j≠ k n_j n_k ⟨ jk |v_s |kj⟩
=∑_j≠ k n_j n_k 1/V^2∫d^dx d^dy v_s(x-y) e^i(k-j)(x-y)
≈∑_j≠ k n_j n_k 1/V∫d^dx v_s(x)(1-((k-j)· x)^2/2)
=∑_j≠ k n_j n_k/V(∫ v_s(x) d^dx - (k-j)^2/2d∫ x^2 v_s(x) d^dx),
where in the last step we used the rotation invariance of v_s. Introducing the total momentum of the Slater determinant P({ n_k })≡∑_k ħ k n_k and denoting b̃=∫ x^2 v_s(x) d^dx, one obtains
⟨Ψ| H|Ψ⟩ = ∑_k ħ^2 k^2/2mn_k + 1/2V b̃/2d∑_j,k n_k n_j (k^2+j^2)
-a/2V∑_j≠ k n_k n_j - b̃/2dVħ^2 P({ n_k })^2.
This can be further simplified as follows. First, it can be shown that the term involving the total momentum of the system in Eq.(<ref>) (the last term on the rhs of Eq.(<ref>)) vanishes in the thermodynamic limit. Second, the upper bounds Eq.(<ref>) show that the integral
b̃ is related to the p-wave scattering length
<cit.> in an analogous way as the integral of the potential is related to the s-wave scattering length. Thus, in the spirit of the original Bogoliubov calculation of the spectrum of the interacting Bose gas <cit.>, we make here the replacement b̃→ c_d b^d, with c_d defined in Eq.(<ref>). This is essentially the Born approximation, which is known to provide necessary cancellations for the Bogoliubov approximation to work in the bosonic case <cit.>. We find
the expression for the system energy corresponding to a given microstate { n_k} in the following form
E({ n_k})=∑_k ħ^2 k^2/2m(1+ N/V B) n_k - a/2VN(N-1) ,
where N=∑_jn_j, and
B=2m/ħ^2c_d b^d/2d=
4π b^3, d=3
2π b^2, d=2 .
In order to simplify the notation we suppress the subscript d and denote by B a quantity which depends on d and takes different values for two- and three-dimensional systems, Eq.(<ref>). This, however, does not lead to any ambiguity in the analysis and conclusions.
In particular, the expression in Eq.(<ref>) evaluated for a microstate corresponding to the Fermi ball agrees with the leading order correction to the ground state energy of a suitably dilute polarized Fermi gas in the thermodynamic limit as developed in <cit.>.
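As a purely illustrative aid (not part of the original derivation), the short sketch below evaluates the energy expression E({ n_k}) above for a Fermi-ball occupation in a finite periodic box; the particle number, box size and interaction parameters are arbitrary choices, and units ħ = m = b = 1 are assumed.

    # Sketch: the energy E({n_k}) above evaluated for a Fermi-ball occupation in a
    # finite periodic box (d = 3).  Illustrative only; units hbar = m = b = 1 and
    # all parameter values below are arbitrary assumptions.
    import itertools
    import math

    N_part, L_box = 33, 10.0                 # particle number and box side
    B, a = 4.0 * math.pi, 5.0                # B = 4*pi*b^3 and the Kac parameter
    V = L_box**3
    n = N_part / V

    # single-particle kinetic energies k^2/2 on the momentum lattice k = 2*pi*m/L
    kfac = 2.0 * math.pi / L_box
    modes = sorted(0.5 * kfac**2 * (i*i + j*j + l*l)
                   for i, j, l in itertools.product(range(-6, 7), repeat=3))
    kinetic = sum(modes[:N_part])            # fill the N lowest modes (Fermi ball)

    E = (1.0 + n * B) * kinetic - a / (2.0 * V) * N_part * (N_part - 1)
    print("energy per particle:", E / N_part)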
For the subsequent thermodynamic analysis, we need to evaluate the expression for the Helmholtz free energy F(T,V,N). The
canonical partition function
Z(T,V,N)=∑_{ n_k}, ∑_k n_k=N e^-β E({ n_k})
evaluated for the system energy given in Eq.(<ref>) straightforwardly leads to the following form of the Helmholtz free energy F(T,V,N)
F(T,V,N) = (1+nB)F^id(T/1+nB,V,N)-aN^2/2V ,
where F^id(T,V,N) denotes the Helmholtz free energy of the corresponding ideal Fermi gas (i.e., consisting of non-interacting particles of the same mass as in our model) and n=N/V. Expression Eq.(<ref>) forms the starting point of thermodynamic analysis.
§ THERMODYNAMICS
§.§ Purely repulsive gas
We start with a brief analysis of the purely repulsive case a=0.
It follows from Eq.(<ref>) that the equation of state takes the form
p(T,n;B)=(1+d+2/2n B)p^id(T/1+nB,n ) ,
where p^id(T,n) is the pressure of the corresponding ideal Fermi gas as function of temperature and density, see Eq. (<ref>) below. When deriving Eq.(<ref>) the relation U^id(T,n)=d/2Vp^id(T,n) was used.
This simple result shows that apart from the multiplication by the prefactor (1+d+2/2n B) the pressure is essentially that of the free Fermi gas evaluated at the reduced temperature
T̅=T/1+n B .
On physical grounds, because of the interparticle repulsion, one expects p(T,n)≥ p^id(T,n). This is not easily seen directly from Eq.(<ref>) because on one hand the prefactor (1+d+2/2n B) increases the pressure, but on the other hand the reduced temperature T̅ is lower than the actual one, thus lowering the pressure at a given density. It is of interest to see how the overall increase of pressure can be deduced from Eq. (<ref>). This also serves as a check of consistency of our model with the thermodynamic formalism.
Let us compute
(∂ p/∂ B)_T,n=n(Dp^id(T̅,n)-(1+D nB)/(1+nB) T̅(∂ p^id(T̅,n)/∂T̅)_n) ,
where
D=d+2/2.
With the help of thermodynamic identity
(∂ U/∂ V)_T,N = -p + T(∂ p/∂ T)_n, where U is the internal energy, the above relation can be rewritten in the following form
(∂ p/∂ B)_T,n = (D-1)/(1+nB)(n^2B D[n(∂ p^id/∂ n)_T̅ - p] + n^2 (1+ nB + n^2DB^2) (∂ p^id/∂ n)_T̅).
It follows from Eq.(<ref>) that (∂ p/∂ B)_T,n >0. Indeed, the pressure is an increasing function of the density at constant temperature, and thus p^id(T̅,n) is an increasing function of the density at constant T̅. Thus the last term in Eq.(<ref>) is positive, while one can verify by direct computation using Eq. (<ref>) below that the first term is also positive. This, together with p(T,n)|_B=0=p^id(T,n), shows p(T,n)≥ p^id(T,n), as expected.
We plot the resulting isotherms for different values of B in Fig. <ref>.
Incidentally, the above argument also verifies that the isothermal compressibility is positive. This follows from
(∂ p/∂ n)_T=(∂ p/∂ n)_T̅+B/n(∂ p/∂ B)_T,n .
§.§ Equation of state and the chemical potential equation
In the previous discussion, we verified that it is thermodynamically consistent to treat the repulsive polarized Fermi gas at finite temperatures by means of the ground state correction and the effective reduced temperature T→ T/(1+nB). In what follows, we investigate this system with the attraction term included.
The (canonical) pressure reads then simply
p(T,n)=(1+d+2/2n B)p^id(T/1+nB,n )-an^2/2.
This form is similar to the classical van der Waals theory, with the hard–core contribution replaced by the combined effect of the Fermi pressure and the short–range repulsion quantified by parameter B. In Fig. <ref>, we plot the isotherms corresponding to p(T,n) and observe that they do not satisfy the stability condition (∂ p/∂ n)_T>0 for sufficiently large a and low T, which, just like in the classical van der Waals theory, marks the existence of a phase transition between high and low density phases in the system. The order parameter here is the difference in the bulk densities. We note that, at least far away from the critical temperature, the coexisting densities are such that the system changes its behavior from essentially classical to highly degenerate at the transition point. This is evident when the relevant fragments of the isotherms are compared with the high and low degeneracy asymptotics of the pressure in Fig. <ref>. This points at the possibility of observing exotic effects pertinent to highly degenerate Fermi systems in the high density phase.
Although the equation of state is very similar in appearance to the standard van der Waals theory, we wish to investigate whether the details of the transition, as quantified e.g. by the values of the critical parameters, reveal differences in comparison to the standard case stemming from the degeneracy. To overcome the lack of stability and describe the transition within the proper thermodynamic formalism, we work in the grand canonical formalism by fixing the chemical potential μ and temperature T. The actual equilibrium pressure p(T,μ) is then obtained via the Legendre transform
-p(T,μ)= inf_n≥ 0(f(T,n)-μ n) ,
where f(T,n) = F(T,V,N)/V is the Helmholtz free energy density. With the help of the relation
f(T,n)=-p(T,n)+n μ(T,n)
the Legendre transform takes the following form
-p(T,μ) = inf_n≥ 0{-(1+nB)p^id(T/1+nB,n)+(1+Bn)nμ^id(T/1+nB,n)-an^2/2-μ n} .
The ideal Fermi gas functions are given implicitly by
nλ^d = f_d/2(e^βμ^id(T,n)),
p^id(T,n) = k_B T/λ^df_d/2+1(e^βμ^id(T,n)),
where k_B is the Boltzmann's constant, λ=√(2πħ^2/mk_B T) is the thermal de Broglie wavelength, and
f_j(e^z):= - ∑_l=1^∞(-e^z)^l/l^j =-Li_j(-e^z)
is the Fermi function.
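For a quick numerical illustration (not part of the original text), the Fermi function can be evaluated through the polylogarithm; the sketch below, which assumes the mpmath library, checks the non-degenerate limit f_j(e^z) → e^z for z → -∞ and the leading degenerate behaviour f_j(e^z) ≈ z^j/Γ(j+1) for z → +∞.

    # Illustrative numerical check of the Fermi function f_j(e^z) = -Li_j(-e^z).
    # Assumes the mpmath library; not part of the original derivation.
    import mpmath as mp

    def fermi_f(j, z):
        """f_j(e^z) = -Li_j(-e^z), with z the reduced chemical potential."""
        return -mp.polylog(j, -mp.exp(z))

    # non-degenerate (classical) limit: f_j(e^z) -> e^z for z -> -infinity
    for z in (-5, -10):
        print(float(fermi_f(1.5, z)), float(mp.exp(z)))

    # degenerate limit: f_j(e^z) -> z^j / Gamma(j+1) for z -> +infinity
    # (leading term of the Sommerfeld expansion used later in the text)
    for z in (20, 50):
        print(float(fermi_f(1.5, z)), float(z**1.5 / mp.gamma(2.5)))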
In order to perform the Legendre transform and find the equation of state, we compute the derivative of the function in the brackets in Eq.(<ref>) with respect to n and set it equal to zero. Keeping in mind that the n dependence is also present in the first argument of both p^id and μ^id and using Eq.(<ref>) together with the identity
d f_j(-e^z)/d z=f_j-1(-e^z)
we obtain the chemical potential equation
μ = (1+nB)μ^id(T/1+nB,n) + (D-1) B p^id(T/1+nB,n) - a n,
whose solutions n(T,μ) are the main object of our further discussion.
In the remainder of the article, we analyze the case a>0
more closely, focusing on low temperatures and covering the cases of d=3 and d=2 separately.
§.§ Fully polarized Fermi gas in d = 3
To gain some insight we first consider T=0. In this case the chemical potential equation Eq.(<ref>) in 3D reduces to
μ=(1+nB)ε_F+3/5Bn ε_F - an ≡φ_0^(3)(n),
where ε_F=C_3 n^2/3 is the Fermi energy, with C_3=ħ^2/2m(6π^2)^2/3.
The T=0 equation has three solutions n(T,μ) as long as the derivative of φ_0^(3)(n) is negative in a finite interval of the densities. This corresponds to two-phase coexistence and a first-order transition between them. The existence of such an interval is possible only if the interparticle attraction represented by parameter a is strong enough. The critical point beyond which only one solution exists is given by
∂φ_0^(3)/∂ n =0 , ∂^2 φ_0^(3)/∂ n^2 =0.
The above equations give the critical density
n_c=1/8 B
and the critical value of the Kac parameter a_c
a_c=2 C_3 B^1/3=ħ^2/m(6π^2)^2/3 B^1/3
The first-order transition exists only for a > a_c. The critical value a_c is such that the typical attractive interaction between a pair of fermions localized in volume B, equal to a/B in the Kac scaling, and evaluated at a=a_c, is of the order of their kinetic energy ∼ħ^2/m B^2/3, which is necessarily non–zero due to the Pauli principle. This gives a_c∼ħ^2/mB^1/3 as in (<ref>). Note that no such bound is present in the classical van der Waals equation of state with hard core particles, where the transition takes place for arbitrary values of a, regardless of the radii of the particles.
The critical value of the chemical potential at T=0 is
μ_c=C_3/20 B^2/3=ħ^2/40m(6π^2/B)^2/3>0.
We also note that the function φ_0^(3)(n) is bounded from below, meaning that solutions to Eq.(<ref>) exist only for large enough values of the chemical potential. This is not surprising, as the chemical potential of the ideal gas at T=0 is positive. Non–trivial solutions of Eq.(<ref>) appear for μ≤ 0 only for a≥ a_m=1.134 a_c. This in turn imposes the constraint a_c<a<a_m=1.134 a_c for the phase transition to occur at zero temperature. For a > a_m, only the high density phase exists as a stable entity, with n>1/2B .
We note that if the critical density n_c at T=0, Eq.(<ref>), is expressed via the p-wave scattering length b one obtains n_cb^3=0.00994 which means that the gas is clearly dilute in these circumstances. This is consistent with one of our initial assumptions leading to the derivation of the model.
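For concreteness, the quoted zero-temperature critical parameters can be checked with a few lines of arithmetic; the snippet below is an illustrative aid only and assumes units ħ = m = b = 1.

    # Illustrative arithmetic check of the T = 0 critical parameters in d = 3.
    # Units hbar = m = b = 1 are an assumption made only for this check.
    import math

    b = 1.0
    B = 4.0 * math.pi * b**3                 # B in d = 3
    C3 = 0.5 * (6.0 * math.pi**2) ** (2.0 / 3.0)

    n_c = 1.0 / (8.0 * B)                    # critical density at T = 0
    a_c = 2.0 * C3 * B ** (1.0 / 3.0)        # critical Kac parameter
    mu_c = C3 / (20.0 * B ** (2.0 / 3.0))    # critical chemical potential

    print("n_c * b^3 =", n_c * b**3)         # 1/(32*pi), approximately 0.00995
    print("a_c =", a_c, " mu_c =", mu_c)

The printed value 1/(32π) ≈ 0.00995 agrees with the number quoted above up to rounding.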
We now turn to the case T>0 and introduce the dimensionless variables
n̅=nB, t=π/√(12)k_B T B^2/3/C_3, μ̅=μ B^2/3/C_3,
a̅=a /2C_3 B^1/3 = a/a_c, p̅= p B^5/3/C_3.
The chemical potential equation Eq.(<ref>) can be solved numerically. For a̅>1 the equation admits three distinct roots, provided that t is small enough and μ̅ lies in a suitable range. The equilibrium density is given by the root that yields the smallest value of the potential term in the brackets in Eq. (<ref>), which in turn yields the equilibrium pressure.
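A minimal numerical sketch of this procedure is given below. It is illustrative only: it assumes the numpy and mpmath libraries, works with the chemical potential equation directly in physical units with ħ = m = k_B = b = 1 rather than in the dimensionless variables, and uses arbitrary values of T, μ and a (whether one or three candidate roots appear depends on where this state point lies).

    # Sketch: solving the chemical potential equation numerically in d = 3 and
    # selecting the equilibrium density.  Illustrative only; assumes numpy and
    # mpmath, units hbar = m = k_B = b = 1, and arbitrary values of T, mu and a.
    import math
    import numpy as np
    import mpmath as mp

    D, B = 2.5, 4.0 * math.pi                      # D = (d+2)/2, B = 4*pi*b^3
    C3 = 0.5 * (6.0 * math.pi**2) ** (2.0 / 3.0)
    a = 1.05 * 2.0 * C3 * B ** (1.0 / 3.0)         # attraction, a = 1.05 * a_c

    def fermi(j, z):                               # f_j(e^z) = -Li_j(-e^z)
        return float(-mp.polylog(j, -mp.exp(z)))

    def mu_of_n(T, n):                             # right-hand side of the equation
        Tb = T / (1.0 + n * B)                     # reduced temperature
        lam3 = (2.0 * math.pi / Tb) ** 1.5         # thermal wavelength cubed
        lo, hi = -60.0, 60.0                       # bisection for z = mu_id/Tb
        for _ in range(60):                        # from n*lambda^3 = f_{3/2}(e^z)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if fermi(1.5, mid) < n * lam3 else (lo, mid)
        z = 0.5 * (lo + hi)
        p_ideal = (Tb / lam3) * fermi(2.5, z)
        return (1.0 + n * B) * Tb * z + (D - 1.0) * B * p_ideal - a * n

    T, mu = 0.08, 0.035                            # arbitrary illustrative state point
    ns = np.linspace(1e-4, 0.05, 150)
    res = np.array([mu_of_n(T, x) for x in ns]) - mu

    roots = [ns[i] for i in range(len(ns) - 1) if res[i] * res[i + 1] < 0.0]
    print("candidate densities:", roots)
    # Equilibrium selection: since d(f - mu*n)/dn = mu(T,n) - mu, the grand-potential
    # difference between two candidate roots is the integral of `res` between them.
    for r1, r2 in zip(roots, roots[1:]):
        sel = (ns >= r1) & (ns <= r2)
        dom = np.sum(0.5 * (res[sel][1:] + res[sel][:-1]) * np.diff(ns[sel]))
        print("grand-potential difference between", r1, "and", r2, ":", dom)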
Fig.<ref> displays the isotherms p̅(t,μ̅) of the three-dimensional fully polarized Fermi gas corresponding to different values of parameter a. In particular, the T=0 isotherms are also included in these plots and they illustrate the discussion presented above. The behavior of the isotherms depends on the value of parameter a̅. In general, the qualitative behavior of the isotherms differs for small a̅ and for large a̅. In particular, for a̅ < 1 the isotherms are smooth while for a̅ >1 one observes lines of first-order phase transitions terminating at the critical points.
The characteristics of the critical points can be determined analytically in the low temperature domain. We consider the regime
a̅ >1 with a̅ close to the boundary value 1. Then, the phase transition is present at non-zero temperatures with very low critical temperature. In this regime, the density difference between the coexisting phases is small. The densities of both coexisting phases are of the order of n_c, and the gas can be considered dilute, in agreement with our assumptions. A further advantage of restricting analysis to this regime is that we can effectively treat the problem analytically by virtue of the well–known Sommerfeld expansion applied to the pressure and chemical potential of the Fermi gas:
μ̅^id(t,n̅)≈n̅^2/3(1-t^2/n̅^4/3),
p̅^id(t,n̅)≈2/5n̅^5/3(1+5 t^2/n̅^4/3),
and thus the chemical potential equation, Eq.(<ref>), reads
μ̅≈ (1+n̅)n̅^2/3(1-t^2/n̅^4/3)
+3/5n̅^5/3(1+5 t^2/n̅^4/3)-2 a̅ n̅ ≡φ^(3)(t,n) .
We are first interested in determining the critical point at which both the first and second derivative of the function
φ^(3)(t,n̅)
with respect to n̅ vanish, see Eq.(<ref>). In the regime a̅≈ 1 one expects that the critical density is very close to the zero–temperature critical density n̅_c=1/8 and that the critical temperature is close to zero.
Thus, we expand the right–hand side of Eq.(<ref>) to the third order in n̅-n̅_c (this is the minimal order which yields three different solutions to Eq.(<ref>)) and then solve the resulting equations treating a̅-1,
n̅-n̅_c and t as small parameters. In this way one obtains
t_c≈9/16√(3/11)(a̅-1)^1/2
n̅_c≈1/8+21/22(a̅-1)
μ̅_c≈1/20-5/11(a̅-1) .
These equations determine the critical point in the regime a̅≈ 1, see Fig. <ref>. Note that t_c vanishes as a approaches a_c while n_c tends to 1/(8B), in accordance with the zero temperature calculation.
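As an illustration, the critical point of φ^(3)(t,n̅) can also be located numerically and compared with the asymptotic expressions above; the sketch below assumes numpy and scipy, uses an arbitrary example value a̅ = 1.05, and takes its initial guess from the asymptotic formulas (which are accurate only for a̅ close to 1).

    # Sketch: locating the critical point of phi^(3)(t, n) numerically.
    # Illustrative only; assumes numpy and scipy, and abar = 1.05 is an example.
    import numpy as np
    from scipy.optimize import fsolve

    abar = 1.05

    def phi(n, t):
        return ((1.0 + n) * n ** (2.0 / 3.0) * (1.0 - t**2 / n ** (4.0 / 3.0))
                + 0.6 * n ** (5.0 / 3.0) * (1.0 + 5.0 * t**2 / n ** (4.0 / 3.0))
                - 2.0 * abar * n)

    def conditions(x, h=1e-4):                 # first and second n-derivatives
        n, t = x
        d1 = (phi(n + h, t) - phi(n - h, t)) / (2.0 * h)
        d2 = (phi(n + h, t) - 2.0 * phi(n, t) + phi(n - h, t)) / h**2
        return [d1, d2]

    # initial guess from the asymptotic expressions (valid for abar close to 1)
    t0 = 9.0 / 16.0 * np.sqrt(3.0 / 11.0) * np.sqrt(abar - 1.0)
    n0 = 0.125 + 21.0 / 22.0 * (abar - 1.0)
    n_c, t_c = fsolve(conditions, [n0, t0])
    print("numerical:  n_c =", n_c, " t_c =", t_c)
    print("asymptotic: n_c =", n0, " t_c =", t0)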
Importantly, we note the power law behavior of t_c∼√(a-a_c) and n̅_c in d=3, in contrast with the classical van der Waals gas where the critical temperature is linear in a. Moreover, in the next section we shall show that in the case of
t_c the power law behavior does not hold in d=2. This difference is rooted in the different scaling of the Fermi energy with the density in different dimensionalities, and marks the sensitivity of the transition to the Pauli pressure.
§.§ Fully polarized Fermi gas in d=2
In order to discuss the two-dimensional systems we introduce the dimensionless variables
t=k_B T B/a_0, n̅=n B, a̅=a/a_0, μ̅=μ B/a_0,
where a_0=2πħ^2/m. At this point we note that the dimensionless parameters a̅ are defined in d=2 and d=3 cases in such a way that their critical value is 1 both for d=2 and d=3. In the T=0 case the chemical potential equation takes the simple form
μ̅=3/2n̅^2+(1-a̅)n̅.
For a̅≤ 1 only one solution n̅(μ̅) exists and corresponds to μ̅>0. For a̅>1 two solutions exist for a certain range of negative μ̅ values but only one of them corresponds to a stable minimum of the expression in the curly brackets on the right hand side of Eq.(<ref>) evaluated at T=0. Hence, in the d=2 case there is no phase transition at T=0, in contrast to the d=3 case.
In the T>0 case Eq.(<ref>) reads
μ̅ = t ln(exp(n̅(1+n̅)/t) -1)
+t^2/(1+n̅)^2f_2(exp(n̅(1+n̅)/t)-1)-a̅n̅≡φ^(2)(t, n̅) .
The solutions of this equation represent the equilibrium density as a function of t and μ̅. In Fig. <ref> we plot the resulting isotherms for different temperatures; for comparison with the d=3 case see Fig. <ref>. We observe the existence of a first–order transition provided a̅>1 and the temperature is small enough. Similarly as in 3D and in contrast to the classical van der Waals theory, we observe that a high enough attraction is needed for the transition to occur. However, in contrast to the d=3 case, the critical value of a does not depend on B, as it drops out when the kinetic energy ∼ħ^2/m B is equated with the Kac attraction of a pair of particles enclosed in a 2D volume B, a/B. Another difference with the 3D gas is that the density of the gaseous phase tends to zero as t→ 0, regardless of the value of parameter a̅. This is in accordance with the discussion of the t=0 case above. In fact, in d=3 the gaseous phase density attains a non–zero value at T=0 provided 1<a̅<1.134. Thus one observes a qualitative difference in the behavior of two- and three-dimensional systems, summarized in Fig. <ref> where we present the coexistence lines in the T,μ space in the two dimensionalities, for different values of a.
This different behavior of two- and three-dimensional systems is also seen in the asymptotic behavior of the critical temperatures and densities expressed as functions of corresponding parameter a̅ close to its minimal value a̅=1. From the technical point of view the distinction between d=2 and d=3 cases is seen already at the level the Sommerfeld expansion. In d=2 it does not provide the correct asymptotic behavior because it neglects the exponential corrections which turn out to be relevant and are responsible for the appearance of the low–density solution. In order to find the critical point we equate to zero the first- and second-order derivatives of the function φ^(2)(t,n̅) in Eq.(<ref>) and solve the resulting equations in which we keep the exponential factors and the polynomial terms in leading order in n̅ and t. The details of this calculation are provided in the Appendix. The result is
t_c ≈ - a̅-1/3W_-1(1-a̅/e)≈ - a̅-1/3ln(a̅-1)
n̅_c ≈a̅-1/3 +a̅-1/3ln(a̅-1)
where W_-1(z) denotes the lower branch of the Lambert function <cit.>, i.e. the negative valued solution of we^w=z for small z, behaving asymptotically as ln(- z) for small |z|.
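The asymptotic expressions above are straightforward to evaluate; the snippet below (illustrative only, assuming scipy) compares the Lambert-function form of t_c with its leading logarithmic approximation for a few example values of a̅.

    # Illustrative evaluation of the d = 2 critical-point asymptotics.
    # Assumes scipy; the values of abar are arbitrary examples.
    import numpy as np
    from scipy.special import lambertw

    for abar in (1.001, 1.01, 1.05):
        w = lambertw((1.0 - abar) / np.e, k=-1).real   # lower branch W_{-1}
        t_c = -(abar - 1.0) / (3.0 * w)
        t_log = -(abar - 1.0) / (3.0 * np.log(abar - 1.0))
        n_c = (abar - 1.0) / 3.0 + (abar - 1.0) / (3.0 * np.log(abar - 1.0))
        print(abar, t_c, t_log, n_c)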
In Fig. <ref>, we plot the values of the critical temperature found numerically together with the asymptotic formulae Eq.(<ref>). We observe that n_c approaches zero as a̅ reaches its critical value, in accordance with the lack of the transition at T=0, where the low density phase reduces to vacuum. Remarkably, the asymptotic form of the expression for the critical temperature does not reduce to a simple power law characterizing three-dimensional systems, Eq. (<ref>), still less the classical van der Waals critical temperature T_c∼ a. This highlights the sensitivity of the system's properties to the embedding dimensionality, rooted in different scaling laws of the Fermi pressure in different dimensions and, together with the complete absence of the transition at T=0 in d=2, marks an intriguing departure from the standard van der Waals theory which is due to the Pauli principle.
§ SUMMARY
We have constructed and analyzed a simple statistical mechanical model of a fully polarized dilute Fermi gas with short–range repulsion and long–range attraction. It relies on the introduction of suitably modified one–particle energy levels whose form agrees with recently obtained bounds on the ground state energy of the repulsive gas and is here heuristically justified via a Hartree–Fock type calculation. The repulsion and the attraction are quantified by two parameters: the p–wave scattering length b, and the integral of the attractive part of the potential -a. The derived form of the approximate system's energy enables us to find the Helmholtz free energy, which is then employed to find the equation of state via the Legendre transform. Our analysis shows that the system undergoes a first–order phase transition in dimensions d=3 and d=2, provided the parameter a is large enough. We evaluate the critical temperature and density for a close to the minimal value necessary for the transition to occur. We find a remarkable quantitative difference in their behavior as function of a corresponding to the d=3- and d=2-systems. In addition, the transition is absent at T=0 in d=2 for the entire range of b and a values. On the contrary, in d=3, there exists a window of parameter a-values for which the transition also takes place in the ground state. These discrepancies between the d=3 and d=2 cases are ultimately rooted in the different scaling relations connecting the Fermi pressure and the density in these dimensionalities. In this way, the evidence of a nontrivial phase behavior of the polarized Fermi gas has been provided within a study which goes beyond the usual van der Waals theory. Our analysis suggests that one can expect to observe simple manifestations of the Pauli principle at the macroscopic level in similar systems.
Acknowledgements. We thank Asbjørn Bækgaard Lauritsen for helpful discussions. Financial support from the National Science Center of Poland (NCN) via grant 2020/37/B/ST2/00486 (K.M.) and 2021/43/B/ST3/01223 (M.N.) is gratefully acknowledged.
PauliW. Pauli, General Principles of Quantum Mechanics, Springer-Verlag, Berlin, Heidelberg, New York (1980).
LiebThe Stability of Matter: From Atoms to Stars, Selecta of Elliott H. Lieb, Edited by W. Thirring, Springer-Verlag, Berlin, Heidelberg, New York (1997).
LL1972E. H. Lieb and J. L. Lebowitz, Advances in Mathematics 9, 316 (1972).
BohrA. Bohr, Rotational states of atomic nuclei, PhD Thesis, Ejnar Munksgaards Forlag, København (1954).
Dexheimer2018V. Dexheimer, L. T. T. Soethe, J. Roark, R. O. Gomes, S. O. Kepler, and S. Schramm, Int. J. Mod. Phys. E 27, 1830008 (2018).
Vovchenko2015 V. Vovchenko, D. V. Anchishkin, and M. I. Gorenstein, Phys. Rev. C 91, 064314 (2015).
Thomas2002K. M. O’Hara, S. L. Hemmer, M. E. Gehm, S. R. Granade, and J. E. Thomas, Science 298 2179 (2002).
MC2023G. Bertaina, M. G. Tarallo, and S. Pilati, Phys. Rev. A 107.5 (2023).
LauSeA.B. Lauritsen and R. Seiringer, J. Funct. Anal. 286.7 (2024).
BoronatJ. Pera and J. Boronat, Am. J. Phys. 91, 90–101 (2023).
Kh2022N. D. Khoa, N. H. Tan, and D. T. Khoa, Phys. Rev. C 105, (2022).
Ta2023H. Tajima et al., Phys. Rev. C 108, (2023).
Lan2 L.D. Landau and E. M. Lifshitz, Statistical Physics, vol. 2, Pergamon Press, Oxford, UK (1980).
BoccatoC. Boccato, Rev. Math. Phys. vol. 33, No. 01, 2060006 (2021)
HuangK. Huang, Statistical Mechanics, Wiley and Sons, New York, USA (1963)
CallenH. Callen, Thermodynamics and an Introduction to Thermostatistics, Second Ed., Wiley and Sons, New York, USA (1985).
GRJin2003M. Greiner, C. A. Regal, and D. S. Jin, Nature 426 537 (2003).
Salomon2003T. Bourdel, J. Cubizolles, L. Khaykovich, K. Magalhaes, S.
Kokkelmans, G. Shlyapnikov, and C. Salomon, Phys. Rev. Lett. 91 020402 (2003).
Jochim2003bS. Jochim, M. Bartenstein, A. Altmeyer, G. Hendl, S. Riedl,
C. Chin, J. Hecker-Denschlag, and R. Grimm, Science 302, 2101 ( 2003).
Ketterle2003bM. W. Zwierlein, C. A. Stan, C. H. Schunk, S. M. F. Raupach,
S. Gupta, Z. Hadzibabic, and W. Ketterle, Phys. Rev. Lett. 91 250401 (2003).
IKS2007M. Inguscio, W. Ketterle, and C. Salomon, Editors, Ultra Cold Fermi Gases, Proceedings of the International School of Physics "Enrico Fermi", IOS Press, (2007).
Zwerger2008I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008) .
GPS2008S. Georgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 80, 1215 (2008) .
HL1976P. C. Hemmer and J. L. Lebowitz, in Critical Phenomena nad Phase Transitions, vol. 5b, Academic Press, New York (1976).
Blum2004B. E. Granger and D. Blume, Phys. Rev. Lett. 92 133202 (2004).
GuarRadz2007V. Gurarie and L. Radzihovsky, Annals of Physics 322, 2 (2007).
GRA2005V. Gurarie, L. Radzihovsky, and A.V. Andreev, Phys. Rev. Lett. 94 230403 (2005).
SGPeng2019S.-G. Peng, J. Phys. A: Math. Theor. 52, 245302 (2019).
YosUeda2015S. M. Yoshida and M. Ueda, Phys. Rev. Lett. 115, 135303 (2015).
Zhang2015Z. Yu, J. H. Thywissen, and S. Zhang, Phys. Rev. Lett. 115, 135304 (2015).
YosUeda2016S. M. Yoshida and M. Ueda, Phys. Rev. A 94, 033611 (2016).
Hu2016S.-G. Peng, X.-J. Liu, and H. Hu, Phys. Rev. A 94, 063651 (2016).
JianZhou2018Shao-Jian Jiang and Fei Zhou, Phys. Rev. A 97, 063606 (2018).
Zhang2019S. Ding and S. Zhang, Phys. Rev. Lett. 123, 070404 (2019).
MakEns2023J. Maki and T. Enss, Phys. Rev. A 107, 023317 (2023).
LambertJ. H. Lambert, Acta Helveticae physico-mathematico-anatomico-botanico-medicae, Band III, (1758).
NIST NIST Digital Library of Mathematical Functions. https://dlmf.nist.gov/, Release 1.2.0 of 2024-03-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
§ APPENDIX
Here we sketch the derivation of the asymptotic laws Eq.(<ref>) governing the behavior of the critical temperature and critical density of two-dimensional system in the vicinity of the minimal value of parameter a̅. This minimal value a̅=1 corresponds to the onset of the phase transition. We start with the chemical potential equation μ̅ = φ^(2)(t,n̅), Eq.(<ref>).
The critical point parameters t_c(a̅),n̅_c(a̅) solve the equations below
0=∂φ^(2)/∂n̅ =-a̅+(1+2n̅)^2/(1+n̅) e^(n̅(1+n̅)/t)/(e^(n̅(1+n̅)/t)-1)
-2t^2/(1+n̅)^3 f_2(e^(n̅(1+n̅)/t)-1),
0= ∂^2 φ^(2)/∂n̅^2 =(1+2n̅)/(t(1+n̅)^2) e^(n̅(1+n̅)/t)/(e^(n̅(1+n̅)/t)-1)^2
(-1-5n̅-8n̅^2-4n̅^3+3t(e^(n̅(1+n̅)/t)-1))
+6t^2/(1+n̅)^4 f_2(e^(n̅(1+n̅)/t)-1).
We are interested in the behavior of t_c and n̅_c in the regime a̅≈ 1, where the coexistence line is very short and accordingly t_c is small. From the absence of the transition at t=0 we conclude that also n_c is close to zero. Accordingly, we expand the terms in Eq.(<ref>) that are polynomial or rational functions in variables t and
n̅ to leading order. On the other hand, since a priori we do not know the behavior of n̅_c/t_c
we need to keep terms like e^n̅_c(1+n̅_c)/t_c in the calculations. In this way the second equation above takes the following form
0 ≈e^n̅(1+n̅)/t/(e^n̅(1+n̅)/t -1)^2 (-1-7n̅) + 3t(e^(n̅(1+n̅)/t)-1)+
6t^3/(1+n̅)^2 f_2(e^(n̅(1+n̅)/t)-1).
We consider the asymptotic regime a̅↘1 in which t_c, n̅_c and their ratio t_c/n̅_c tend to zero. In this regime Eq.(<ref>) simplifies to
3 t_c - e^- n̅_c(1+n̅_c)/t_c(1+ 3t_c +7n̅_c) = 0 ,
where f_2(e^z)≈z^2/2 for large z has been used.
We use this relation to get rid of the exponential terms in the upper equation in Eq.(<ref>) and obtain
n̅_c ≈a̅ -1/3 - t_c.
When inserted back into Eq.(<ref>), it gives
e^((a̅+2)/3 ((a̅-1)/(3t_c)-1)) - 1=(1 + 7(a̅-1)/3-7t_c)/(3t_c).
Thus, in the asymptotic regime a̅↘1 one obtains <cit.>
t_c ≈ - a̅-1/3W_-1(1-a̅/e)≈ - a̅-1/3ln(a̅-1)
n̅_c ≈a̅-1/3 +a̅-1/3ln(a̅-1) ,
where we have chosen the lower branch of the Lambert W function because 1-a̅ is small and negative and ea̅ -1/3t_c is large, and used the asymptotic behavior of the Lambert function W_-1(x)≈ln(-x) for x small and negative.
|
http://arxiv.org/abs/2409.02763v1 | 20240904143911 | Federated Quantum-Train with Batched Parameter Generation | [
"Chen-Yu Liu",
"Samuel Yen-Chi Chen"
] | quant-ph | [
"quant-ph"
] |
Federated Quantum-Train with Batched
Parameter Generation
The views expressed in this article are those of the authors and do not represent the views of Wells Fargo. This article is for informational purposes only. Nothing contained in this article should be construed as investment advice. Wells Fargo makes no express or implied warranties and expressly disclaims all legal, tax, and accounting implications related to this article.
Chen-Yu Liu (Graduate Institute of Applied Physics, National Taiwan University, Taipei, Taiwan; [email protected])
Samuel Yen-Chi Chen (Wells Fargo, New York, NY, USA; [email protected])
September 4, 2024
====================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this work, we introduce the Federated Quantum-Train (QT) framework, which integrates the QT model into federated learning to leverage quantum computing for distributed learning systems. Quantum client nodes employ Quantum Neural Networks (QNNs) and a mapping model to generate local target model parameters, which are updated and aggregated at a central node. Testing with a VGG-like convolutional neural network on the CIFAR-10 dataset, our approach significantly reduces qubit usage from 19 to as low as 8 qubits while reducing generalization error. The QT method mitigates overfitting observed in classical models, aligning training and testing accuracy and improving performance in highly compressed models. Notably, the Federated QT framework does not require a quantum computer during inference, enhancing practicality given current quantum hardware limitations. This work highlights the potential of integrating quantum techniques into federated learning, paving the way for advancements in quantum machine learning and distributed learning systems.
Quantum Machine Learning, Federated Learning, Quantum-Train
§ INTRODUCTION
Quantum computing (QC) promises potential computational advantages for certain tasks over classical computers, particularly in areas like machine learning (ML) and combinatorial optimization problems <cit.>. Meanwhile, the advances in classical ML and artificial intelligence (AI) have demonstrated amazing capabilities in various tasks <cit.>. With the progress in quantum hardware, it is natural to consider the combination of these two fascinating technologies.
While existing quantum computing devices still suffer from noise and imperfections, a hybrid quantum-classical computing paradigm <cit.> has been proposed, which divides computational tasks between quantum and classical computing resources according to their strengths, leveraging the best of both worlds.
Variational quantum algorithms (VQAs) <cit.> are the fundamental algorithms framework under this hybrid paradigm. Leading quantum machine learning (QML) methods largely rely on these variational algorithms.
Variational quantum circuits (VQCs) are the building blocks of existing QML models <cit.>. It has been shown theoretically that VQC can outperform classical models when certain conditions are met <cit.>. VQC-based QML models have been shown to be successful in various ML tasks ranging from classification <cit.>, time-series modeling <cit.>, audio and language processing <cit.>, quantum algorithm reconstruction <cit.>, and reinforcement learning <cit.>.
The great success of modern AI/ML techniques depends not only on good model architecture design but also on the volume of high-quality data, and QML is no exception. These data requirements also raise privacy concerns in QML research and applications. Among the various methods to mitigate such privacy concerns, federated learning (FL) is one in which the participating parties share locally trained models, but not the actual training data, to avoid data leakage.
Several FL methods have been proposed in the realm of QML to enhance the privacy-preserving features <cit.>.
While effective, these quantum FL (QFL) methods require the trained models to be used on quantum devices in the inference stage. This poses certain challenges at the moment, as there are limited real quantum resources available and it is unclear whether the proposed methods are realistic in real-world scenarios.
In this paper, we propose a Quantum-Train (QT)-based <cit.> QFL method in which the quantum neural networks (QNN) are trained to generate the well-performing classical neural network weights in the federated setting. Once the training is finished, the QNN is not used during the inference phase.
Our main contributions are:
* Addressing data encoding issue in QFL: The QT approach integrated with FL simplifies data handling by using classical inputs and outputs, avoiding the complexities and potential information loss of encoding large datasets into quantum states. This method retains quantum computational advantages without the scaling difficulties of quantum data encoding.
* Reduction of qubit count in QT: Utilizing the batched parameter generation approach, we reduce the qubit usage from ⌈log_2 m ⌉ to ⌈log_2 ⌈m/n_mlp⌉⌉, compared to the original QT proposal. Here, m is the number of parameters of the target classical model and n_mlp is the batch size in the parameter generation approach. In the example examined in this study, qubit usage is reduced from 19 to as low as 8 qubits.
* Inference without quantum hardware: The training results of QT are designed to operate seamlessly on classical hardware, eliminating the need for quantum computing resources, unlike conventional QML and QFL. This feature enhances its applicability, especially given the current limited access to quantum computers compared to classical counterparts.
§ FEDERATED LEARNING
Federated Learning (FL) <cit.> has emerged in response to growing privacy concerns associated with large-scale datasets and cloud-based deep learning <cit.>. In the FL framework, the primary components are a central node and multiple client nodes. The central node maintains the global model and collects trained parameters from the client nodes. It then performs an aggregation process to update the global model, which is subsequently shared with all client nodes. The client nodes locally train the received model using their own data, which typically constitutes a small subset of the overall dataset.
The concept of FL has been explored in the field of QML since the publications <cit.>. In <cit.>, the authors examined the simplest form of QFL utilizing hybrid quantum-classical models. In this approach, a pre-trained CNN compresses input images into a dimension manageable by a VQC. The locally trained hybrid model parameters are then uploaded to a central server, which aggregates these parameters and distributes the updated model to all participants. This framework has been further enhanced to process sequential data using a federated quantum LSTM network <cit.>. The study <cit.> delves into a more advanced scenario where QFL processes quantum states instead of classical images.
While QFL can mitigate the risk of direct leakage of training datasets, it remains vulnerable to attacks that can extract training data entries from the trained models themselves. Such attacks pose a significant threat to data privacy. To address this issue, <cit.> explores the integration of differentially-private gradient optimizers with QFL, aiming to enhance the privacy of QML models.
QFL can be further extended to scenarios where training is conducted on encrypted data, as demonstrated in <cit.>.
QFL can be applied in diverse scenarios, including autonomous vehicles <cit.> and quantum fuzzy learning <cit.>.
§ VARIATIONAL QUANTUM CIRCUITS AND QUANTUM-TRAIN
At the core of the QML scheme, VQCs play a pivotal role by providing the parameterized ansatz that forms the function approximator for learning tasks. A typical VQC used as a QNN is depicted on the left side of Fig. <ref>. The process begins with the initial state |0 ⟩^⊗ N, where N is the number of qubits. This is followed by parameterized single-qubit and two-qubit unitary operations U_3 and controlled-U_3 (CU_3) gates, characterized by their matrix representations:
U_3(μ, φ, λ) = [ [ cos(μ/2) -e^i λsin(μ/2); e^i φsin(μ/2) e^i(φ + λ)cos(μ/2) ]],
CU_3 = I ⊗ |0⟩⟨ 0 | + U_3(μ, φ, λ) ⊗ |1⟩⟨ 1 |,
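As a small illustrative transcription (not taken from the paper's implementation), the two matrices can be written out numerically and checked for unitarity; the sketch below assumes numpy and uses arbitrary angle values.

    # Sketch: numerical form of U_3 and CU_3 as defined above, with a unitarity check.
    # Illustrative only; assumes numpy, and the angle values are arbitrary.
    import numpy as np

    def u3(mu, phi, lam):
        return np.array([[np.cos(mu / 2), -np.exp(1j * lam) * np.sin(mu / 2)],
                         [np.exp(1j * phi) * np.sin(mu / 2),
                          np.exp(1j * (phi + lam)) * np.cos(mu / 2)]])

    def cu3(mu, phi, lam):
        # CU_3 = I (x) |0><0| + U_3 (x) |1><1|, exactly as written above
        p0, p1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
        return np.kron(np.eye(2), p0) + np.kron(u3(mu, phi, lam), p1)

    U, CU = u3(0.3, 0.7, 1.1), cu3(0.3, 0.7, 1.1)
    print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: U_3 is unitary
    print(np.allclose(CU.conj().T @ CU, np.eye(4)))  # True: CU_3 is unitary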
The parameterized quantum state (QNN) can then be described as:
|ψ(θ) ⟩ = (∏_i CU_3^i, i+1∏_j U_3^j )^L |0⟩^⊗ N,
where i and j are qubit indices, and L is the number of repetitions. The proposed vanilla QT <cit.> is as follows: consider a target neural network model with parameters ω, where ω = (ω_1, ω_2, …, ω_m) and m is the total number of parameters. Instead of updating all m parameters as in conventional ML, QT utilizes |ψ (θ) ⟩, a QNN with N = ⌈log_2 m ⌉ qubits, to generate 2^N distinct measurement probabilities |⟨ϕ_i | ψ (θ) ⟩|^2 for i ∈{1, 2, …, 2^N}, where |ϕ_i ⟩ is the i-th basis state. These probabilities are then input into a mapping model G_β, a multi-layer perceptron (MLP) type classical neural network with parameters β.
The first m basis measurement result probabilities, along with the vector representations of the corresponding basis states |ϕ_i ⟩, are mapped from values bounded between 0 and 1 to -∞ and ∞ using the following equation:
G_β (|ϕ_i ⟩, |⟨ϕ_i | ψ (θ) ⟩|^2) = ω_i, i = 1, 2, …, m.
Here, it can be observed that the parameter ω of the target model is generated from the QNN |ψ(θ) ⟩ and the mapping model G_β. Notably, the required number of parameters for both θ and β scales as O(polylog(m)) <cit.>, allowing for the effective training of the target model with m parameters by only tuning O(polylog(m)) parameters of θ and β. Unlike conventional QML approaches, which require the QNN during the inference stage, the QT approach decouples the quantum computing resource after training. Since the QNN is used solely for generating the parameters of the target model, the resulting trained model is a classical neural network. This classical model can then be executed entirely on classical computing hardware. This characteristic is particularly practical given that quantum computing hardware is currently a relatively rare and expensive resource.
§ QUANTUM-TRAIN WITH BATCHED PARAMETER GENERATION
Building upon the previously proposed QT method, which generates a single parameter of the target network model from a single basis measurement probability, this study introduces a batch parameter generation approach. This method generates a batch of parameters from a single basis measurement probability, as illustrated in Fig. <ref>. In this approach, the m parameters of the target model are divided into n_ch chunks, each containing n_mlp parameters, such that n_ch = ⌈ m / n_mlp⌉. The mapping model, now denoted as G_β, takes as input |ϕ_i ⟩ and |⟨ϕ_i | ψ (θ) ⟩|^2 and generates a batch of parameters in ω of size n_mlp:
G_β (|ϕ_i ⟩, |⟨ϕ_i | ψ (θ) ⟩|^2) = ω⃗_i, i = 1, 2, …, n_ch,
ω⃗_i = ( ω_i,1, ω_i,2, ... ω_i,j ), j = 1,2, …, n_mlp.
This setup is realized through a decoder-like architecture of the MLP in the mapping model G_β, where the output size is expanded from 1 to n_mlp. Consequently, the qubit usage N is reduced from N = ⌈log_2 m ⌉ to
N = ⌈log_2 n_ch⌉ = ⌈log_2 ⌈m/n_mlp⌉⌉,
effectively saving approximately ⌈log_2 n_mlp⌉ qubits compared with the original QT proposal; the original method can be considered a special case with n_mlp = 1. Reducing qubit usage also mitigates the issue of the exponential requirement of measurement shots, as mentioned in the original QT study. The remaining training process is similar to the vanilla QT method, as depicted in Fig. <ref>.
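The saving is easy to make concrete; the short check below (illustrative only) evaluates the qubit-count expression above for the target model used in the experiments (m = 285226 parameters).

    # Illustrative check of the qubit count N = ceil(log2(ceil(m / n_mlp))).
    import math

    m = 285226                                   # parameters of the target CNN
    print("vanilla QT:", math.ceil(math.log2(m)), "qubits")                  # 19
    for n_mlp in (2000, 1000, 500):
        n_ch = math.ceil(m / n_mlp)
        print("n_mlp =", n_mlp, "->", math.ceil(math.log2(n_ch)), "qubits")  # 8, 9, 10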
§ FEDERATED QUANTUM-TRAIN
Following the original idea of FL and QFL, we introduce the concept of the QT model within the federated framework. In this approach, each quantum client node employs QNN |ψ (θ) ⟩ and mapping model G_β to generate the local target model parameters. During each training round, every quantum client nodes update their QNN parameters and the associated mapping model parameters based on their local datasets. These updated parameters are sent to the central node, where they are aggregated to update the global model, as depicted in Fig. <ref>. This process ensures that the global model benefits from the QT performed at each client node, leading to improved performance and efficiency. By integrating the QT model into FL, we leverage the advantages of quantum computing to reduce the number of training parameters and enhance the scalability of distributed learning systems. Notably, compared to traditional QFL, federated QT does not require a quantum computer during the inference stage.
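A compact sketch of one such training round is given below. It is illustrative only: the aggregation rule (plain parameter averaging) and the train_locally routine are assumptions made for the example and are not a specification taken from this work.

    # Sketch of one federated round for the QT setting.  Illustrative only: the
    # averaging rule and train_locally() are assumptions, not the authors' method.
    import numpy as np

    def aggregate(client_params):
        """Average a list of {'theta': ..., 'beta': ...} parameter dictionaries."""
        return {key: np.mean([c[key] for c in client_params], axis=0)
                for key in client_params[0]}

    def federated_round(global_params, clients, local_epochs=1):
        updates = []
        for client in clients:
            local = {k: v.copy() for k, v in global_params.items()}   # broadcast
            local = client.train_locally(local, epochs=local_epochs)  # hypothetical
            updates.append(local)
        return aggregate(updates)                                     # new global model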
§ RESULT AND DISCUSSION
To examine the applicability of the proposed federated QT framework, we tested it using a VGG-like convolutional neural network (CNN) structure with the CIFAR-10 dataset. The target CNN model has 285226 parameters. We tested three different QT setups: n_mlp∈{2000, 1000, 500}, while fixing the repetition L = 5. The required qubit usage, derived from Eq. <ref>, is 8, 9, and 10 qubits, respectively. Compared to the original QT with the same CNN model <cit.>, which required 19 qubits, our new batched parameter generation significantly reduces the qubit usage. Fig. <ref> illustrates the number of model parameters for the models investigated in this study, with the corresponding value of n_mlp indicated for each model.
In the upper row of Fig. <ref>, the Cross-Entropy loss for the CIFAR-10 image classification task, involving 10 classes over multiple rounds, is presented with different setups of local epochs and n_mlp, while fixing the number of clients at 4. It can be observed that a larger number of local epochs leads to lower loss values. This outcome is expected, as the model undergoes more frequent updates, providing more opportunities to correct incorrect predictions.
In the lower row of Fig. <ref>, the local epoch is fixed to 1, and the effect of varying the number of clients is investigated. In this investigation, the dataset is divided into as many pieces as the number of clients. Interestingly, increasing the number of clients results in better performance in terms of training loss. This improvement can be attributed to the flexibility provided by different local models, which adjust distinct sets of parameters corresponding to different parts of the dataset. The parameter aggregation step then incorporates these updates from diverse perspectives, enhancing the overall model performance.
A noticeable trend in the figures is that models with more parameters tend to perform better in terms of training loss. This can be attributed to the increased expressiveness of the corresponding models, which enhances their ability to fit the training data. However, this observation only indicates the model’s effectiveness on the training dataset and does not necessarily reflect its general performance on unseen data.
As illustrated in Fig. <ref>, we present the testing and training accuracy of the models investigated in Fig. <ref>. A notable observation is the significant overfitting in the purely classical case. While the training accuracy of the classical model is extremely high, its testing accuracy is slightly lower than that of the n_mlp = 2000 case. This behavior underscores an advantage of the QT method, as highlighted in previous studies <cit.>: the QT method can reduce the deviation between training and testing accuracy, which is proportional to the generalization error. Moreover, our batched parameter generation approach not only significantly reduces qubit usage but also preserves the advantage of generalization error reduction inherent in the vanilla QT method.
While there is no clear trend for different local epochs and the number of clients in the classical, n_mlp = 2000, and n_mlp = 1000 cases, the n_mlp = 500 case shows an increase in both training and testing accuracy with an increase in local epochs and the number of clients. This behavior demonstrates that the federated framework combined with QT can improve models with extreme compression, such as the n_mlp = 500 case, which uses only about 10% of the original CNN model’s parameters.
§ CONCLUSION
In this work, we introduced the Federated QT framework, integrating the QT model into federated learning to leverage quantum computing for distributed learning systems. Each quantum client node employs QNNs and a mapping model G_β to generate local target model parameters. These parameters are updated based on local datasets and aggregated at a central node, enhancing the global model through quantum-enhanced training.
Our experiments, using a VGG-like CNN on the CIFAR-10 dataset, demonstrate the efficacy of the Federated QT framework. We tested three different QT setups with varying n_mlp values, significantly reducing the required qubit usage compared to the original QT method. Specifically, our batched parameter generation approach reduced qubit usage from 19 to as low as 8, while maintaining the benefits of generalization error reduction.
Results indicate that models with more parameters perform better in training loss due to increased expressiveness, but overfitting was observed in purely classical models. The QT method mitigated this issue, resulting in a closer alignment between training and testing accuracy. The federated framework combined with QT also showed improved performance in highly compressed models, such as the n_mlp = 500 case, which uses only about 10% of the original CNN model’s parameters.
The Federated QT framework provides a scalable and efficient approach to distributed learning, utilizing quantum computing to reduce training parameters and enhance model performance. Notably, QT does not require a quantum computer during the inference stage, making it highly practical given the current limitations of quantum hardware. Our findings highlight the practical benefits of integrating quantum techniques into federated learning, paving the way for future advancements in QML and distributed learning systems.
|
http://arxiv.org/abs/2409.02434v1 | 20240904042510 | Context-Aware Agent-based Model for Smart Long Distance Transport System | [
"Muhammad Raees",
"Afzal Ahmed"
] | cs.MA | [
"cs.MA"
] |
Muhammad Raees ([email protected])
Afzal Ahmed ([email protected])
Mirpur University of Science and Technology, Mirpur, AJK, Pakistan
§ ABSTRACT
Long-distance transport plays a vital role in the economic growth of countries. However, few systems have been developed for the monitoring and support of long-route vehicles (LRV). Sustainable and context-aware transport systems with modern technologies are needed. We model a long-distance vehicle transportation monitoring and support system in a multi-agent environment. Our model incorporates the long-distance vehicle transport mechanism through agent-based modeling (ABM). The model follows the design protocol of ABM called Overview, Design and Details (ODD). The model assumes that every category of agents offers information as a service. Hence, a federation of services through a protocol for the communication between sensors and software components is desired. Such integration of services supports the monitoring and tracking of vehicles on the route. The model simulations provide useful results for the integration of services based on smart objects.
Vehicular Networks Long-Distance Transport Service Federation
§ INTRODUCTION
The smart transport sector positively impacts the economy which is vital for growth, social welfare, and development <cit.>. Thus, detecting changes and forecasting future freight flows and freight transport demand is a task of great importance <cit.>.
Long-distance freight transfer booms the economy and generates a wide range of employment for common people and business opportunities for investors <cit.>.
However, in developing countries like Pakistan, where few systems have been developed for the tracking or monitoring of long-distance vehicles, this mode of transportation is not very productive <cit.>, and the economy is harmed by long delays in the transportation of goods. Common problems faced by long-distance vehicles range from small breakdowns to stolen goods or vehicles.
Other difficulties, such as adverse weather and a lack of information about rest areas, fuel stations, vehicle service areas, road police, or ambulance services in an emergency, arise from minimal or no communication with the vehicle owner company and other help services. The problems also extend to local public authorities, ranging from road blockages at the local, regional, or national level to the major economic issue of goods not reaching their destination.
Therefore, a need arises for a sustainable transport system incorporated with modern technologies that is capable of fulfilling
the transport needs of society. At the same time, there is a need to hinder the negative effects of long-distance transportation.
By incorporating various types of technologies currently available and other infrastructural measures, it is often possible to influence how these actions are selected and executed.
A considerable amount of research has been done on the scheduling and routing of long-distance vehicles. We propose a model for long-distance vehicle transportation that captures the transport mechanism through agent-based modeling and is described with the standard ABM design protocol Overview, Design concepts, and Details (ODD). Since we assume that every category of agent offers its information as a service, a service-oriented protocol is needed for communication between sensors and software components so that vehicles can be monitored and tracked along the route. For the service protocol, we incorporate concepts from the Internet of Things and mobile ad-hoc networks <cit.>.
Our proposed model incorporates the different services offered by each entity.
This section presented the introduction of our work. The remainder of this paper is organized as follows: In Section 2, we briefly describe some background studies for a better understanding of used concepts. Section 3 illustrates our ODD agent-based model. Section 4 describes a relevant scenario as we have made scenario-based validation. We conclude our paper in section 5.
§ BACKGROUND
Technology plays an important role in tracking and monitoring long-distance transportation. The term used for intelligent and connected devices is called Internet of Things. The Internet of Things (IoT) <cit.> has attracted the attention of both academia and industry. The European Commission has already predicted that by 2030, there will be more than 250 billion devices connected to the Internet <cit.>. IoT has the concept of being connected to people and things anytime, anyplace, with anything and anyone, ideally using any path/network and any service <cit.>.
The concept of offering Everything-as-a-Service (XaaS) <cit.> is a category of models introduced with cloud computing.
Vehicular systems with agent-based modeling have different applications of tracking and autonomy <cit.>, however, this case relates to tracking and management.
Agent-based modeling (ABM) is used to model systems that are comprised of individual, autonomous, interacting “agents”.
ABM thus follows a bottom-up approach to understanding real-world systems <cit.>.
A lot of work has been done on the tracking and monitoring of long-distance vehicles using real-time and passive systems. In <cit.>, a low-cost vehicle monitoring system is presented; the work notes that the shipping industry developed many of the first tracking and monitoring systems to determine where each vehicle was at any time. From the modeling perspective, many models of long-distance vehicle transport have been proposed. Recently, several agent-based freight transport analysis models have also been suggested, e.g., INTERLOG <cit.> and TAPAS <cit.>, which belong to the class of micro-level models, where individual entities are represented and the relations between entities are typically studied over time.
In <cit.> a running intermodal transport service was used as a case study. The performance of the inter-modal transport service was compared against a potential road transport service for a set of mode choice variables, including price, transit time, reliability, and flexibility.
§ LRV MONITORING AND SUPPORT MODELING
Agent-based modeling has grown into a vast field, so it is difficult to capture all of a model's characteristics.
Many descriptions of agent-based models presented in the literature are incomplete, which makes it impossible to replicate and re-implement the model. However, replication is key to science, and models should be reproducible.
Moreover, agent-based model descriptions are often a lengthy mixture of factual description and long justifications, discussions, and explanations of many kinds, so a reader must work through a lot of material even when the model itself is quite simple.
A description of an ABM should be easy to understand and yet complete. A well-known way to deal with such problems is standardization: written material is much easier to follow when the information is presented in a standardized way and the order of the text is known. A consistent, effective protocol for agent-based modeling therefore makes models easier to understand and to write.
A root-cause analysis can also provide an effective strategy to understand what to build in models <cit.>.
To bring the benefits of standardization to ABMs, scientists have developed the ODD protocol for describing ABMs <cit.>. “ODD” stands for “Overview, Design Concepts, and Details”: the protocol starts with three elements
that provide an overview of what the model is about and how it
is designed, followed by an element of design concepts that
depict the ABM’s essential characteristics, and it ends with
three elements that provide the details necessary to make the
description complete.
§.§ Agents
After the analysis of the literature and problem domain, we identified agents that are involved in the interaction. Several agents were identified to model are shown in figure <ref> and their interaction is shown in figure <ref>.
* Vehicle (trucks)
* Fuel station
* Rest Area
* Rescue/Medical Van
* Police Van
* Service Area
* Vehicle owner/manager company
* Origin agent (freight sender, shipment agent)
* Terminal agent (freight receiver)
Origin Agents: Origin agents are the agents that are the supplier/sender of goods. These are generated at the beginning of transport by booking with a transport company.
Owner company/Transport company: The owner/transport company is the central agent in this context, as it is responsible for monitoring the vehicle. In case of any issue with the vehicle, or in case of signal loss, the transport company interacts with the police vans present near the last known vehicle position. These agents store all the information about vehicle movements.
Vehicle: Vehicle agents are the actual trucks containing the goods. They interact with all other agents, which helps make the transport successful. We refer to these connected entities as “connected smart objects”. Smart objects interact with each other throughout the journey and enable better decision-making for vehicles and other agents.
All other agents are helping agents in case of emergency or need. Interaction with a help service agent is assumed to be generalized for any kind of service at this point of modeling. As we make our model more concrete we will define the specific help service and its interaction with other agents.
§.§ Behavioral modeling using ODD model
§.§.§ Overview
Purpose: The purpose of the model is to describe the movement of long-distance vehicles, their interactions with the helping agents, and the behavior of the agents over time. Under what circumstances does a vehicle need to contact police services, rescue services, rest areas, fuel stations, or any other agents? By understanding which services a vehicle needs, we can predict the time to reach its destination during any trip. We can efficiently provide information to vehicles in any kind of emergency, as well as road and weather conditions, to avoid breakdowns. By gathering data after applying the model, we can predict why a vehicle takes more time to cover a distance, how often a vehicle needs to interact with other agents, and how these interactions will take place.
Entities, state variables and scales: The model has several kinds of entities, including vehicles (trucks), shipper and receiver agents, help and control services, fuel stations, and rest areas. Each entity has basic attributes that are modeled in order to observe its behavior. To model the entities, we need to model both the roads and the agents. The geographical road network forms the patch areas, which can be represented on a square grid for testing purposes. For a real system, the patches correspond to the locations of agents on the ground; each location is described by two variables, longitude and latitude, and agents are characterized by the position they occupy.
A vehicle has variables such as speed, fuel capacity, reliability, travel time, load carried, travel direction, and vehicle category. A police van has a coverage area in which it can operate and may be free or already engaged with another agent; the same holds for the rescue services. Fuel stations may indicate their fuel level and location. Rest areas are divided by category and by the services they offer.
Process overview and scheduling: There are multiple processes in the model; the basic process is the movement of a vehicle (truck) from one point to another. Interaction with other agents such as police, service areas, ambulances, or any other service forms additional processes. Interaction between vehicles, however, is not modeled, since vehicles do not communicate with one another directly. Several processes are thus formed during vehicle transportation through interaction with other agents. Scheduling is also an important aspect of the interaction processes in case an agent's service is unavailable.
There is a need to schedule the vehicle requests for services if an agent instance is not available temporarily or permanently.
§.§.§ Design concept
Emergence: The concept of emergence is often given connotations of being unexplainable in principle, but with ABMs we focus on just the opposite: can our model system look and behave like the real one? The quantities that emerge from the model are not simply the sum of individual characteristics; for example, between any two given points, vehicles of the same type carrying the same load may take different times. This variability in individual behavior makes properties emerge from the model. The model's primary output value is the trip duration, counted as the time taken from the starting point to the ending point.
TotalTime(actual) = ∑_i=0^n-1Δ T_i,i+1
where the total time is calculated as the sum of the observed travel times between consecutive checkpoints, up to the n-th checkpoint. A number of checkpoints is defined along the trip for continuous monitoring of the vehicle's timing. The actual/observed time between two checkpoints is defined as
Δ T(actual) =
T_cp[j] - T_cp[i] if i < j
undefined otherwise
where T_cp[j] is the time observed at the current checkpoint and T_cp[i] is the time observed at the previous checkpoint. This observation is compared with the average time between the two checkpoints, and a number of parameters are updated depending on the total time T and the segment time Δ T consumed by the vehicle. If a vehicle is behind schedule, its route plan is updated, for example by taking shorter rest periods or increasing the average speed within acceptable limits.
Important secondary values are reliability improvement, threats
faced and breakdown.
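The checkpoint bookkeeping described above can be illustrated with a short Python sketch. It is only a minimal example, not part of the proposed system; the checkpoint names, the expected segment times, and the 10% delay tolerance are hypothetical values chosen for illustration.

# Minimal sketch of checkpoint-based trip monitoring; timestamps are hours since departure.
checkpoint_times = {"CP0": 0.0, "CP1": 5.2, "CP2": 11.0, "CP3": 18.4}            # observed T_cp[j]
expected_segment = {("CP0", "CP1"): 5.0, ("CP1", "CP2"): 5.5, ("CP2", "CP3"): 6.0}

def segment_time(t_prev, t_curr):
    # Delta T(actual) between two consecutive checkpoints; undefined if visited out of order.
    if t_prev >= t_curr:
        raise ValueError("checkpoints must be visited in order")
    return t_curr - t_prev

total_actual = 0.0
for (cp_i, cp_j), expected in expected_segment.items():
    dt = segment_time(checkpoint_times[cp_i], checkpoint_times[cp_j])
    total_actual += dt
    if dt > 1.1 * expected:   # assumed 10% tolerance before flagging a delay
        print(f"{cp_i}->{cp_j}: {dt:.1f} h observed vs {expected:.1f} h expected -> behind schedule")
print(f"TotalTime(actual) = {total_actual:.1f} h")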
Observation: An ABM can produce many kinds of dynamics <cit.>, and what we can learn depends on what we observe from it. In our model we observe each entity to know what it is doing at any time, and these observations form the results from which the outcome of a specific process is predicted.
Adaptive behavior: The agents in this model adapt themselves during the trip. For example, if the weather forecast is bad for a specific area, the agent may rest early and take a longer trip later when the situation is better for traveling. Adaptive behavior is most needed in agent-based modeling and model entities must adapt to the best available solution at any given time.
Sensing: Sensing is an integral part of agent-based models <cit.>. Vehicle agents, with the help of an onboard unit, sense their current location and the time to reach their next checkpoint. It is assumed that the onboard unit calculates the time needed to reach the next checkpoint.
Certain help services offer their services to vehicles during the
trip to get better observations. By accessing the location of the vehicle, the location of the next checkpoint, and the average speed for the current area the time to reach the next checkpoint is calculated.
Time-to-reach-checkpoint = D_v[j]/AS[i,j]
where D_v[j] is the distance between the vehicle's current position v and checkpoint j, and AS[i,j] is the average speed of the vehicle between checkpoint i and checkpoint j.
Prediction: Prediction is fundamental to decision-making and plays an important role in the evolution of our model. It cannot be done by the vehicle itself; rather, the data are analyzed over time by the management company to better predict tours on a specific route. We can predict why a vehicle takes extra time in an area, and we can predict breakdown behavior by observing a vehicle's and an area's breakdown history. With this kind of knowledge, we can make better decisions and predictions. Let s denote a sensor-provided service; then the overall choice set is
S_s = sup{ s_1, s_2, s_3, …, s_n }
the nearest-criteria choice is
N_s = Nr(s_1, s_2, s_3, …, s_n)
and the best choice is
B_s = Aggr( S_s ⋂ N_s )
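To make these selection criteria concrete, the following Python sketch picks a help service by intersecting the set of currently available services with the set of nearest ones. The service records and the 10 km distance threshold are hypothetical examples, not values prescribed by the model.

# Illustrative help-service selection: intersect available services with nearest ones.
services = [
    {"id": "van_1", "distance_km": 12.0, "available": True},
    {"id": "van_2", "distance_km": 4.5,  "available": False},
    {"id": "van_3", "distance_km": 7.0,  "available": True},
]
available = {s["id"] for s in services if s["available"]}             # S_s: services able to respond
nearest   = {s["id"] for s in services if s["distance_km"] <= 10.0}   # N_s: assumed 10 km radius
candidates = available & nearest                                      # aggregation of both criteria
if candidates:
    best = min((s for s in services if s["id"] in candidates), key=lambda s: s["distance_km"])
    print("dispatch", best["id"])
else:
    print("no suitable service; escalate to the shipment manager")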
Interaction: This model includes interaction with other agents like police vans, ambulances, and service areas in case of emergency. The vehicle must transmit a signal to the management company, police station, service area, or medical help in case of vehicle damage, some emergency, or threat. The vehicle must be contacted by the owner's company for further investigation. The other agents like police vans and ambulances must also be notified about the nature of the incident.
Stochasticity: Variability in the movement of the vehicle is too complex to represent. We may not be sure why a vehicle is taking longer time than expected in a certain area. The movement of agents cannot be the same for each trip within a specific area. This variability is represented by the reliability of the moving vehicle. The higher the reliability the higher the chances that the vehicle moves as per expectations through that area.
§.§.§ Model Details
Initialization: For initialization of the model we need to initialize the landscape upon which the vehicle has to travel. The initial population can be set in any range. We can increase or decrease the population. Initial parameters like vehicle reliability and service availability are set at the start.
Input: The model does not need any time series data.
Sub-models: The model is divided into further sub-models. A vehicle may use alternative paths during travel, and the possibility that a required service is not available when needed also forms a sub-model that needs to be addressed.
Police Service Model: We break down the model into a set of sub-models, the interaction of a vehicle with the police vans forms a sub-model. Police vans are the agents that provide help services in case of danger of theft or accident during the trip. Each police van agent can communicate with vehicles within their predefined area and mark their service availability or unavailability.
Figure <ref> shows the simple service-provision model for the interaction between a police van and vehicle agents. Police vans are agents placed along long-distance roads to help vehicles in case of emergency, and each van is limited to the specific area assigned to it. Vehicles generate requests for help in an area. If a van has no pending request to process, it marks itself as available to offer its services. If the van is already engaged with another assignment, it checks whether the new request lies on the way to the request it is currently handling, in which case it can serve both. If the new request is not in the van's current direction, the van remains marked as engaged and the request is transferred to other vans. When the provision of service ends, the van marks itself available again.
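The service-provision logic of this sub-model can be sketched as simple state handling on the van agents, as in the following illustrative Python code. The class and function names, the road-segment coverage, and the queuing of unserved requests are invented for the example and do not correspond to an actual implementation.

# Schematic dispatch logic for the police-van sub-model.
class PoliceVan:
    def __init__(self, van_id, coverage_km, position_km):
        self.van_id = van_id
        self.coverage_km = coverage_km    # (start_km, end_km) of the assigned road segment
        self.position_km = position_km
        self.engaged = False              # False -> marked available for service

    def covers(self, request_km):
        lo, hi = self.coverage_km
        return lo <= request_km <= hi

def dispatch(request_km, vans, pending_requests):
    # Assign the nearest free van that covers the request, otherwise queue the request.
    candidates = [v for v in vans if v.covers(request_km) and not v.engaged]
    if not candidates:
        pending_requests.append(request_km)   # all vans engaged: schedule for later handling
        return None
    best = min(candidates, key=lambda v: abs(v.position_km - request_km))
    best.engaged = True                       # the van marks itself engaged while serving
    return best

vans = [PoliceVan("van_A", (0, 150), 40), PoliceVan("van_B", (100, 300), 220)]
assigned = dispatch(130, vans, pending_requests=[])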
Fuel Station: Vehicle agents can interact with the fuel stations on the road during the trip. Fuel stations can mark the availability of fuel to vehicles as well as prices and services available at stations.
Vehicles calculate the distance to reach the fuel stations and, based on the current fuel level, the driver can assess whether to choose a station or travel on to the next station if its offered benefits suit better.
Service Areas/Medical Services: Vehicle agents can interact with the service areas on the road during the trip in case of any fault in the vehicle. Agents may also seek medical help from the provided medical facilities on the road during the trip. Service area or medical services agents ask to offer help on-station or off-station depending upon the severity of the fault in a vehicle or need of medical assistance respectively.
Shipment manager: Interaction with the shipment manager is very important in our model. In case of any service needed or unavailability of service, the vehicle interacts with the shipment manager to seek help. Shipment managers can also interact with the help service agent depending on the location of the vehicle and the type of request.
Shipment managers can communicate with the help services so that they can better predict the actions needed to be taken by vehicle for on-time delivery.
Help Services: Road conditions, weather situations, and traffic warnings play important roles in the successful completion of a trip. Sensing and reading these signals from the provided sensors is very important for correctly predicting the action to be taken.
Monitoring: For monitoring purposes, we use onboard units that must be installed on the vehicles to gather all the data about vehicle movements.
Use of the Internet of Things: This model uses the recent trend of connected things to automate this process <cit.>. The Internet of Things is changing the way things are done, so we present a scenario of long-distance vehicle transport <cit.>. In this scenario, a vehicle has to transport freight over a long distance (say 2000 km). The vehicle starts from the origin agent; by starting we mean that the transport agent has information about the vehicle accumulated over time, so that it can predict the expected time to reach the destination. The expected arrival time is communicated to the vehicle and the terminal agent. As the vehicle moves along, the expected arrival time is updated at every checkpoint/stop and stored on the central server as travel history for future decisions. Weather and road conditions are communicated to the vehicle from time to time. If the signal from the vehicle is lost, an alarm is generated at the transport agent's end and at the police van nearest to the last received signal, so that the police vans are directed to investigate. In case of a medical emergency, the vehicle can generate an alarm to learn about the medical help available, and ambulance services may be directed to the vehicle. Vehicles may also obtain knowledge of upcoming rest points, service areas, and fuel stations to get a better view of their travel. The vehicle history is maintained during the whole journey; proper analysis of these data can reveal abnormalities and help avoid bad encounters.
We may be able to know why a particular vehicle takes more time in an area while other vehicles don’t. why does one vehicle have more breakdowns than others? What is the vehicle's average time to reach
destinations? With this kind of knowledge, we can make better
decision-making and predictions.
Identifying such questions and digging deep to extract relevant answers is an essential technique <cit.> which we may explore in the future in this context.
Protocols: When taking everything as a service, interoperability is one of the major challenges in achieving the vision of the Internet of Things. The Semantic Gateway as Service provides a mechanism for popular IoT application protocols to co-exist in a single gateway system, allowing many Internet of Things devices to connect with other web services. A study of the web services of the different systems is also needed.
§ SCENARIO
Let us consider the scenario of the China–Pakistan Economic Corridor with a length of 2,442 kilometers. After the completion of this corridor, China will use it for trading rather than the South China Sea route. If a loaded truck moves from Gawadar to Kashgar at an average speed of 50 km/hour non-stop, it will take 48.84 hours, i.e., 2.04 days. It is impossible for a human to remain in a vehicle for such a long time.
Normally, a truck driver requires 3 meals a day and some refreshment after every 3 to 4 hours. The truck will also require refueling after a certain distance, and we assume this happens 5 times on this route.
Now we calculate the time to reach the destination, assuming each meal break takes at least 30 minutes, each refreshment break 15 minutes, and each fueling stop 15 minutes:
TotalTime = 2.04 × (3 × 0.5) + 2.04 × (6 × 0.25) + 5 × 0.25 + 48.84
TotalTime = 56.21 hours
TotalTime = 2.34 days
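The arithmetic above can be reproduced with a few lines of Python; the break durations and counts are the assumed values stated in the text.

# Worked check of the travel-time estimate (values as assumed in the text).
distance_km, avg_speed_kmh = 2442, 50
driving_h = distance_km / avg_speed_kmh              # 48.84 h of pure driving
driving_days = driving_h / 24                        # about 2.04 days
meal_h        = driving_days * 3 * 0.5               # 3 meals per day, 30 min each
refreshment_h = driving_days * 6 * 0.25              # 6 refreshment stops per day, 15 min each
fuel_h        = 5 * 0.25                             # 5 refuelling stops, 15 min each
total_h = driving_h + meal_h + refreshment_h + fuel_h
print(f"{total_h:.2f} hours = {total_h / 24:.2f} days")   # about 56.2 hours, roughly 2.34 days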
After the model calculates this time, the company owning the fleet and the end customer know when to expect the truck to arrive. Secondly, the truck driver may face problems during the trip. In case of any problem, he informs the nearest police van directly, given that all the services are interconnected. If the nearest police van is busy, the system decides whether to convey the message to the second-nearest van or to wait. This whole scenario is captured by the model.
Also, in case of an accident, an ambulance should be informed, and the most suitable ambulance should be assigned the duty.
In case of any mechanical problem, a mobile workshop should be informed, with the best workshop again selected based on the criteria. The driver should also be informed about the distance to the next filling station and should be able to check the next restaurant; the restaurant's deals and rates, together with its distance from the current position, should be visible to the driver.
§ CONCLUSION
Long-distance transport has both positive and negative effects on our economy and society, and researchers have found its monitoring and modeling to be a very interesting field. We have proposed a model that uses the agent-based modeling approach to describe this system as a multi-agent system. We reviewed background material on how agent-based modeling works; the literature shows that agent-based modeling approaches have recently been used to model transport systems. We propose our model and map it to the ABM protocol of Overview, Design concepts, and Details (ODD). However, a proper implementation of the model still needs to be done. We propose the use of the everything-as-a-service concept, more precisely sensing-as-a-service, where every agent provides services to others and uses the services of others. Service interoperability protocols have been studied; for connecting Internet of Things devices with web services, the Semantic Gateway as Service architecture has been found useful. The study of the most suitable protocol is important work to focus on in the future.
|
http://arxiv.org/abs/2409.02583v1 | 20240904100610 | Self-induced Floquet magnons in magnetic vortices | [
"Christopher Heins",
"Lukas Körber",
"Joo-Von Kim",
"Thibaut Devolder",
"Johan H. Mentink",
"Attila Kákay",
"Jürgen Fassbender",
"Katrin Schultheiss",
"Helmut Schultheiss"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
Fakultät Physik, Technische Universität Dresden, D-01062 Dresden, Germany
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
Fakultät Physik, Technische Universität Dresden, D-01062 Dresden, Germany
Radboud University, Institute of Molecules and Materials, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
Centre de Nanosciences et de Nanotechnologies, CNRS, Université Paris-Saclay, 91120 Palaiseau, France
Centre de Nanosciences et de Nanotechnologies, CNRS, Université Paris-Saclay, 91120 Palaiseau, France
Radboud University, Institute of Molecules and Materials, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
Fakultät Physik, Technische Universität Dresden, D-01062 Dresden, Germany
[email protected]
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
[email protected]
Helmholtz-Zentrum Dresden–Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, D-01328 Dresden, Germany
§ ABSTRACT
Driving condensed matter systems with periodic electromagnetic fields can result in exotic states not found in equilibrium. Termed Floquet engineering, such periodic driving applied to electronic systems can tailor quantum effects to induce topological band structures and control spin interactions. However, Floquet engineering of magnon band structures in magnetic systems has proven challenging so far. Here, we present a class of Floquet states in a magnetic vortex that arise from nonlinear interactions between the vortex core and microwave magnons. Floquet bands emerge through the periodic oscillation of the core, which can be initiated by either driving the core directly or pumping azimuthal magnon modes. For the latter, the azimuthal modes induce core gyration through nonlinear interactions, which in turn renormalizes the magnon band structure. This represents a self-induced mechanism for Floquet band engineering and offers new avenues to study and control nonlinear magnon dynamics.
Self-induced Floquet magnons in magnetic vortices
H. Schultheiss
September 9, 2024
=================================================
The electronic band structure of a crystal is characterized by Bloch states, which reflect the discrete translational symmetry of the underlying periodic potential in space. For periodic driving in time, an analogous phenomenon called Floquet states can arise. While Bloch states are shifted in momentum space, Floquet states are shifted in energy by multiples of the drive frequency, which expands the range of possible behavior and properties of condensed matter <cit.>. Recently, periodic drive using ultrafast laser pulses has been used to induce topological Floquet states <cit.>, Floquet phase transitions <cit.>, modulations of optical nonlinearity <cit.>, novel states in Josephson junctions <cit.>, and to perform band engineering in black phosphorus <cit.>. Similarly, Floquet states also enable the dynamical control of spin exchange interactions <cit.>, which suggests the possibility of inducing novel features in the collective excitations of magnetically ordered systems, such as magnons.
While Floquet engineering in magnetic systems has been studied theoretically in different contexts <cit.>, experimental evidence of magnetic Floquet modes remains scarce. The main difficulty in using laser illumination, for example, to modulate intrinsic material parameters such as exchange and anisotropy is that strong dissipation in the electron and phonon systems occurs much faster than the characteristic time scale of coherent excitations, such as magnons in the microwave regime. Here, we present an approach for magnetic materials that does not involve modulating material constants directly, but instead harnesses distinct internal modes that act as the periodic drive. Specifically, we show that the sub-GHz gyrotropic eigenmode of a magnetic vortex <cit.>, as sketched in Fig. <ref>, can induce Floquet states through nonlinear coupling. These states are distinct from the GHz-range magnon modes about a static vortex at equilibrium. Moreover, the sole excitation of GHz magnons by an external magnetic field drives the vortex core into steady state gyration through this nonlinear coupling, which in turn renormalizes the magnon band structure. We term this self-induced Floquet band engineering.
§ GROUND-STATE MAGNONS OF A MAGNETIC VORTEX STATE
Ferromagnetic disks with certain aspect ratios host magnetic vortices as ground states <cit.>, as shown in Fig. fig:geometrya. Vortices possess two distinct classes of dynamical eigenmodes, gyrotropic and geometrically-quantized magnon modes. Gyrotropic modes involve the gyration of the vortex core around its equilibrium position at the disk center <cit.> [Fig. fig:geometryb]. The frequency of the fundamental gyration mode is in the MHz range and proportional to the geometric aspect ratio f_g∼ L/D to lowest order, where D and L are the diameter and thickness of the disk, respectively <cit.>. For quantized magnon excitations, the core remains quasi-static in the center and the magnetic moments in the skirt of the vortex precess collectively <cit.>. In thin disks, these quantized modes are in the GHz range and are indexed by two integers (n,m), where the radial index n denotes the number of nodal lines along the radial direction of the disk, while the azimuthal index m counts the number of periods along the angular direction. Figure fig:geometryc depicts the fundamental mode (0, 0). The frequencies of both classes of modes strongly depend on the material parameters and disk dimensions.
We studied vortex modes in ferromagnetic Ni_81Fe_19 disks patterned on top of a 2-μm-wide central signal line of an on-chip coplanar waveguide. Microwave currents flowing through this waveguide generate oscillating in-plane magnetic fields, which due to their symmetry couple directly only to either the vortex gyration or to magnon modes with azimuthal mode numbers m=± 1, depending on the applied frequency, as shown in Fig. fig:modesa. For a 2-μm-diameter disk, a strong resonant response can be observed for a microwave field at f_nm=6.2 GHz, which is visible in the experimental spectra obtained with Brillouin light scattering (BLS) microscopy (for a microwave power of -5 dBm) and micromagnetic simulations (for an excitation amplitude of 0.25). Fig. fig:modesb shows the simulated spatial profile of the mode excited at f_nm=6.2 GHz, which confirms that the azimuthal mode (0, 1) couples effectively to the microwave drive. Simulated profiles for modes with n=0 and different m=0, -1, ±2, ±3 are also shown for reference, but these do not appear in the spectral response in Fig. fig:modesa.
§ FLOQUET MAGNONS OF A MAGNETIC VORTEX STATE
To probe the dynamics far from equilibrium, a second microwave signal with f_g=200 MHz (at -5 dBm), close to the fundamental gyration frequency of the 2-μm-wide disk, is applied in addition to the first microwave signal with f_nm=6.2 GHz that excites the (0,1) mode. In both the experimental and simulated spectra, a frequency comb appears around the initially excited azimuthal mode (f_nm), with the spacing between the sideband peaks given by the gyration frequency f_g. In the simulated spectra, the gyration along with its harmonics is visible in the sub-GHz range, but these are below the instrumental limit of our experiment. Fig. fig:modesd shows the simulated spatial profiles of the modes that constitute the frequency comb. Neighboring modes in the comb are not only shifted in frequency by ± f_g, but their azimuthal index also varies by an increment of Δ m = ± 1. Importantly, the magnon modes about the gyrating vortex exhibit qualitatively different profiles compared to their counterparts in the static case [Fig. fig:modesb], indicating a fundamental change in character resulting from the periodic vortex motion.
Previous observations of magnon frequency combs have been attributed to resonant three- or four-magnon scattering involving regular modes of the system <cit.>, including off-resonant scattering within the linewidths of the existing modes<cit.>, or scattering with other textures such as skyrmions <cit.> or domain walls <cit.>. The modes we observe within the frequency comb are not part of the regular magnon spectrum. This is confirmed in micromagnetic simulations of the Langevin dynamics of the magnetization in which thermal fields populate the magnon modes. In the absence of microwave drive, we recover the regular spectrum of vortex eigenmodes corresponding to a static core, as shown in Fig. fig:modese for the four lowest radial indices, n=0 to 3. Higher-order azimuthal modes in this configuration are typically degenerate, while for small m (± 1, ± 2 for the disk dimensions studied) the magnon modes are hybridized with the gyrotropic mode and exhibit a sizeable frequency difference between opposite azimuthal numbers <cit.>. We can compare this regular magnon dispersion to the results obtained for a gyrating vortex, by overlaying the modes identified in the frequency comb as hollow blue dots in Fig. fig:modese. It is clear that several modes in the frequency comb do not coincide with the regular magnon dispersion.
The frequency comb appears when the disk is excited with two frequencies f_nm and f_g simultaneously. The additional low-amplitude, low-frequency drive at f_g leads to a periodic modulation of the ground state which results in the generation of Floquet states. This behavior is reproduced in micromagnetics simulations when a rotating in-plane magnetic field, whose frequency matches the gyrotropic mode frequency f_g, is included in addition to the thermal fluctuations. As Fig. fig:modesf shows, the resulting spectra exhibit a frequency comb related to the Floquet magnon bands induced by the core gyration. For larger values of m, these Floquet bands resemble the regular magnon dispersion but shifted by ± f_g and ± m=1. For smaller values of m, however, the bands are much more complex, differing strongly from the regular magnon dispersion with band crossings and avoided level crossings. Furthermore, Floquet magnons of opposite azimuthal mode indices at large m exhibit a larger frequency difference compared to regular modes.
§ FLOQUET THEORY
The qualitative change to the magnon spectrum and the emergence of additional bands can be understood within Floquet theory of a many-particle picture of vortex modes that incorporates magnon-magnon interactions. Consider the following Hamiltonian describing vortex gyration and quantized magnon modes (with ħ = 1),
Ĥ = ΩN̂_σ + ∑_nmω_nmn̂_nm + Ĥ_int,
with Ω = 2π f_g and N̂_σ=Â^†_σÂ_σ being the frequency and occupation number of the gyrotropic mode, σ=± 1 the gyration sense, and ω_nm and n̂_nm=â^†_nmâ_nm the respective quantities of the regular magnon modes. The operators Â_σ^† (Â_σ) and â_i^† (â_i) denote the bosonic creation (annihilation) operators of the modes and Ĥ_int the interaction between the vortex gyration and the magnon modes. Moving into the Dirac picture with respect to the gyromode Â_σ by transforming Â→exp(-iΩ t) allows to drop the term ΩN̂. In this picture, the terms of the interaction Hamiltonian Ĥ_int = Ĥ^(1) + Ĥ^(2) + ..., which are lowest in order of regular magnon modes, are given as
Ĥ^(1) = e^iΩ t∑_n U_nσÂ^†_σâ_nσ + h.c.
Ĥ^(2) = e^iΩ t∑_nn^' m V_nn^' mσÂ^†_σâ_n'-m+σâ_nm + h.c.
and describe two-particle and three-particle scattering as well as their time-reversed processes (given by the symbol "h.c." denoting the Hermitian conjugate of the preceding term), as sketched in Fig. <ref>. The parameters U_nσ and V_nn^'σ describe the coupling of the regular magnon modes â_i to a gyrating vortex core (represented by the gyration mode Â_σ), which is conceptually similar to the magnon scattering on a traveling magnetic domain wall. These parameters can be found qualitatively in a Lagrangian collective variables approach assuming a constant gyration radius.
Under steady-state gyration of the vortex core, the system described by Ĥ(t) becomes periodic in time, which allows us to apply the Floquet theorem. This states that the spectrum in a time-periodic system can be obtained from the Floquet Hamiltonian Ĥ_F, which enters in the total time-evolution operator,
Û(t_2,t_1) = e^-iK̂(t_2)e^-iĤ_F(t_1-t_2)e^iK̂(t_1).
Here, K̂(t) = K̂(t+T_g) is the kick operator which is time-periodic with period T_g=2π/Ω, see, e.g., Refs. . Consequently, the Floquet spectrum of the system is only defined up to a multiple of the gyration frequency Ω and, therefore, can be indexed with an additional mode index λ.
For the present model K̂(t) and Ĥ_F can be found analytically, resulting in the Floquet spectrum
ω_nmλ = ω_nm^' + ω_0 + λΩ with λ = 0, ± 1, ± 2, ...
with ω_0 being a constant frequency shift due to the slight change in the ground state energy with respect to the vortex with a static core.
Importantly, the frequencies ω_nm^'≠ω_nm of the new modes do not coincide with the frequencies of the original magnon modes ω_nm but include non-perturbative corrections due to magnon-magnon interactions. This model explains the appearance of additional modes in the magnon spectrum of a gyrating vortex seen in Fig. fig:modesf. Moreover, magnon-magnon interactions with the gyration result in the strong deviations of the Floquet branches from being mere Ω-shifted copies of the original dispersion. This becomes apparent by overlaying the Floquet spectrum over the regular ground-state magnon dispersion from Fig. fig:modese, where large deviations can already be seen in the zeroth-order branches λ = 0. Such renormalizations are a characteristic signature of Floquet systems <cit.> and cannot be accounted for by simple frequency multiplication.
Consider the coupling between the core gyration and magnon modes within the usual particle picture, as illustrated in Fig. <ref>. The gyration involves selection rules for the azimuthal mode index, m, by imposing a difference of ±1 in the scattered mode indices. This is reminiscent of Umklapp processes in crystals, where momentum conservation is satisfied up to the reciprocal space vector, G, i.e. k = k' + G, with the initial and scattered wave vectors k' and k, respectively. Here, we observe something analogous for the azimuthal mode number, m = m' + σ, which indicates that the gyration plays the role of a reciprocal space vector, albeit limited to values of ±1, and shifts the Floquet bands not only in energy but also in the azimuthal mode index.
§ TRANSIENT DYNAMICS OF FLOQUET MAGNONS
Like most studies on Floquet engineering to date, the theoretical framework above describes how new magnon bands are generated under periodic drive in the steady state, i.e., for a constant gyration radius R_g. However, the transient dynamics related to these bands remains largely unexplored. Figure <ref> illustrates how these bands emerge from an initial static state in a 2-μm-wide Ni_81Fe_19 disk using micromagnetic simulations. We used a small damping constant of α=0.0001 (compared to the more realistic value of α=0.007) in order to obtain narrower spectral lines and longer transients toward steady-state gyration. After an initial thermalization step in which the Langevin dynamics is simulated over 230, a rotating in-plane magnetic field at frequency f_g=200 MHz is applied to excite the vortex gyration. The core gyration radius as a function of time is shown in Fig. fig:transienta, while the magnon spectra obtained over several successive intervals are shown in Figs. fig:transientb-g.
Before the onset of gyration (t<0, Fig. fig:transientb), the vortex core undergoes low-amplitude Brownian motion close to the disk center, with a thermal magnon spectrum corresponding to the case shown in Fig fig:modese. As the rotating field is switched on and the gyration radius R_g increases, the Floquet bands emerge progressively as witnessed in the different snapshots of the dispersion relations in Fig. fig:transientb to fig:transientg. The gradual appearance of the Floquet bands is correlated with the growth in the gyration orbit, which further underscores the key role of core gyration, rather than the external driving field.
§ SELF-INDUCED FLOQUET MAGNONS
We found that these Floquet states can also be self-induced by exciting magnon modes directly, using microwave frequencies an order of magnitude above the gyration frequency. We demonstrate this experimentally in a 500-nm-diameter, 50-nm-thick disk, for which the fundamental gyration frequency is around f_g ≈ 500 MHz. This value is above the instrumental limit of our micro-focus BLS measurements, which allows us to probe the gyration and magnon modes simultaneously. Figure fig:spontaneousa shows the measured BLS spectra as a function of the microwave power when a single frequency of f_nm=10.2 GHz is applied. At low power, the microwave field resonantly excites a single azimuthal mode, as shown in Fig. fig:spontaneousa. At a threshold power of about 5 mW, a frequency comb around this azimuthal mode appears, along with a strong spectral response in the sub-GHz regime associated with core gyration. Above this threshold, increases in the microwave power lead to a reduction in the gyration frequency, a phenomenon known as nonlinear redshift <cit.>. This redshift is imprinted in the frequency spacing of the Floquet states [red shaded areas in Fig. fig:spontaneousa]. Additionally, increasing powers also result in shorter delays (Δ t) before the onset of gyration, as observed in the time-resolved BLS spectra in Figs. fig:spontaneousb,c. This power dependence is a hallmark of nonlinear mode coupling, which in the present case involves the parametric excitation of the gyrotropic mode by an azimuthal magnon mode.
While the connection between frequency combs and core gyration has been discussed in another context <cit.>, our experiments establish a clear connection to Floquet physics that has previously been overlooked. The Floquet mechanism here also differs crucially from previous studies in that the primary source of modulation (core gyration) is not excited directly, but rather through the nonlinear coupling to other modes which are populated by the external drive. This self-induced mechanism is a novel feature of the vortex state.
§ CONCLUSION AND OUTLOOK
We have demonstrated a new approach to Floquet engineering in which the periodic drive of an internal mode, namely vortex core gyration, induces Floquet magnon bands through the nonlinear coupling with this mode. This differs inherently from more traditional approaches in which laser or microwave illumination is used to periodically modulate material constants such as exchange or anisotropy. Another difference lies in the fact that the Floquet bands can be induced either by driving the core gyration directly, with an oscillatory external magnetic field in the sub-GHz range, or indirectly, through the excitation of a higher-order eigenmode in the GHz range, which couples to the gyration through nonlinear interactions. This suggests new avenues to explore Floquet engineering with other topological magnetic solitons, such as domain walls and skyrmions, which also possess low-frequency Goldstone-like modes that couple to high-frequency eigenmodes. We anticipate that this paradigm may find applications beyond magnetism, such as in ferroelectric or superconducting systems, which also host solitonic objects like vortices.
§ ACKNOWLEDGMENTS
The authors thank A. Manchon for pointing us in the direction of Floquet physics, B. Scheumann for depositing the Ni_81Fe_19 and Au films, V. Iurchuk for contributing to the micromagnetic simulations in the early phase of the project, and A. Hoffmann for fruitful discussions.
§.§ Funding
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the programs KA 5069/3-1 and GL 1041/1-1, and the EU Research and Innovation Programme Horizon Europe under grant agreement no. 101070290 (NIMFEIA). Support by the Nanofabrication Facilities Rossendorf (NanoFaRo) at the IBC is gratefully acknowledged.
§.§ Author contributions
H.S., K.S. and C.H. conceived the experiments. K.S. fabricated the sample. C.H. carried out the experiments. C.H. and J.-V.K. performed the micromagnetic simulations. J.M., L.K. and J.-V.K. developed the theory. L.K., C.H., J.-V.K, K.S. and H.S. visualized the results. J.-V.K., T.D., J.M., A.K., K.S., J.F., and H.S. acquired funding. All authors analyzed the data and discussed the results. L.K., J.-V.K., J.M., A.K., and K.S. wrote the original draft of the paper. All authors reviewed and edited the paper.
§.§ Competing interest
The authors have no conflicts to disclose.
§.§ Data and materials availability
The data that support the findings of this study are openly available in RODARE. We used scientific color maps to prevent visual distortion of the data and exclusion of readers with color-vision deficiencies <cit.>. Specifically, the color maps used include oslo (https://www.fabiocrameri.ch/colourmaps/), cmocean.map (https://matplotlib.org/cmocean/) and guppy (https://cmasher.readthedocs.io/index.html <cit.>).
§ SUPPLEMENTARY MATERIALS
Materials and Methods
Figure S1
References [51-58]
§ SUPPLEMENTARY MATERIALS
§.§ Sample preparation
The experiments discussed in this work were performed on the sample shown in Fig. <ref> that was fabricated in a two-step procedure. We started with an undoped, high-resistance silicon substrate. In a first step, we patterned the coplanar waveguide using a double-layer resist of methyl methacrylate (EL11) and poly(methyl methacrylate) (950 PMMA-A2), electron beam lithography, electron beam evaporation of a Cr(5)/Au(65) layer and subsequent lift-off in an acetone bath. The central line and ground lines of the coplanar waveguide have widths of 2 and 13.5, respectively, the gap between them is 2.8 wide.
In a second step, the magnetic structures with different diameters (500, 1, and 2) are patterned directly on top of the central signal line of the coplanar waveguide. Therefore, we use a poly(methyl methacrylate) (950 PMMA-A6) resist, electron beam lithography, electron beam evaporation of a Cr(5)/Ni_81Fe_19(50)/Cr(2) layer and subsequent lift-off in an acetone bath.
§.§ Time-resolved Brillouin light scattering microscopy
All experimental measurements were performed at room temperature. The magnon spectra were detected by means of Brillouin light scattering microscopy <cit.>. Therefore, a monochromatic, continuous-wave 532 laser was focused onto the sample surface using a microscope lens with a high numerical aperture, yielding a spatial resolution of about 300. The backscattered light was then directed into a Tandem Fabry-Pérot interferometer <cit.> in order to measure the frequency shift caused by the inelastic scattering of photons and magnons. The detected intensity of the frequency-shifted signal is directly proportional to the magnon intensity at the respective focusing position.
The low-frequency range is challenging to measure using Brillouin light scattering due to the high intensity of the elastically scattered Rayleigh peak, whose flank covers the low-frequency signals. In detail, the low-frequency detection limit of the interferometer is determined by the coarse spacing of the etalon mirrors (distance L_1 in Fig. 1(b) in Ref. ), which defines the free spectral range. The data shown in Fig. <ref> were recorded with a mirror spacing of 23 mm, while the spectra plotted in <ref> were measured with a mirror spacing of 18 mm. Hence the different detection limits in the low-frequency range.
When using microwave pulses to excite magnons, it is possible to measure the temporal evolution of the magnon spectra using a time-of-flight principle. Therefore, we simultaneously monitor the state of the interferometer and the time when each photon is detected with respect to a clock provided by the microwave generator using a time-to-digital converter. In order to acquire enough signal, the pulsed experiment needs to be repeated stroboscopically, covering hundred thousands of repetitions.
During all experiments, the investigated microstructure was imaged using a red LED and a CCD camera. Displacements and drifts of the sample were tracked by an image recognition algorithm and compensated by the sample positioning system.
To account for the different spatial distributions of the magnon modes, the signal was integrated over 3 radial and 4 azimuthal positions across half the 2-wide disk. The signal for the 500-wide disk was obtained at a singular position.
§.§ Micromagnetic simulations
Simulations of the vortex dynamics were performed using the open-source finite-difference micromagnetics code MuMax3 <cit.>, which performs a time integration of the Landau-Lifshitz-Gilbert equation of motion of the magnetization m(r,t),
mt = -γm×(B_eff + b_th) + αm×mt.
Here, m(r,t) = M(r,t)/M_s is a unit vector representing the orientation of the magnetization field M(r,t) with M_s being the saturation magnetization, γ = gμ_B/ħ is the gyromagnetic constant, and α is the dimensionless Gilbert-damping constant. The effective field, B_eff = -δ U/δM, represents a variational derivative of the total magnetic energy U with respect to the magnetization, where U contains contributions from the Zeeman, nearest-neighbor Heisenberg exchange, and dipole-dipole interactions. The term b_th represents a stochastic field with zero mean, ⟨ b_th^i(r,t) ⟩ = 0 and spectral properties satisfying <cit.>
⟨ b_th^i(r,t) b_th^j(r',t') ⟩ = 2α k_B T/γ M_s Vδ_ijδ(r-r') δ(t-t') ,
with amplitudes drawn from a Gaussian distribution. Here, k_B is Boltzmann's constant, T is the temperature, and V denotes the volume of the finite difference cell. This stochastic term models the effect of thermal fluctuations acting on the magnetization dynamics. An adaptive time-step algorithm based on a sixth-order Runge-Kutta-Fehlberg method was used to perform the time integration <cit.>.
We model our 50-nm thick, 2-μm diameter disk using 512 × 512 × 8 finite difference cells with γ=1.86e-11/(Ts), M_s=775/, an exchange constant of A_ex=12/, and α = 0.007 –– the nominal value for this material. Note that smaller values of α were used to highlight different aspects of the Floquet bands, as discussed in the main text.
The dispersion relations shown in Fig. <ref> were computed as follows. For each of the non-driven and driven cases, time integration of the stochastic dynamics with α=0.0007 was performed over an interval of 10 and the out-of-plane magnetization fluctuations corresponding to the different azimuthal index, a_m(t), were recorded. This involved a spatial Fourier decomposition that is computed on-the-fly by projecting out the magnetization m_z(r,t) using the basis functions ψ(r) = e^i m ϕ,
a_m(t) = ∫ dV ψ^*(r) m_z(r,t),
with ϕ representing the angular variable in cylindrical coordinates. The power spectrum for each a_m(t) was then computed using the Welch method, which involves averaging over the power spectra generated from the discrete Fourier transform of half-overlapping 400-ns Hann windows into which the original time series data is sliced. Note that the basis functions chosen are taken to be uniform across the film thickness, which means the power spectra shown only capture symmetric thickness modes. In the driven case, a rotating in-plane magnetic field with a frequency of 200 MHz and an amplitude of 0.05 mT was applied.
The dispersion relations shown in Fig. <ref> were obtained in a similar way with α = 0.0001, except that smaller 30-ns windows were used for the Fourier transform of the time series data in order to produce different snapshots in time of the Floquet bands.
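As an illustration of this post-processing step, the following Python sketch projects m_z onto the azimuthal basis functions and estimates the power spectrum of a_m(t) with the Welch method. It is a schematic reimplementation for illustration only: the projection is a discrete average over cells on a ring rather than the full volume integral, random numbers stand in for the simulated magnetization, and the sampling interval and window length are assumed values.

import numpy as np
from scipy.signal import welch

def azimuthal_amplitude(mz, phi, m):
    # Project m_z(r, t) onto exp(i*m*phi); mz has shape (n_t, n_cells), phi has shape (n_cells,).
    basis = np.exp(-1j * m * phi)        # complex conjugate of psi(r) = exp(i*m*phi)
    return mz @ basis / mz.shape[1]      # a_m(t), one complex value per time step

dt = 25e-12                              # assumed sampling interval of the stored snapshots
n_t, n_cells = 40_000, 512               # assumed record length and number of cells on a ring
rng = np.random.default_rng(0)
mz = rng.standard_normal((n_t, n_cells)) # placeholder for the simulated out-of-plane magnetization
phi = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)

a_m = azimuthal_amplitude(mz, phi, m=1)
freqs, psd = welch(a_m, fs=1.0 / dt, nperseg=8192, noverlap=4096, return_onesided=False)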
§.§ Floquet theory
To describe self-induced Floquet magnon modes in a magnetic vortex, we consider the following Hamiltonian (Eq. (<ref>) of the main text)
Ĥ = ΩN̂_σ + ∑_nmω_nmn̂_nm + Ĥ_int
where Ω_σ is the frequency of the vortex-core oscillation with mode occupation N̂_σ=Â^†_σÂ_σ and gyration sense (circular polarization) σ=±1. ω_nm is the regular magnon dispersion with mode occupation n̂_nm=â^†_nmâ_nm, with â_nm (â^†_nm) a annihilation (creation) operator for magnons with radial index n and azimuthal index m. Ĥ_int describes the coupling between the magnon modes and the vortex gyration. Phenomenologically, this interaction can be derived from the Lagrangian formulation in collective coordinates <cit.>, while here, for simplicity, we only consider a minimal model consistent with a Hamiltonian formulation of nonlinear magnon modes <cit.>. In leading order, this model is determined by terms only involving one and two magnon modes: Ĥ_int = Ĥ^(1) + Ĥ^(2), with
Ĥ^(1) = ∑_n U_nσÂ^†_σâ_nσ + h.c.
Ĥ^(2) = ∑_nn'm V_nn'mσÂ^†_σâ_n'-m+σâ_nm + h.c.
Note that we do not include a summation over σ since, in practice, no superposition of left and right rotating vortex cores arise. U_nσ and V_nn’mσ are the expansion coefficients for the terms linear and quadratic in magnon operators. We are interested in the self-induced changes of the spectrum ω_nm of the magnon modes due to the periodic oscillation of the core gyration. To this end it is convenient to go into the interaction picture with respect to the H_0=ΩN̂_σ. This results in replacing Â^†→Â^†exp(iΩ t), Â→Âexp(-iΩ t), while H_0 drops out. Then we are left with solving:
Ĥ_m = Ĥ_m^(0)+Ĥ_m^(1)+Ĥ_m^(2)
Ĥ_m^(0) = ω_m n̂_m
Ĥ_m^(1) = δ_mσ(U_σe^iΩ tÂ^†_σâ_σ + U^*_σe^-iΩ tÂ_σâ^†_σ)
Ĥ_m^(2) = V_me^iΩ tÂ^†_σâ_-m+σâ_m + V^*_me^-iΩ tÂ_σâ^†_-m+σâ^†_m,
where we dropped the indices n to simplify the notation. Since this Hamiltonian is periodic in time, we can use the Floquet theorem, which states that the spectrum in the presence of driving can be obtained from the Floquet Hamiltonian Ĥ_F that enters the total evolution operator Û(t_2,t_1) of the system as
Û(t_2,t_1) = e^-iK̂(t_2)e^-iĤ_F(t_1-t_2)e^iK̂(t_1),
where K̂(t)=K̂(t+T), is an hermitian operator that is periodic with period T=2π/Ω. Clearly, the Floquet spectrum of the system is then defined only modulo the driving frequency Ω. Interestingly, for the problem at hand we can find Ĥ_F and K̂ analytically. Choosing K̂(t)=-Ωn̂_m t yields the time-independent Floquet Hamiltonian:
Ĥ_F^(0) = 1/2 (ω_m-Ω)n̂_m + 1/2 (ω_-m+σ-Ω)n̂_-m+σ
Ĥ_F^(1) = 1/2 (δ_m,σ+δ_-m+σ,σ)(U_σÂ^†_σâ_σ + U^*_σÂ_σâ^†_σ)
Ĥ_F^(2) = V_mÂ^†_σâ_-m+σâ_m + V^*_mÂ_σâ^†_-m+σâ^†_m
For stroboscopic times t_2=t_1+kT, k an integer, this is equivalent to the interaction picture with the Hamiltonian Ĥ'=Ωn̂_m. To gain qualitative insight, it is sufficient to limit the remaining discussion to the case in which we can treat the steady core gyration as a classical variable, A=⟨Â⟩, which we absorb in the definitions of the coupling strengths U→ AU and V→ AV. The Floquet Hamiltonian can then be diagonalized with a generalized Bogolyubov transformation:
α̂_m = u_m(â_m+λ_m) + v_m(â^†_-m+σ+μ_m^*),
α̂_-m+σ = u_m(â_-m+σ+μ_m) + v_m(â^†_m+λ_m^*),
where λ_m,μ_m and u_m,v_m, with |u_m|^2-|v_m|^2=1, are (generally complex) parameters. Substitution and rearrangement of the terms under the sum yields the Floquet Hamiltonian
Ĥ_Fm = ω^'_mα̂^†_mα̂_m + ω_m^0,
ω^'_m = 1/2(ω_m - ω_-m+σ) + 1/2√((ω_m + ω_-m+σ - 2Ω)^2-16|V_m|^2)
ω^0_m = [ -1/2(ω_m - Ω)|δ_-m+σ,σU_σ|^2 + 1/2(ω_-m+σ-Ω)|δ_m,σU_σ|^2 ] / [ 1/4(ω_m-Ω)(ω_-m+σ-Ω)-|V_m|^2 ] - 1/2√((ω_m + ω_-m+σ-2Ω)^2-16|V_m|^2)
Reinserting the index n, the Floquet spectrum of the system features the modes
ω_nmλ = ω_nm^' + ω_nm^0 + λΩ_σ
with λ=0,±1,±2, ….
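For concreteness, the hybridized mode frequency and its Floquet replicas can be evaluated numerically. The short Python/NumPy sketch below is purely illustrative: every parameter value is a made-up placeholder rather than a quantity taken from the simulations, and the small zero-point shift ω^0_m is omitted.

```python
import numpy as np

# Illustrative evaluation of the Floquet magnon spectrum; all numbers are assumed placeholders.
Omega    = 2 * np.pi * 0.5e9   # vortex gyration frequency (rad/s)
omega_m  = 2 * np.pi * 6.0e9   # bare magnon frequency omega_{nm}
omega_mp = 2 * np.pi * 6.4e9   # partner-mode frequency omega_{n,-m+sigma}
V        = 2 * np.pi * 0.05e9  # two-magnon coupling |V_m|

# Hybridized frequency omega'_m from the diagonalized Floquet Hamiltonian
omega_prime = 0.5 * (omega_m - omega_mp) + 0.5 * np.sqrt(
    (omega_m + omega_mp - 2 * Omega) ** 2 - 16 * V ** 2)

# Floquet replicas omega'_m + lambda * Omega for lambda = -2..2 (omega^0_m neglected here)
replicas_GHz = (omega_prime + np.arange(-2, 3) * Omega) / (2 * np.pi * 1e9)
print(np.round(replicas_GHz, 3))
```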
|
http://arxiv.org/abs/2409.02241v1 | 20240903191401 | What makes a face looks like a hat: Decoupling low-level and high-level Visual Properties with Image Triplets | [
"Maytus Piriyajitakonkij",
"Sirawaj Itthipuripat",
"Ian Ballard",
"Ioannis Pappas"
] | q-bio.NC | [
"q-bio.NC",
"cs.CV"
] |
Decoupling low-level and high-level Visual Properties with Image Triplets
M. Piriyajitakonkij et al.
Department of Computer Science, The University of Manchester, UK Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore Neuroscience Center for Research and Innovation (NX), Learning Institute and Big Data Experience Center (BX), King Mongkut’s University of Technology Thonburi, Thailand Department of Psychology, University of California, Riverside, USA Laboratory of Neuro Imaging, University of Southern California, USA
[1]* Senior authors
[2]† Corresponding author: [email protected]
[3]Code: [https://github.com/maytusp/triplet_search]https://github.com/maytusp/triplet_search
What makes a face looks like a hat: Decoupling low-level and high-level Visual Properties with Image Triplets
Maytus Piriyajitakonkij1,20000-0002-7610-8953 Sirawaj Itthipuripat30000-0001-9302-0964 Ian Ballard†*,40000-0003-1814-3141 Ioannis Pappas*,50000-0002-0168-7014
September 9, 2024
==================================================================================================================================================================
§ ABSTRACT
In visual decision making, high-level features, such as object categories, have a strong influence on choice. However, the impact of low-level features on behavior is less understood partly due to the high correlation between high- and low-level features in the stimuli presented (e.g., objects of the same category are more likely to share low-level features). To disentangle these effects, we propose a method that de-correlates low- and high-level visual properties in a novel set of stimuli. Our method uses two Convolutional Neural Networks (CNNs) as candidate models of the ventral visual stream: the CORnet-S that has high neural predictivity in high-level, “IT-like” responses and the VGG-16 that has high neural predictivity in low-level responses. Triplets (root, image1, image2) of stimuli are parametrized by the level of low- and high-level similarity of images extracted from the different layers. These stimuli are then used in a decision-making task where participants are tasked to choose the most similar-to-the-root image. We found that different networks show differing abilities to predict the effects of low-versus-high-level similarity: while CORnet-S outperforms VGG-16 in explaining human choices based on high-level similarity, VGG-16 outperforms CORnet-S in explaining human choices based on low-level similarity. Using Brain-Score, we observed that the behavioral prediction abilities of different layers of these networks qualitatively corresponded to their ability to explain neural activity at different levels of the visual hierarchy. In summary, our algorithm for stimulus set generation enables the study of how different representations in the visual stream affect high-level cognitive behaviors.
§ INTRODUCTION
Low-level visual properties, such as shape, contour, and texture, are integral to visual decision-making. They act as inputs to high-level visual processing goals like object identification and also directly influence preferences and decisions. For instance, a bakery customer might choose a generally less preferred dessert because of its appealing color or texture. However, our understanding of how these lower-level visual properties impact high-level behavior is incomplete. A better understanding of how lower-level visual information impacts high-level behavior is needed to understand the cognitive and neural bases of visual processing <cit.>.
A key difference between deep neural network models for visual categorization and the human brain’s visual system is how they make decisions based on abstract representations or features. Modern vision models, such as CNNs <cit.>, Transformers <cit.>, and Recurrent Neural Networks (RNNs) <cit.>, base their decisions solely on information from the preceding, highest processing layer. In contrast, there are direct connections in humans from the earliest levels of neuronal visual processing to association cortex that influence behavior <cit.>. Understanding how the brain integrates low-level and high-level information can inspire researchers to build more brain-like computer vision algorithms <cit.>.
The ventral visual stream, the object-identification pathway in the brain, is organized hierarchically and neurons in this pathway show response profiles that are strikingly well-modeled with CNNs trained to perform object identification <cit.>. The Inferior Temporal (IT) cortex is the most high-level visual processing area in the ventral visual stream and contains subregions specialized for detecting faces, body parts, tools, places and other significant visual categories <cit.>. Visual stimuli from different categories elicit dissimilar patterns of activity in the IT cortex <cit.>. Earlier visual areas such as V2 and V4 process lower-level visual features and are less understood in terms of how these early regions bias visual decision-making. This gap exists partly because the specific visual features that elicit distinct or similar neural patterns in V2 and V4 are not well-studied <cit.>. Moreover, objects from the same visual category tend to have similar lower-level visual properties, making it difficult to disentangle their unique contributions to decision-making.
We propose a novel approach to generate sets of visual stimuli that decouple high-level and low-level visual similarity. Our approach leverages computational candidate models of the ventral visual pathway, specifically CNNs, which are among the most advanced models for the ventral visual stream <cit.>. We generate stimulus sets of triplets of images composed of a root image and two response images. We aimed to control the relative high- and low-level similarities between the root image and the two response images. This allows us to decorrelate low- and high-level visual properties in a naturalistic set of images, enabling flexible experimental investigation of the role of each level of visual information on behavior. Moreover, our approach permits the comparison of different model architectures in their ability to explain human choices at different levels of visual processing. Most relevant to the proposed work, CNNs have been previously used to create pairs of images that are parametrized by their similarity in high-level features, and these image similarities were used to predict brain function <cit.>. This approach did not distinguish between high and low-level visual similarity, a key advance of our approach.
We first present our algorithm for stimulus set generation. Second, we collect human behavioral data on our stimulus set. We found that both CORnet-S <cit.> and VGG-16 <cit.> predict human decisions based on high-level visual information, whereas only VGG-16 can account for the influence of low-level visual information on choices. We conclude by comparing these results to BrainScore assessments of these models’ ability to explain neural recording data.
§ METHODS
Our goal is to create stimulus sets of triplets of images, T = (I_root, I_1, I_2). Our framework allows users to manipulate the neural network similarity levels between I_1 and I_2 relative to the root image I_root. One can select the similarity model from state-of-the-art deep neural networks and choose which layer in the neural network to represent a particular brain area. The criterion for layer selection is neural predictivity explained below.
§.§ Neural Predictivity
Neural predictivity measures how well the response X of a computational model (e.g., a layer's response) predicts the neural response y in a brain area (e.g., single-neuron activity in V2) when the model and the brain are presented with the same images. We use these metrics in Brain-Score to compare how our models performed on human data from our study against published neural data. V4 and IT data are from <cit.>, and V1 and V2 data are from <cit.>. IT responses were collected with 2,560 grayscale images divided into eight types of objects (animals, boats, cars, chairs, faces, fruits, planes, and tables). Each type includes eight different objects (for example, the "face" type has eight different faces). The images were created by placing 3D object models on natural backgrounds. V2 responses were collected with 9,000 texture stimuli spanning 15 texture families <cit.>.
§.§ Neural Network Dissimilarity: D
We define the “neural network dissimilarity” for high- and low-level layers between I_1 and I_2 relative to I_root as <ref> and <ref>:
D_high(I_root, I_1, I_2) = C(F_high(I_root), F_high(I_1)) - C(F_high(I_root), F_high(I_2))
D_low(I_root, I_1, I_2) = C(F_low(I_root), F_low(I_1)) - C(F_low(I_root), F_low(I_2))
where F_high and F_low are the high- and low-level layers of the brain model, respectively. C(·,·) is the Pearson product-moment correlation coefficient, measuring linear alignment between the model response to I_root and the model response to I_1 or I_2. If I_1 and I_root are similar in the high-level layer, the correlation between the high-level responses to these images is high. If both correlation terms are high, the dissimilarity D will be low, which means both I_1 and I_2 are very similar to I_root.
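As a concrete illustration, the dissimilarities defined above reduce to differences of Pearson correlations over flattened layer activations. The sketch below is a minimal Python/NumPy rendering of that computation; the random arrays merely stand in for actual VGG-16 or CORnet-S layer responses.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation C(.,.) between two flattened activation vectors."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def dissimilarity(f_root, f_1, f_2):
    """D(I_root, I_1, I_2) = C(root, I_1) - C(root, I_2) for one layer's features."""
    return pearson(f_root, f_1) - pearson(f_root, f_2)

# Stand-ins for layer activations of the root and the two response images
rng = np.random.default_rng(0)
feats_low = rng.normal(size=(3, 50176))    # e.g. a pooling-layer output, flattened
feats_high = rng.normal(size=(3, 4096))    # e.g. a fully connected layer output
d_low = dissimilarity(*feats_low)          # D_low(I_root, I_1, I_2)
d_high = dissimilarity(*feats_high)        # D_high(I_root, I_1, I_2)
```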
§.§ Brain-Model-Guided Stimulus Set
We create a stimulus set corresponding to the high-level and low-level layers of the brain model, <ref> and <ref> respectively. Firstly, we create a triplet container 𝒯(·,·), a function that returns a triplet of stimuli corresponding to given neural network dissimilarity, defined as follows:
𝒯(b_low,b_high) = (I_root, I_1, I_2)
where b_low, b_high∈{1,2,...,N_bin} are container bin indices and indicate dissimilarity levels of the sampled triplet (I_root, I_1, I_2). The triplet (I_root, I_1, I_2) is sampled from M triplets in a container bin b_low, b_high. Each bin contains triplets whose neural network dissimilarities D_low and D_high satisfy the following conditions
D_low = ± b_lowS_low±ϵ ,
|D_high| = b_highS_high±ϵ
where 2ϵ is the size of each bin and S determines the distance between bins. We observe that when D_high is positive, there is a much higher chance that D_low is positive rather than negative, resulting in a high correlation between them. We want to decorrelate D_high and D_low. Therefore, the right-hand side of the D_low condition can be either positive or negative with a 50% chance.
Algorithm <ref> describes how stimuli are selected. We exclude selected image indices (r,i,j) in the bins to make sure each image is used only one time in a triplet container. The study design and methods were approved by and followed the ethical procedures of the University of California, Berkeley Committee for the Protection of Human Subjects.
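A minimal sketch of the bin-assignment step is given below; it only checks the dissimilarity conditions stated above and caps each bin at M triplets, while the 50% sign balancing of D_low and the exclusion of already-used image indices from the full algorithm are omitted for brevity. All names are illustrative.

```python
def bin_index(d, step, eps, n_bin):
    """Return b in {1..n_bin} with | |d| - b*step | <= eps, or None if d fits no bin."""
    b = round(abs(d) / step)
    if 1 <= b <= n_bin and abs(abs(d) - b * step) <= eps:
        return b
    return None

def try_add(containers, triplet, d_low, d_high, s_low, s_high, eps, n_bin, m_per_bin):
    """Place (I_root, I_1, I_2) into container bin (b_low, b_high) if a slot is free."""
    b_low = bin_index(d_low, s_low, eps, n_bin)
    b_high = bin_index(d_high, s_high, eps, n_bin)
    if b_low is None or b_high is None:
        return False
    bucket = containers.setdefault((b_low, b_high), [])
    if len(bucket) >= m_per_bin:
        return False
    bucket.append(triplet)
    return True
```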
§ EXPERIMENT: DISSOCIATING HIGH-LEVEL AND LOW-LEVEL VISUAL INFLUENCES ON CHOICE
We tested whether our stimulus set was able to distinguish behavioral data based on both low-level layers (e.g., V2 area) and high-level layers (e.g., IT area). Participants were tasked to select which image from I_1 and I_2 was more similar to the root image I_root as shown in Fig. <ref>. Image triplets can vary in their relative low and high level similarity scores, with four extreme cases as shown in Fig. <ref>.
Experimental Procedures: Each trial began with the root image I_root for 2,000 ms, and subjects were instructed to respond based on whether the image was indoor or outdoor. This was meant to encourage encoding of the image; these responses were not analyzed. After a 750 ms inter-stimulus interval, a pair of images appeared. The response image location assignments to the left or right side of the screen were randomized across trials. Subjects (n = 17) were instructed to select which image was more similar to the root image. They had 750 ms to respond. If subjects failed to respond to either the root image or the similarity judgment, text was shown instructing subjects to respond more quickly. There were 300 total trials, with a 2,000 ms inter-trial interval. Subjects were instructed that an algorithm determined which image was in fact more similar and that 10 trials would be selected at random and they would be paid $0.50 for each question they answered correctly according to the algorithm. They were given no feedback, so they could not learn what information the algorithm used; rather, the instructions were meant to motivate subjects. Subjects were paid according to which image had the largest high-level similarity to the root image.
CORnet-S: CORnet-S is designed to replicate the primate ventral visual stream. It achieves high ImageNet classification accuracy compared to other models of similar size and incorporates feedback connections to represent more faithfully the architecture of the ventral visual stream. CORnet-S consists of four blocks, each corresponding to a different area in the ventral pathway: V1, V2, V4, and IT.
VGG-16: is a CNN architecture composed of 16 layers. It has been widely used for image classification tasks and serves as a benchmark in the field of deep learning. Its operations from the lowest to the highest layers are as follows: Conv1, Conv2, Pool1, Conv3, Conv4, Pool2, Conv5, Conv6, Conv7, Pool3, Conv8, Conv9, Conv10, Pool4, Conv11, Conv12, Conv13, Pool5, FC1, FC2, and FC3. Conv is a convolutional layer, Pool is a max pooling layer, and FC is a fully connected layer.
Stimulus Set: Our stimulus set is created by using the FC3 and Pool3 layers of VGG-16 model. We call it VGGSet. FC3 is the high-level layer and Pool3 is the low-level layer. The key metric is the relative similarity between the two response images and the root: D_low/high(I_root, I_1, I_2). This will give the relative weight of evidence for selecting the left relative to the right image as being more similar to the root. The correlation between this quantity as calculated from FC3 and Pool3 is 0.13. Our stimulus selection algorithm is effective at reducing the correlation between high- and low-level visual properties. Stimuli come from the subset of the Things dataset <cit.> as it is widely used in neuroscience and has rich behavioral and neural data shared among researchers.
Analysis: We analyzed our data using mixed generalized linear models with random intercepts. We modeled the choice of left versus right image as a binary dependent variable. As independent variables, we included the similarities between the root and the left, relative to right, image derived from both high-level and low-level layers of the model. Including both levels in the same model allows us to test for independent influences of each level of visual information on choice. We also included interaction terms between low-level and high-level similarity. When comparing VGG-16 and CORnet-S, we included model type as a categorical variable as well as its interactions with low-level and high-level similarity.
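The snippet below sketches this analysis on synthetic stand-in data. It uses an ordinary (fixed-effects) logistic regression from statsmodels as a simplified stand-in for the random-intercept mixed model described above; the column names and the simulated effect sizes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per trial.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "d_high": rng.normal(size=n),                          # relative high-level similarity
    "d_low": rng.normal(size=n),                           # relative low-level similarity
    "model_type": rng.choice(["VGG16", "CORnetS"], size=n),
    "subject": rng.integers(0, 17, size=n),                # random intercepts ignored below
})
p_left = 1 / (1 + np.exp(-(1.5 * df["d_high"] + 0.4 * df["d_low"])))
df["choice_left"] = (rng.random(n) < p_left).astype(int)

# Fixed-effects approximation of the mixed logistic model (choice ~ similarities x model type)
fit = smf.logit("choice_left ~ d_high * d_low * C(model_type)", data=df).fit(disp=False)
print(fit.summary())
```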
§ RESULTS
VGG-16 versus CORnet-S: We found that VGG-16 outperforms CORnet-S in explaining human choices based on low-level similarity, with a significant interaction between model type and low-level similarity, Z = 7.8, p = 8 × 10^-15, whereas CORnet-S outperforms VGG-16 in explaining choices based on high-level similarity, Z = -23.4, p = 2 × 10^-16, Fig. <ref> (a). Interestingly, these behavioral results correspond with the neural predictivity scores of these layers, with CORnet-S IT outperforming VGG-16 FC3 in IT neural predictivity, and VGG-16 Pool3 outperforming CORnet-S V2 in V2 neural predictivity, Fig. <ref>. The human subjects are more likely to select the left image when the high-level similarity between the left, relative to right, image and the root image is higher, i.e., D_high(I_root, I_1, I_2)>0, Z = 34.2, p = 2 × 10^-16. Additionally, they select the left image more often if the low-level similarity between the left, relative to right, image and the root image is higher, i.e., D_low(I_root, I_1, I_2)>0, Z = 11.0, p = 2 × 10^-16. This result is depicted in Fig. <ref> (a) by showing the categorical preference of whether low-level similarity with the root image was higher for the left, relative to the right, image. However, we note that our statistical model used continuous similarity values. Fig. <ref> (a) shows that low-level similarity exerts an additive effect: subjects are even more likely to select the left image if it is more similar to the root at both high and low levels. In contrast, subjects are less likely to select the image with higher high-level similarity if the low-level similarity favors the alternative option, as in Fig. <ref>. Similarities derived from CORnet-S do not show the same pattern, as shown in Fig. <ref> (b): whereas high-level similarity metrics derived from CORnet-S do explain choices, Z = 28.9, p = 2 × 10^-16, low-level similarity metrics do not, p > 0.1.
Other layers of VGG-16 also explain choices: The results are shown in Fig. <ref> (c-f). As VGG-16 has many layers, we can select other high-level and low-level layers based on their hierarchy and neural predictivity scores. Therefore, we recomputed D_high and D_low using different VGG-16 layers. Interestingly, some of the layer combinations exhibit the same behavior only for some high-level similarity levels (D_high), unlike the initial VGG-16 layers that we chose to create VGGSet. For the high-level layer FC3 and the low-level layer Pool2 (Fig. <ref> (c)), the human subjects tend to select the left image when the high-level similarity between the left, relative to right, image and the root image is higher, Z = 33.8, p = 2 × 10^-16. They also select the left image more often if the low-level similarity between the left, relative to right, image and the root image is higher, Z = 7.6, p = 3.1 × 10^-14. The same trends apply for other layer combinations with different statistical significance levels, as follows: The high- and low-level layer pair FC1 and Conv11 has Z = 28.4, p = 2 × 10^-16 and Z = 9.8, p = 2 × 10^-16 for high- and low-level similarity respectively. The high- and low-level layer pair FC1 and Pool3 has Z = 29.7, p = 2 × 10^-16 and Z = 2.8, p = 0.005 for the high- and low-level similarity respectively. Moreover, the FC1 and Pool2 pair shows similar statistical values, with Z = 29.9, p = 2 × 10^-16 and Z = 2.8, p = 0.006 for high- and low-level similarity trends respectively. Comparing different VGG-16 layers, FC1 outperforms FC3 in explaining high-level similarity's influence on choices, Z = -20.0, p = 2 × 10^-16, which is aligned with the high-level neural predictivity score of the IT area that is higher for FC1 than FC3. Pool2 exhibits the same low-level influence on choices as Pool3, p>0.05. Conv11 also has the same low-level influence on choices as Pool3, p>0.1.
§ DISCUSSION
We found that both the high-level and low-level layers of VGG-16 can account for variance in human decision-making. Overall, we found that VGG-16 explains human decision-making in our task better than CORnet-S, despite CORnet-S having a higher overall neural predictivity score according to Brain-Score <cit.>. We note that although CORnet-S outperforms VGG-16 at explaining IT neural data as shown in Fig. <ref>, VGG-16's Pool3 layer outperforms CORnet-S's V2 layer at explaining V2 neural data, Fig. <ref> (a). We speculate that an increased ability of VGG-16 to explain neural responses in lower-level regions like V2 may account for its ability to explain the influence of low-level visual features on human decisions. Because we did not collect neural data, future research probing the relationship between CNN predictions, neural responses, and human behavior at lower levels of the visual hierarchy is needed. A weakness of our approach is that our stimulus set was designed using VGG-16 similarities, but not CORnet-S similarities. Additionally, we did not examine every layer in either network. Nonetheless, several lower-level layers in CORnet-S were unable to account for an influence of low-level visual properties on behavior, whereas multiple low-level layers in VGG-16 were able to predict behavior.
The key advance of our approach is that it reduces the correlation between high- and low-level visual features in natural images. Our approach contributes to the growing use of neural networks to generate image sets that permit new ways of addressing questions in neuroscience <cit.>.
|
http://arxiv.org/abs/2409.03057v1 | 20240904201212 | VECA: Reliable and Confidential Resource Clustering for Volunteer Edge-Cloud Computing | [
"Hemanth Sai Yeddulapalli",
"Mauro Lemus Alarcon",
"Upasana Roy",
"Roshan Lal Neupane",
"Durbek Gafurov",
"Motahare Mounesan",
"Saptarshi Debroy",
"Prasad Calyam"
] | cs.NI | [
"cs.NI"
] |
VECA: Reliable and Confidential Resource Clustering for Volunteer Edge-Cloud Computing
This material is based upon work supported by the National Science Foundation (NSF) under Award Number: OAC-2232889. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.
Hemanth Sai Yeddulapalli1,
Mauro Lemus Alarcon1,
Upasana Roy1,
Roshan Lal Neupane1,
Durbek Gafurov1,
Motahare Mounesan2,
Saptarshi Debroy2,
Prasad Calyam1
1University of Missouri-Columbia, USA;
2City University of New York, USA
Email:
1{hygw7, lemusm, u.roy, neupaner, durbek.gafurov, calyamp}@missouri.edu;
[email protected];
[email protected]
=======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Volunteer Edge-Cloud (VEC) computing has a significant potential to support scientific workflows in user communities contributing volunteer edge nodes. However, managing heterogeneous and intermittent resources to support machine/deep learning (ML/DL) based workflows poses challenges in resource governance for reliability, and confidentiality for model/data privacy protection. There is a need for approaches to handle the volatility of volunteer edge node availability, and also to scale the confidential data-intensive workflow execution across a large number of VEC nodes. In this paper, we present VECA, a reliable and confidential VEC resource clustering solution featuring three-fold methods tailored for executing ML/DL-based scientific workflows on VEC resources. Firstly, a capacity-based clustering approach enhances system reliability and minimizes VEC node search latency. Secondly, a novel two-phase, globally distributed scheduling scheme optimizes job allocation based on node attributes and using time-series-based Recurrent Neural Networks. Lastly, the integration of confidential computing ensures privacy preservation of the scientific workflows, where model and data information are not shared with VEC resources providers. We evaluate VECA in a Function-as-a-Service (FaaS) cloud testbed that features OpenFaaS and MicroK8S to support two ML/DL-based scientific workflows viz., G2P-Deep (bioinformatics) and PAS-ML (health informatics). Results from tested experiments demonstrate that our proposed VECA approach outperforms state-of-the-art methods; especially VECA exhibits a two-fold reduction in VEC node search latency and over 20% improvement in productivity rates following execution failures compared to the next best method.
Volunteer Edge-Cloud Computing, k-means Clustering, Recurrent Neural Networks, Confidential Computing, Function-as-a-Service, Volatility
§ INTRODUCTION
There has been an exponential growth of data-intensive applications that rely on automation of complex scientific workflows with high demand for computational resources. To address this demand for processing power, the paradigm of volunteer computing <cit.> has evolved, offering a decentralized solution that harnesses the collective power of sporadically available edge compute resources (e.g., laptops, desktops, servers) contributed voluntarily by individuals or organizations <cit.>. In recent times, the paradigm of Volunteer Edge-Cloud (VEC) computing has become possible due to the emergence of frameworks such as BOINC <cit.> and Kubernetes-at-the-Edge <cit.>, where participants from scientific application communities unite to form a distributed computing ecosystem (e.g., the Open Science Data Federation <cit.>) to benefit each other for tasks related to scientific data analytics using machine/deep learning <cit.>.
The inherent intermittent and volatile availability of VEC node resources can lead to unpredictable workflow performance <cit.>, unlike the predictable performance typically observed in data centers or cloud platforms with unconstrained/high-availability resources. This is compounded by the resource management complexity in VEC environments, where VEC node providers can unexpectedly alter resource configurations, expose workflows to threat actors, and ultimately impact the capacity, trust, and availability of VEC node resources <cit.>. Consequently, for the orchestration of a potentially large scale of VEC nodes, the cloud hub needs to address challenges in scheduling the workflows with consideration of resource governance for reliability (to handle volatility in VEC node availability), and also confidentiality for model/data privacy protection (to avoid exposing model and data information to the VEC resource providers).
There is a dearth of work that can address the scalability, reliability, privacy, governance, and communication overhead issues between nodes and job schedulers in VEC computing environments, posing the need for efficient mechanisms for managing a large number of diverse and unpredictable nodes <cit.>. Recent works such as VECFlex <cit.> and VELA <cit.> offer solutions for scalable and reliable services to manage the edge-cloud continuum; however, they are not effective in handling the intermittent nature of VEC nodes and also in ensuring confidentiality of ML/DL-based scientific workflows.
In this paper, we present a volunteer edge-cloud allocation (VECA) framework, which provides a reliable and confidential VEC resource clustering solution for executing ML/DL-based scientific workflows on VEC computing resources. The VECA solution features three methods to effectively cope with the intermittent nature of VEC nodes, and to ensure trusted computing with stringent data and model confidentiality. Firstly, a capacity-based clustering method using the k-means algorithm is presented to aggregate VEC nodes based on their capacity similarity (e.g., CPU, RAM, storage), thereby minimizing the VEC node search latency of the scheduler. Secondly, a novel two-phase, globally distributed scheduling scheme is proposed to optimize job allocation based on workflow capacity specifications by using time-series forecasting with recurrent neural networks (RNN) and a fail-over governance mechanism to improve the productivity rate. Lastly, a confidential computing certifier is integrated to ensure privacy-preservation of the ML/DL-based scientific workflows within a trusted execution environment (TEE), using the AWS Nitro Enclaves <cit.> as an exemplar implementation.
We evaluate VECA in a Function-as-a-Service (FaaS) emulation testbed that features OpenFaaS <cit.> and MicroK8S <cit.> technologies, and uses the Amazon Web Services (AWS) cloud platform capabilities. We consider two ML/DL-based scientific workflows viz., G2P-Deep (bioinformatics) <cit.> and PAS-ML (health informatics) <cit.> that are set up as Docker containers and are executed as serverless functions (i.e., without the need for provisioning specific servers) on scheduled VEC nodes. We implement the VECA components as microservices that use REST APIs and Message Queues for interaction. Our evaluation results demonstrate that our VECA approach significantly reduces VEC node search latency compared to existing baseline solutions, i.e., VECFlex <cit.> and VELA <cit.>. Furthermore, we evaluate how VECA enhances the overall productivity rate with availability prediction using RNN and fail-over capability using a governance strategy that involves distributed cache management implemented via Redis <cit.>.
The remainder of this paper is organized as follows: Section <ref> presents the related work. Section <ref> describes the VECA problem formulation and outlines the proposed solution. Section <ref> details the k-means clustering approach. Section <ref> discusses the two-phase scheduling and confidential computing based scientific workflow execution. Section <ref> details the performance evaluation. Section <ref> concludes the paper and outlines future directions.
§ RELATED WORK
§.§ Clustering for Volunteer Edge-Cloud Computing
Recent works in the edge-cloud continuum such as VELA <cit.> have set the precedent for distributed scheduling systems that effectively bridge the gap between cloud and edge computing realms. While this approach points out the inefficiencies of random cluster selection, it lacks specificity in considering the characteristics/behavior of the nodes for cluster selection. Similarly, CLARA <cit.> highlights the advantages of leveraging clustering to enhance resource availability but fails to address the problem of efficient VEC node search and cluster-based resource allocation. Other recent works such as VECFlex <cit.> and Greedy-Random <cit.> address the brokering of VEC nodes for execution of data-intensive scientific workflows, however they do not consider clustering of the VEC nodes to meet workflow demands. The survey <cit.> provides details on how unsupervised learning algorithms such as k-means can be used for clustering the workloads. Inspired by the above works, VECA solution advances prior research by introducing an intelligent capacity-based clustering approach to reduce the search space in VEC computing for increasing efficiency in ML/DL-based scientific workflow scheduling.
§.§ Distributed Scheduling for Intermittent Availability
The inherent complexities and the sporadic nature of volunteer resource contributions impact both the capacity and reliability of VEC systems, as noted by <cit.>. This issue is compounded by the need for sophisticated scheduling mechanisms capable of handling the unpredictable availability of resources. Prior works such as OneEdge <cit.> and Mesos <cit.> underscore the importance of geo-distributed infrastructure and sophisticated resource sharing mechanisms, yet they do not address distributed scheduling challenges related to intermittent node availability. In contrast, approaches such as the application-aware task scheduling discussed in <cit.> partly address node volatility issues, however they do not align well with issues on volunteer resource dynamics. Addressing the unpredictable resource availability in a VEC environment, advancements have been made through stochastic models and semi-Markov processes as explored by <cit.>. These research studies provide a foundation for predictive analytics, which is crucial for anticipating resource availability and managing disruptions in VEC environments, as elaborated by <cit.>. The findings in these works justify the predictive analytics component of our VECA approach that features a novel time-series based RNN model to address issues of distributed scheduling of VEC nodes with intermittent availability.
§ PROBLEM FORMULATION AND SOLUTION OVERVIEW
In this section, we first discuss the problem of executing scientific workflows via the management of dynamic volunteer resources, while addressing security and privacy concerns in VEC environments. Subsequently, we present our VECA solution overview to optimize VEC node resource allocation, manage the intermittent nature of volunteer resources, and preserve the privacy and confidentiality of data and models in ML/DL-based scientific workflow execution.
§.§ Executing Scientific Workflows in VEC Environments
§.§.§ Challenges
Within a VEC environment, we encounter tasks and resources with diverse needs and specifications. On one side, there are workflows with specific performance and security requirements, while on the other side, there exists a large number of volunteer resources with disparate specifications and security setups, as illustrated in Fig. <ref>. Current distributed scheduling systems include clustering approaches to group VEC nodes based on specific factors and manage the allocation of resources at the cluster level <cit.>. However, these approaches lack effective mechanisms to manage the large number of volunteer resources in a manner that aligns with the dynamic and heterogeneous requirements of workflows. This gap results in suboptimal cluster selection, reducing the overall efficiency of resource allocation and utilization in VEC environments. The challenge lies in designing and implementing a clustering mechanism that can effectively map the computational requirements of workflows to the capabilities of available VEC node resources. The objective is to optimize the alignment between task demands and VEC node capabilities, minimizing resource allocation overhead and enhancing system responsiveness.
Given a set of VEC nodes N = {n_1, n_2, …, n_m} and a set of ML/DL-based scientific workflows W = {w_1, w_2, …, w_k}, where each workflow w_j has defined resource requirements R_j = (r_j1, r_j2, …, r_jp) across p parameters (CPU, RAM, and Storage), the goal is to optimally cluster N nodes into k clusters C = {c_1, c_2, …, c_q} such that:
C = Cmin∑_i=1^k∑_n ∈ c_i d(n, μ_i)
where d(n, μ_i) is a distance function that measures the dissimilarity between a node n and the centroid μ_i of cluster c_i, reflecting the fit between node capabilities and workflow requirements.
§.§.§ k-means clustering as a VEC scheduling solution
To address the challenge of effectively clustering VEC nodes, we have developed an advanced mechanism utilizing the k-means algorithm <cit.>. As illustrated in Fig. <ref>, our VECA solution architecture incorporates a clustering feature to cluster VEC node resources based on capacity characteristics. Once clusters are defined, we initiate the first stage of our two-phase scheduling mechanism, selecting the cluster that is most likely to be effective and suitable for executing a particular workflow. We limit the search granularity of VEC nodes to the cluster level, reducing the search latency associated with this phase.
Details of this approach are presented later in Section <ref>.
§.§ Managing Dynamic Volunteer Resources
§.§.§ Challenges
VEC environments are characterized by the volatility of volunteer-provided resources, which manifests in unpredictable availability patterns. In Fig. <ref>, we illustrate this characteristic with an event in FaaS Cluster n in which a VEC node instantly goes offline in the middle of workflow execution. This intermittency poses a substantial risk to the continuity and reliable/predictable execution of scientific workflows <cit.>. Formally, the challenge is to develop a predictive and adaptive system that minimizes the disruptive impact of this intermittency on workflow execution within the VEC environment. This requires both forecasting of future availability and a fail-over mechanism that enables smooth recovery and continued operation following failures.
Modeling Node Availability
Node availability can be modeled as a stochastic process, where the state of each node is represented as a binary variable x_t at time t, indicating whether the node is online (x_t = 1) or offline (x_t = 0).
Predictive Modeling
We propose using a Recurrent Neural Network (RNN) to model the time-dependent sequences of node availability. The state of the RNN at time t, denoted as h_t, is computed as:
h_t = ReLU(W_ih x_t + b_ih + W_hh h_(t-1) + b_hh)
where W_ih, W_hh are the input-hidden and hidden-hidden weight matrices, b_ih, b_hh are biases, and ReLU is the activation function providing non-linearity.
The objective is to maximize the overall availability of the computing resources by minimizing the probability of workflow failures due to VEC node unavailability. This is achieved by optimizing the selection and scheduling strategies based on the predictive models and fail-over mechanisms as detailed later in Section <ref>.
§.§.§ Solution to address the dynamic nature of volunteer resources
We propose a nuanced assessment of nodes within the selected cluster, focusing on their future availability and geographic proximity. We propose a two-phase scheduler, as shown in Fig. <ref>, that harnesses RNN-based time-series forecasting and a fail-over mechanism. In the event of an execution interruption, the system is adept at dynamically reassigning the workflow to the subsequent optimal node within the cluster by reading the workflow details from the cluster cache. This prevents having to go back to the source for reassignment, reducing round-trip times and maintaining a seamless operational flow. Such a process design with a two-phase scheduling approach aims to ensure efficient, reliable, and interruption-resistant scientific workflow execution. Details of this approach are presented later in Section <ref>.
§ CAPACITY BASED K-MEANS CLUSTERING
In this section, we detail the steps and related implementation of our k-means clustering approach for intricate VEC resource management and scientific workflow orchestration.
§.§ VEC Nodes Characterization, Optimization and Clustering
We consider the capacity characteristics of VEC nodes, recognizing them as crucial components for scientific workflow execution. This characterization encompasses quantitative metrics such as: (a) number of CPUs, representing the processing power, (b) RAM to indicate the memory capacity of each node, and (c) storage size to reflect the available storage on each node.
We utilized the Elbow method to determine the optimal number of clusters from the given pool of VEC resources. In our example case that involved 50 VEC nodes, the Elbow method results in 4 clusters.
§.§ Implementing k-means for VEC Node Clustering
For our k-means clustering implementation, we trained our model on 50 VEC nodes, generating their characteristics as mentioned in Section <ref>. This dataset was generated to replicate the real-world scenario of a VEC computing environment. Before starting the clustering process, we standardized the dataset using the StandardScaler from scikit-learn, ensuring that each feature had a mean of approximately 0 and a variance of 1. Standardization is a crucial pre-processing step, especially when features have different units of measurement, as it puts them on the same scale, allowing the clustering algorithm to converge more effectively. We used the heuristic Elbow method to determine the optimal number of clusters. This approach involves running the k-means clustering process on the dataset for a range of values of k (the number of clusters). In this case, we consider k = range(1, 9) for the experimentation. For each value of k, the Sum of Squared Distances (SSD) within each cluster is computed. This measure, also known as inertia, quantifies the compactness of the clusters, with lower values indicating better clustering.
The k-means volunteer node clustering algorithm (as depicted by Algorithm <ref>) was applied to the dataset containing multidimensional descriptions of the VEC nodes. The SSD for each value of k was plotted against the corresponding k values to visualize the Elbow curve as shown in Fig. <ref>. The “Elbow” point on this curve, where the rate of decrease sharply changes, indicates the appropriate number of clusters for the data. This is based on the principle that increasing the number of clusters beyond the true number does not significantly improve the SSD. This method is particularly useful for identifying the value of k that balances informativeness with simplicity, thereby avoiding over-fitting. The Elbow plot depicted in Fig. <ref> helps in determining the number of clusters where the additional variance explained does not justify adding another cluster. In this specific case, the optimal number of clusters is 4. With the VEC nodes appropriately grouped based on their similarity in capacity characteristics, re-clustering is performed whenever there is a 10% increase in the number of cluster nodes. Following this, the VEC environment is now prepared for the second phase of our approach.
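A minimal sketch of this clustering step is shown below, using scikit-learn's StandardScaler and KMeans. The synthetic capacity table merely stands in for the 50-node dataset described above, and the final k = 4 mirrors the elbow reading reported here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic capacity table for 50 VEC nodes: [CPUs, RAM (GB), storage (GB)]
rng = np.random.default_rng(42)
nodes = np.column_stack([
    rng.integers(2, 64, 50),
    rng.integers(4, 256, 50),
    rng.integers(50, 2000, 50),
]).astype(float)

X = StandardScaler().fit_transform(nodes)

# Elbow curve: within-cluster sum of squared distances (inertia) for k = 1..8
ssd = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in range(1, 9)]

# Final clustering with the elbow value (k = 4 in this example)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
cluster_labels = kmeans.labels_
```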
§ DISTRIBUTED TWO-PHASE SCHEDULER
Scientific users will be provided with a User Interface to submit the workflow. Upon workflow submission, the scheduler initiates a two-phase scheduling algorithm (as shown in Algorithm <ref>) as a pipeline <ref>. Phase one of the scheduler is executed in the Cluster Selection Controller node of the Cloud Hub. In phase one of the pipeline, the scheduler selects a cluster based on the workflow's capacity requirements using the k-means algorithm, passing the new data point as an input to the model to determine the nearest cluster (as depicted in Step 1 in Fig. <ref>). This involves delegating the workflow to a cluster agent node, which possesses comprehensive data on VEC node availability in that particular cluster. Phase two of the scheduler is executed in the Agent Node of the selected cluster, and the asynchronous inter-scheduler communication takes place using a RabbitMQ message queue. In phase two, the selected cluster's nodes undergo evaluation based on future availability and geo-location. This process utilizes an RNN-based feed-forward neural network trained on time-series data to forecast node availability (Step 2). The scheduler then assigns the workflow to the most suitable node, taking into account geographic proximity to the scientific user for node selection (Step 3). Below, we provide the detailed implementation of the two-phase scheduler.
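A simplified sketch of the phase-one cluster selection is shown below; it refits the capacity clustering on a synthetic node table (standing in for the fitted model of the previous section) and maps a hypothetical workflow capacity request to its nearest cluster, whose agent node would then receive the job over the message queue.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in capacity table (CPUs, RAM GB, storage GB) and fitted clustering model
rng = np.random.default_rng(0)
nodes = np.column_stack([rng.integers(2, 64, 50),
                         rng.integers(4, 256, 50),
                         rng.integers(50, 2000, 50)]).astype(float)
scaler = StandardScaler().fit(nodes)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(nodes))

# Phase one: map an incoming workflow's capacity request to the nearest capacity cluster
workflow_req = np.array([[8.0, 32.0, 500.0]])   # hypothetical CPU, RAM, storage request
cluster_id = int(kmeans.predict(scaler.transform(workflow_req))[0])
print(f"delegate workflow to the agent node of cluster {cluster_id}")
```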
§.§ RNN-based Time-series Forecasting for Availability Prediction
In VECA, time series forecasting serves as a critical tool for enhancing system robustness by predicting the availability of VEC nodes. It allows for preemptive scheduling decisions, ensuring that the workflows are allocated to nodes when they are most likely to be available, thereby minimizing downtime and optimizing resource utilization. This is Step 2 in Fig. <ref>, where we first sample all the available VEC nodes of the cluster at a given moment and pass them through an RNN model to predict the future availability. RNN-based feed-forward neural networks, with their inherent strength in handling sequential data, are an ideal choice for this forecasting task <cit.>. Their ability to learn from historical availability patterns enables the prediction of future node statuses, making the system more reliable and efficient. To the best of our knowledge, this is the first paper to propose time series forecasting for VEC computing. Herein, we further detail the model implementation.
§.§.§ Custom dataset preparation
To evaluate the effectiveness of our approach in a realistic setting, we constructed a synthetic dataset encompassing data for 50 VEC nodes and their availability over a one-year period. This dataset incorporates diverse availability patterns, reflecting real-world scenarios. Some nodes exhibit limited availability during typical working hours (weekdays, 9AM-5PM), while others, likely contributed by research labs or universities, demonstrate high availability throughout the week. The dataset enables the model to learn the relationships between day of the week, hour, and VEC node ID, ultimately predicting availability with robustness. This approach can be readily extended to capture real-world VEC node availability data using node monitoring.
§.§.§ Data pre-processing
As part of the pre-processing, categorical features (VolunteerID, Weekday) are converted into a numerical format using OneHotEncoder. This step expands the dimensionality, where each unique category is represented by a binary vector. The `Hour' feature undergoes normalization via StandardScaler, transforming it to have a mean of 0 and a standard deviation of 1, improving model convergence speed and stability.
§.§.§ Model architecture
The RNN model is constructed with a specified input size (matching the feature vector's dimension), hidden size (determining the complexity and capacity of the model), and output size (1, for binary availability prediction). RNNs leverage the sequential nature of time series data, using the hidden state that carries information across time steps to capture temporal dependencies.
The input encoding format for RNN is given by:
X = [(VID, WD), (H)]
where VID, WD and H are VolunteerID, Weekday, and Hour respectively.
The hidden state at time t is computed as:
h_t = tanh(W_ih x_t + b_ih + W_hh h_(t-1) + b_hh)
The output at time t is given by:
o_t = W_ho h_t + b_o
The predicted availability is obtained using the sigmoid function:
ŷ_t = σ(o_t)
where σ denotes the sigmoid function, transforming the RNN output to a probability for availability prediction.
In the provided Equations <ref>, <ref>, and <ref>, the values W_ih, W_hh, and W_ho represent the weight matrices for transitions from input to hidden layer, hidden layer to itself, and hidden layer to output layer, respectively. The bias terms for these transitions are denoted by b_ih, b_hh, and b_o respectively. The tanh function in Equation <ref> introduces non-linearity to the hidden state computation, while the sigmoid function σ transforms the RNN's output to a probability, suitable for binary classification tasks such as availability prediction where the value ranges from 0 to 1, depicting the probability of VEC node availability for a specific time under consideration.
§.§.§ Training process
We trained the model for 60 epochs with a hidden size of 128; in each step the model makes predictions, calculates the loss via a binary cross-entropy loss on the sigmoid (logistic) output, and updates the weights using backpropagation with an Adam optimizer for adaptive learning rate adjustments, with the learning rate finalized at 0.001. The RNN's forward pass computes the output from the current input and the previous time step's hidden state, followed by a linear transformation for the final prediction.
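The sketch below illustrates this setup in PyTorch. The input width, window length, and synthetic tensors are assumptions for illustration; the actual feature encoding follows the one-hot scheme described above, and plain BCELoss on the sigmoid output is used here as one reasonable reading of the loss described in the text.

```python
import torch
import torch.nn as nn

class AvailabilityRNN(nn.Module):
    """RNN over hourly feature vectors, ending in a sigmoid availability probability."""
    def __init__(self, input_size, hidden_size=128):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)  # tanh recurrence
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                              # x: (batch, seq_len, input_size)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out[:, -1, :]))  # P(node stays online)

# Assumed encoding width: 50 one-hot node IDs + 7 one-hot weekdays + 1 scaled hour = 58
model = AvailabilityRNN(input_size=58)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

x = torch.randn(256, 24, 58)                   # synthetic 24-hour feature windows
y = torch.randint(0, 2, (256, 1)).float()      # synthetic online/offline labels
for epoch in range(60):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```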
§.§.§ Output interpretation
The output generated by the trained model indicates the probability of a node remaining online, with values scaled between 0 and 1. A value approaching 1 suggests a high likelihood of the node maintaining availability for time t. This probabilistic output enables a nuanced assessment of node reliability in real-time scenarios.
§.§ Geo-location-based Node Selection for Workflow Execution
Incorporating geo-location awareness into the system significantly enhances user satisfaction by prioritizing the selection of computing nodes closest to the user's location for workflow execution. This is Step 3 in Fig. <ref>, where we filter for VEC nodes with predicted_availability ≥ 0.8 and pick the nearest one for executing the workflow. By leveraging geographical proximity, the system can offer more responsive and tailored computing services. This geo-location-based selection strategy, underpinned by mathematical distance calculation as illustrated in Algorithm <ref>, is pivotal for optimizing resource allocation in distributed computing.
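A minimal sketch of this selection step is given below; the haversine great-circle distance stands in for the distance calculation used by the scheduler, and the node records with coordinates and RNN-predicted availabilities are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def select_node(nodes, user_lat, user_lon, threshold=0.8):
    """Among nodes predicted available (>= threshold), pick the one closest to the user."""
    candidates = [n for n in nodes if n["predicted_availability"] >= threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda n: haversine_km(user_lat, user_lon, n["lat"], n["lon"]))

# Hypothetical cluster snapshot after the RNN forecasting step (Step 2)
nodes = [
    {"id": "vec-01", "lat": 38.95, "lon": -92.33, "predicted_availability": 0.91},
    {"id": "vec-02", "lat": 40.71, "lon": -74.00, "predicted_availability": 0.83},
    {"id": "vec-03", "lat": 34.05, "lon": -118.24, "predicted_availability": 0.42},
]
best = select_node(nodes, user_lat=38.63, user_lon=-90.20)
```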
§.§ Confidential Computing-based Workflow Execution
As the next step (Step 4) in the Two-phase scheduler, if the scientific user chooses to run the workflow on a TEE that delivers CC, the workflow will be assigned to the VEC node that has AWS Nitro installed. In the implementation of CC using AWS Nitro, the process is structured into four distinct steps, ensuring the integrity and confidentiality of the computations.
Building enclave: Involves building the Encrypted Image Snapshot (EIS) from the Docker image, safeguarding it during storage and transit.
Running enclave: Involves running the enclave on AWS Nitro-enabled EC2 instances, which provides isolated CPU and memory resources that are accessible only to the enclave itself.
Validating enclave: This is achieved through the Attestation Document, a cryptographic proof generated at the enclave's startup, detailing its identity and confirming the integrity of its contents.
Terminating enclave: Once the required computations are completed, the enclave is securely shut down, ensuring that all sensitive data and state information are erased, preventing any residual data exposure.
Through these steps, we adapt the AWS Nitro services for executing workflows in a secure and controlled manner, utilizing advanced isolation, encryption, and attestation to meet the stringent demands of confidential computing.
§.§ Fail-over Mechanism
In the event of a workflow execution failure on any VEC node, the system's fail-over mechanism plays a crucial role in ensuring robust and efficient recovery. This process leverages the Redis cache to swiftly retrieve essential workflow details and the pre-computed order of VEC nodes, thereby avoiding the need to revisit the origin of the workflow data or to re-execute the RNN model for node prioritization. By storing this data in the Redis cache, the system significantly reduces round trip times and avoids the computational overhead associated with re-running the initial phases of the scheduler. As depicted in Step 5 of Fig. <ref>, upon failure, the process resumes from Step 3, seamlessly continuing the execution without unnecessary delays. The resultant workflow data is then promptly relayed back to the agent node, subsequently forwarded to the main scheduler, and finally stored on a Flask server to display the execution results to the scientific workflow user. This fail-over governance strategy not only enhances the system's resilience against disruptions but also ensures that the resource allocation remains optimal i.e., execution times are minimized, maintaining a high level of service continuity for end users.
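A minimal sketch of this cache-backed fail-over logic is shown below using the redis-py client; the host name, key layout, and field names are assumptions for illustration rather than VECA's actual schema.

```python
import json
import redis

r = redis.Redis(host="cluster-agent.local", port=6379, decode_responses=True)  # hypothetical host

def cache_schedule(workflow_id, workflow_spec, ranked_nodes):
    """Store the workflow spec and the RNN-ranked node order at the cluster agent."""
    r.hset(f"wf:{workflow_id}", mapping={
        "spec": json.dumps(workflow_spec),
        "nodes": json.dumps(ranked_nodes),
        "next_index": 0,
    })

def fail_over(workflow_id):
    """On execution failure, return the next-best cached node without re-running phase one."""
    entry = r.hgetall(f"wf:{workflow_id}")
    ranked = json.loads(entry["nodes"])
    nxt = int(entry["next_index"]) + 1
    if nxt >= len(ranked):
        return None                     # cluster exhausted; escalate to the main scheduler
    r.hset(f"wf:{workflow_id}", "next_index", nxt)
    return ranked[nxt]
```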
§ PERFORMANCE EVALUATION
We have developed a comprehensive VEC web-based tool published on GitHub <cit.>, where a scientific workflow user can submit his/her workflow using a provided user interface. To implement the VECA solution for evaluation experiments, we define a technology stack that includes OpenFaaS, MicroK8s, and Dockerization. OpenFaaS enables encapsulation of complex functionalities into scalable, serverless functions, which are ideally suited for heterogeneous VEC environments. MicroK8s simplifies Kubernetes orchestration, offering a lightweight solution ideal for the decentralized nature of the VEC resources. Through these technologies, we ensure that our system not only addresses current security challenges but is also able to adapt to the evolving landscape of distributed computing.
In the following, we detail our evaluation experiments on our approach using VEC Node Search Latency, Productivity Rate metrics.
§.§ VEC Node Search Latency
VEC Node Search Latency is a crucial performance metric in VEC environments, as it measures the time taken to identify the most appropriate VEC node for executing a given workflow. Lower latency is indicative of a more efficient system, contributing to faster workflow deployment and execution, which is critical in time-sensitive scientific computations. Thus, we study the performance of our approach for VEC node search latency and compare with state-of-the-art methods i.e., VELA <cit.> and VECFlex <cit.>.
In VECFlex, the entire pool of nodes, which can be substantial in number, must be sampled to identify the optimal node for task execution. This process is defined by:
Latency_VECFlex = Time_Node Sampling(n),
where n is the total number of nodes. This exhaustive search, while thorough, introduces significant latency, making it less desirable for time-critical tasks.
VELA, on the other hand, categorizes nodes into clusters. When a workflow is submitted, VELA randomly selects a subset of clusters and then samples nodes from these clusters. This introduces randomness and potential inefficiencies into the node selection process:
Latency_VELA = Time_Cluster Selection + Time_Node Sampling(n · c).
where n is the number of nodes per cluster and c is the number of clusters sampled. Although the search space is reduced compared to VECFlex, the random selection of clusters does not guarantee that the chosen VEC nodes are best suited to the workflow requirements, as VEC node characteristics are not considered.
Our approach, VECA, optimizes the process of VEC node search by intelligently selecting a cluster that closely matches the workflow's capacity requirements. Consequently, only the VEC nodes within this single cluster are sampled:
Latency_VECA = Time_Cluster Selection + Time_Node Sampling(n).
Although there is an additional computational overhead for selecting the most suitable cluster, VECA's targeted approach significantly reduces the overall search space by narrowing the search to VEC nodes within a single, capacity-matched cluster. In addition, VECA reduces the VEC node search latency while maintaining a high probability of node suitability for the task requirements. This fine-grained and predictive scheduling approach exemplifies the optimization of resource allocation within VEC systems, thus balancing efficiency and precision in task scheduling.
To validate the efficiency of our VECA system against state-of-the-art methods such as VELA and VECFlex, we implemented a simulation within a structured VEC environment consisting of 50 VEC nodes, strategically divided into 4 clusters using the k-means algorithm. We conducted experiments by scheduling 50 workflow instances under varied workload conditions. As illustrated in Fig. <ref>, the results demonstrate a consistently low node search latency for VECA compared to VELA and VECFlex. The graph reveals that, generally, VECA achieves lower latency in task execution, which underscores the system's effectiveness in optimizing VEC node search within clusters. Notably, there are instances where the latency numbers for VELA approach those of VECA. This convergence typically occurs during periods when multiple VEC nodes are engaged in other tasks, limiting the pool of immediately available VEC nodes. In such scenarios, VECA and VELA are restricted to selecting from a similar subset of freely available VEC nodes, which momentarily equalizes their performance.
VECA consistently outperforms the state-of-the-art solutions over a broad range of scales. Specifically, we performed experiments for a variable set of workflow instances of increasing scale {10, 50, 150, 500}, as shown in Fig. <ref>, highlighting its superior efficiency in VEC node search under distributed workloads. We note that VECA consistently exhibits a two-fold reduction in VEC node search latency compared to the next best solution, i.e., VELA. The observed performance advantage is primarily due to VECA's intelligent clustering and node selection algorithms, which significantly reduce unnecessary computational overheads for sampling VEC nodes in the resource allocation processes, ensuring optimal resource allocation and faster response times in dynamic VEC environments.
§.§ Productivity Rate
The productivity rate metric is used to measure the efficiency of a system in successfully recovering from failures and continuing operation without significant loss of functionality or data. In the context of VEC computing environments, it could refer to the system's capability to handle VEC node failures by quickly resuming tasks on alternative VEC nodes, thus ensuring minimal disruption and maintaining system performance. This metric is particularly important in distributed systems where tasks are critical and require high availability.
We define the productivity rate as the proportion of the total execution time that was not taken up by recovery actions, expressed as a percentage. This measure indicates the efficiency of the recovery process—a higher productivity rate indicates a more resilient system.
Productivity Rate = (1 - Time Taken for Recovery/Total Execution Time) × 100%,
where:
* Time Taken for Recovery is the duration from the onset of a failure to the resumption of normal operations.
* Total Execution Time is the sum of the recovery time and any time spent on normal operations as part of the workflow execution.
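For illustration, a hedged sketch of how this metric can be computed per workflow run is given below (the numerical values are examples only):

def productivity_rate(recovery_time_s, total_execution_time_s):
    # Percentage of the total execution time not taken up by recovery actions.
    if total_execution_time_s <= 0:
        raise ValueError("total execution time must be positive")
    return (1.0 - recovery_time_s / total_execution_time_s) * 100.0

# Example: 40 s spent on recovery within a 305 s workflow execution (~86.9%).
print(f"{productivity_rate(40.0, 305.0):.1f} %")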
Our experimentation results, illustrated through a box plot analysis as shown in Fig. <ref>, demonstrate that VECA significantly outperforms both state-of-the-art solutions, i.e., VECFlex and VELA, in terms of productivity rates. The mean productivity rate for VECA was 86.9%, compared to 66.7% for VELA and 65.7% for VECFlex. This superior performance of VECA can be attributed to its advanced availability prediction mechanism coupled with a strategic caching system empowered by Redis Cache, which collectively ensures a rapid resumption of workflow tasks post-failure without the need for re-sampling of nodes. By adopting VECA, VEC environments can achieve higher resilience and reliability, thus broadening their applicability in critical ML/DL-based scientific workflows, e.g., bioinformatics and health informatics, where downtime of VEC nodes can have significant impacts on the expected productivity in terms of execution times.
§ CONCLUSION AND FUTURE WORKS
In this paper, we proposed a solution viz., VECA for reliable and confidential resource clustering for VEC computing in order to address the challenges of managing VEC resources for ML/DL-based scientific workflows. By implementing capacity-based clustering, confidential computing integration, and globally distributed scheduling schemes, VECA significantly improves the ability to recover from VEC node failures, and offers a systematic set of protections to ensure privacy preservation of the ML/DL-based scientific workflows in VEC computing environments. The evaluation results demonstrate the effectiveness of VECA in reducing VEC node search latency in identifying optimal VEC nodes for workflow execution, and enhancing productivity rates to complete workflow executions, compared to existing state-of-the-art solutions such as VECFlex and VELA.
Future research can focus on integrating federated machine learning to create cluster capacities suitable for other diverse scientific workflows e.g., medical imaging with unique performance and privacy preservation requirements.
|
http://arxiv.org/abs/2409.02424v1 | 20240904040548 | Enhancing Information Freshness: An AoI Optimized Markov Decision Process Dedicated In the Underwater Task | [
"Jingzehua Xu",
"Yimian Ding",
"Yiyuan Yang",
"Shuai Zhang"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Enhancing Information Freshness: An AoI Optimized Markov Decision Process Dedicated In the Underwater Task
Jingzehua Xu1^,+,
Yimian Ding1^,+,
Yiyuan Yang2,
Shuai Zhang3
1Tsinghua Shenzhen International Graduate School, Tsinghua University, China
2Department of Computer Science, University of Oxford, United Kingdom
3Department of Data Science, New Jersey Institute of Technology, USA
Email: {xjzh23, dingym24}@mails.tsinghua.edu.cn, [email protected], [email protected]
^+ These authors contributed equally to this work.
==========================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Ocean exploration utilizing autonomous underwater vehicles (AUVs) via reinforcement learning (RL) has emerged as a significant research focus. However, underwater tasks have mostly failed due to the observation delay caused by acoustic communication in the Internet of underwater things. In this study, we present an AoI optimized Markov decision process (AoI-MDP) to improve the performance of underwater tasks. Specifically, AoI-MDP models observation delay as signal delay through statistical signal processing, and includes this delay as a new component in the state space. Additionally, we introduce wait time in the action space, and integrate AoI with reward functions to achieve joint optimization of information freshness and decision-making for AUVs leveraging RL for training. Finally, we apply this approach to the multi-AUV data collection task scenario as an example. Simulation results highlight the feasibility of AoI-MDP, which effectively minimizes AoI while showcasing superior performance in the task. To accelerate relevant research in this field, the code for simulation will be released as open-source in the future.
Age of Information, Markov Decision Process, Statistical Signal Processing, Reinforcement Learning, Autonomous Underwater Vehicles
§ INTRODUCTION
The harsh ocean environment poses great challenges for ocean exploration <cit.>. As a novel approach, utilizing autonomous underwater vehicles (AUVs) via reinforcement learning (RL) has emerged as a significant research focus <cit.>. Relying on the Internet of underwater things (IoUT) <cit.>, AUVs can communicate with each other and work in collaboration to accomplish human-insurmountable tasks <cit.>. However, underwater tasks have mostly failed due to the observation delay caused by acoustic communication, which leads to the non-causality of control policies <cit.>. Although this issue can be alleviated by introducing states that incorporate past information and account for the future effects of control laws <cit.>, it becomes increasingly challenging as the number of AUVs grows, leading to more complexity in both communication and decision-making processes <cit.>.
As a significant indicator evaluating the freshness of information, the age of information (AoI) has been proposed to measure the time elapsed at the receiver since the most recently received piece of information was generated <cit.>. It has been verified to mitigate the severe delay caused by constantly sampling and transmitting observation information <cit.>. Central to this consensus is that minimizing AoI can enhance the freshness of information, thereby facilitating the efficiency of the subsequent decision-making process in the presence of observation delay <cit.>. Currently, numerous studies have focused on optimizing AoI to aid decision-making in the context of land-based or underwater tasks. For example, Messaoudi et al. optimized vehicle trajectories by minimizing the average AoI while reducing energy consumption <cit.>. Similarly, Lyu et al. leveraged AoI to assess the impact of transmission delay on state estimation, improving performance under energy constraints <cit.>. These studies primarily aim to reduce AoI by improving the motion strategies of agents, without considering the impact of information update strategies on AoI. They assume that agents instantaneously perform the current action upon receiving the previous information. However, this zero-wait strategy has been shown to be suboptimal in scenarios with high variability in delay times <cit.>. Conversely, it has been demonstrated that introducing a waiting period before updating can achieve a lower average AoI. This highlights the necessity of integrating optimized information update strategies into underwater tasks.
Furthermore, most current studies leverage the standard Markov decision process (MDP) without observation delay to model underwater tasks, which assumes the AUV can instantaneously receive the current state information and take the corresponding actions <cit.>. This idealization, however, may not hold in many practical scenarios, since signal propagation delays and channel congestion caused by high update frequencies reduce the freshness of the received information, hindering the AUV's decision-making efficiency. Therefore, extending the standard MDP framework to incorporate observation delays and AoI is necessary <cit.>.
Based on the above analysis, we propose an AoI optimized MDP (AoI-MDP) dedicated to underwater tasks, in order to improve performance in the presence of observation delay. The contributions of this paper include the following:
* To the best of our knowledge, we are the first to formulate the underwater task as an MDP that incorporates observation delay and AoI. Based on AoI-MDP, we utilize RL for AUV training to realize joint optimization of both information updating and decision-making strategies.
* Instead of simply modeling observation delay as a random distribution or a stationary stochastic variable, we utilize statistical signal processing to realize high-precision modeling via the AUV-equipped sonar, which potentially yields more realistic results.
* Through comprehensive evaluations and ablation experiments in the underwater data collection task, our AoI-MDP showcases superior feasibility and excellent performance in balancing multi-objective optimization. And to accelerate relevant research in this field, the code for simulation will be released as open-source in the future.
§ METHODOLOGY
In this section, we present the proposed AoI-MDP, which consists of three main components: an observation delay-aware state space, an action space that incorporates wait time, and reward functions based on AoI. To achieve high-precision modeling, AoI-MDP utilizes statistical signal processing (SSP) to represent observation delay as underwater acoustic signal delay, thereby aiming to minimize the gap between simulation and real-world underwater tasks.
§.§ AoI Optimized Markov Decision Process
As illustrated in Fig. 1, consider the scenario where the i-th underwater acoustic signal is transmitted from the AUV at time T_i, and the corresponding observed information is received at time D_i, AoI is defined using a sawtooth piecewise function
Δ(t)=t-T_i, D_i≤ t<D_{i+1}, ∀ i ∈𝐍.
Hence, we denote the MDP that integrates observation delay and characterizes the freshness of information through the AoI as the AoI-MDP, which can be defined by a quintuple Ω for further RL training <cit.>
Ω={𝒮, 𝒜, ℛ, Pr(s_i+1|s_i, a_i), γ} ,
where 𝒮, 𝒜, ℛ represent state space, action space and reward functions, respectively. The term Pr(s_i+1|s_i, a_i) ∈ [0, 1] indicates state transition probability distribution, while γ ∈ [0, 1] represents a discount factor.
In AoI-MDP, instead of simply incorporating AoI as a component of the reward functions to guide objective optimization through RL training, we also leverage AoI as crucial side information to facilitate decision making. Specifically, we reformulate the standard MDP's state space, action space, and reward functions. The detailed designs for each of these elements are as follows:
State Space 𝒮: the state space of the AoI-MDP consists of two parts: the AUV's observed information s^'_i, and the observation delay Y_i at time i, represented by s_i=(s^'_i,Y_i)∈𝒮^'×Y. We introduce the observation delay Y_i as a new element so that the AUV can be aware of the underwater acoustic signal delay when the sonar emits an underwater acoustic signal to detect the surrounding environment. Additionally, we achieve high-precision modeling of both s^'_i and Y_i through SSP, whose details are presented in Section 2-B.
Action Space 𝒜: the action space of the AoI-MDP consists of the tuple a_i=(a_i^', Z_i)∈𝒜^'×Z, where a_i^' denotes the actions taken by the AUV, while Z_i indicates the wait time between observing the environmental information and decision-making at time i. Through jointly optimizing the wait time Z_i and the action a_i^', we aim to minimize the AoI, enabling the AUV's decision-making policy to converge to an optimal level.
Reward Function ℛ: the reward function r_i^' in standard MDP comprises elements with different roles, such as penalizing failures, promoting efficiency, and encouraging cooperation, etc. Here, we introduce the time-averaged AoI as a new component of the reward function. Thus, the updated reward function can be represented by the tuple r_i=(r^'_i, -Δ). And the time averaged AoI can be computed as follows:
Δ = [∑_i=2^𝒩 (2Y_{i-1}+Y_i+Z_i)(Y_i+Z_i)] / [2(∑_i=1^𝒩 Z_i+∑_i=1^𝒩 Y_i)],
where 𝒩 is the length of information signal. Therefore, the time averaged AoI can be minimized through RL training.
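For clarity, a minimal Python sketch of this time-averaged AoI, given the sequences of observation delays Y_i and wait times Z_i, could look as follows (indexing follows the expression above; the numbers are illustrative):

def time_averaged_aoi(Y, Z):
    # Y[i]: observation delay of update i, Z[i]: wait time before update i.
    assert len(Y) == len(Z)
    num = sum((2 * Y[i - 1] + Y[i] + Z[i]) * (Y[i] + Z[i]) for i in range(1, len(Y)))
    den = 2 * (sum(Z) + sum(Y))
    return num / den

Y = [0.8, 1.5, 0.3, 2.0, 0.5]
print(time_averaged_aoi(Y, [0.0] * 5), time_averaged_aoi(Y, [0.4] * 5))  # zero-wait vs. constant wait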
According to the above analysis, the total reward function is set below:
R_i=∑_k=1^∞λ^(k)r^(k)_i,
where λ^(k) represents the weighting coefficient of the k-th reward function.
Based on proposed AoI-MDP, we further integrate it with RL training for the joint optimization of
information freshness and decision-making for AUVs. The pseudocode for the AoI-MDP based RL training is showcased in Algorithm 1.
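Algorithm 1 itself is not reproduced here; as a rough illustration of how the pieces fit together, one interaction episode under AoI-MDP could be sketched as below (the env and policy interfaces are placeholders, and the time-averaged AoI reuses the sketch above):

def run_episode(env, policy, n_steps):
    s_prime, Y = env.reset()                 # observed information and its delay
    Ys, Zs, task_rewards = [], [], []
    for _ in range(n_steps):
        a_prime, Z = policy((s_prime, Y))    # task action and wait time
        (s_prime, Y), r_task = env.step(a_prime, wait=Z)
        Ys.append(Y); Zs.append(Z); task_rewards.append(r_task)
    # Joint objective: accumulated task rewards minus the time-averaged AoI.
    return sum(task_rewards) - time_averaged_aoi(Ys, Zs)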
§.§ Observation Delay and Information Modeling
Different from previous work, our study enhances the state space of the AoI-MDP by taking as observed information the estimates produced by the AUV-equipped sensors. Moreover, we model the observation delay as the underwater acoustic signal delay, rather than merely treating it as a random distribution <cit.> or a stationary stochastic variable <cit.>. This approach aims to provide high-precision modeling to improve performance in the underwater environment. The schematic diagram is shown in Fig. 2.
To be specific, our study assumes the AUV leverages a sonar system to estimate the distance from itself to environmental objects. This was achieved by transmitting acoustic signals through sonar, measuring the time delay taken for these signals to propagate to the target, reflect, and return to the hydrophone, thus allowing for distance estimation. The acoustic signal propagation can be represented as
𝒳[n]=𝒮[n-Y_i]+𝒲[n], n=0,1,…,N-1,
where 𝒮[n] represents the known signal, while Y_i denotes the time delay to be estimated, and 𝒲[n] is the Gaussian white noise with variance σ^2.
We further employ the flow correlator as an estimator to determine the time delay. Specifically, this estimator carries out the following computations on each received signal:
J[Y_i]=∑_n=Y_i^Y_i+M-1 𝒳[n]𝒮[n-Y_i], 0≤ Y_i≤ N-M,
Ŷ_̂î = argmax[J[Y_i] ],
where M is the sampling length of 𝒮[n]. By finding the value of Y_i that maximizes Eq. (6a), the estimated time delay Ŷ_i is obtained through Eq. (6b).
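As an illustration, the correlator of Eqs. (6a)-(6b) can be sketched in a few lines of numpy (a pseudo-random probe waveform and the noise level are assumptions made for the example):

import numpy as np

def estimate_delay(x, s):
    # Slide the known signal s over the received signal x and pick the lag maximizing J[Y].
    M, N = len(s), len(x)
    J = np.array([np.dot(x[Y:Y + M], s) for Y in range(N - M + 1)])
    return int(np.argmax(J))

rng = np.random.default_rng(0)
s = rng.standard_normal(64)                  # known transmitted waveform S[n]
true_delay = 37
x = np.zeros(256)
x[true_delay:true_delay + 64] = s
x += 0.5 * rng.standard_normal(256)          # additive white Gaussian noise W[n]
print(estimate_delay(x, s))                  # expected: 37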
On the other hand, the AUV in our study utilizes a long linear array sensor to estimate the azimuth β between its orientation and environmental objects. The signal propagation can be expressed as follows:
x[n] = A cos[2π(F_0 d/c cosβ) n + ϕ] + 𝒲[n], n = 0, 1, …, M-1,
where F_0 denotes the frequency of transmitted signal, while d represents the interval between sensors. Besides, c indicates the speed of underwater acoustic signal propagation, while A and ϕ are unknown signal amplitude and phase, respectively.
The estimator in SSP is further leveraged to estimate the azimuth β. By maximizing the spatial periodogram, the estimate of β (0 ≤ β ≤ π/2) can be calculated:
I_s(β) = 1/M |∑_n=0^M-1 x[n] exp[-j 2π(F_0 d/c cosβ) n]|^2,
β̂ = argmax[I_s(β)].
By finding the value of β that maximizes Eq. (8a), the estimated azimuth β̂ is obtained through Eq. (8b).
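A corresponding sketch of the spatial periodogram search of Eqs. (8a)-(8b) is given below; the array geometry and signal parameters are illustrative assumptions rather than the values used in the simulation:

import numpy as np

F0, d, c, M = 10e3, 0.05, 1500.0, 64          # frequency (Hz), spacing (m), sound speed (m/s), sensors
n = np.arange(M)

def estimate_azimuth(x, betas):
    I = [np.abs(np.sum(x * np.exp(-2j * np.pi * (F0 * d / c) * np.cos(b) * n))) ** 2 / M
         for b in betas]
    return betas[int(np.argmax(I))]

true_beta = np.deg2rad(40.0)
x = np.cos(2 * np.pi * (F0 * d / c) * np.cos(true_beta) * n + 0.7)
x += 0.2 * np.random.default_rng(1).standard_normal(M)
betas = np.linspace(1e-3, np.pi / 2, 1000)
print(np.rad2deg(estimate_azimuth(x, betas)))  # close to the true 40 degrees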
Finally, the AUV can achieve target positioning using the estimates Ŷ_i and β̂. These estimates are then used as the observed information in the state space of the AoI-MDP, which potentially yields more realistic results and improves underwater performance, while reducing the gap between simulation and reality in underwater tasks.
§ EXPERIMENTS
In this section, we validate the proposed AoI-MDP through extensive simulation experiments. Further, we present the experimental results with further analysis and discussion.
§.§ Task Description and Settings
Since open-source underwater tasks are scarce, we consider the scenario of a multi-AUV data collection task as a classic example to evaluate the feasibility and effectiveness of the AoI-MDP. This task utilizes RL algorithms to train AUVs to collect data from sensor nodes in the Internet of underwater things, encompassing multiple objectives such as maximizing the sum data rate and collision avoidance, while minimizing energy consumption. For the remaining details and parameters of the task, please refer to the previous work <cit.>.
§.§ Experiment Results and Analysis
We first compared the experimental results of RL training based on AoI-MDP and standard MDP under identical conditions, respectively. Results in Fig. 3 show that AoI-MDP results in lower time-averaged AoI, reduced energy consumption, higher sum data rate, and greater cumulative rewards. This demonstrates that AoI-MDP improves the training effectiveness and performance of the RL algorithm.
Then we evaluated the generalization performance of AoI-MDP using commonly employed delay models in the communication field, including exponential, poisson and geometric distributions. The experimental results, compared with the SSP model, are shown in Table 1. The AoI-MDP based RL training demonstrates superior performance across various distributions, indicating strong generalization capabilities. Additionally, SSP for time delay modeling achieved near-optimal results in AoI optimization, sum data rate optimization, and energy consumption optimization, underscoring its effectiveness in the underwater data collection task.
We further turned our attention to comparing the generalization of AoI-MDP in various RL algorithms. We conducted experiments utilizing AoI-MDP on soft actor-critic (SAC) and conservative Q-learning (CQL), within the contexts of online and offline RL, respectively. As shown in Fig. 4, both online and offline RL algorithms can successfully adapt to AoI-MDP, while ultimately achieving favorable training outcomes.
Finally we guided the multi-AUV in the underwater data collection task using the expert policy trained via SAC algorithm based on AoI-MDP and standard MDP respectively. As illustrated in Fig. 5, the trajectory coverage trained under AoI-MDP is more extensive, leading to more effective completion of the data collection task. Conversely, under standard MDP, the trajectories of AUVs appears more erratic, with lower node coverage, thereby showcasing suboptimal performance.
§ CONCLUSION
In this study, we propose AoI-MDP to improve the performance of underwater tasks. AoI-MDP models observation delay as signal delay through SSP, and includes this delay as a new component in the state space. Additionally, AoI-MDP introduces wait time in the action space, and integrate AoI with reward functions to achieve joint optimization of information freshness and decision making for AUVs leveraging RL for training. Simulation results highlight the feasibility, effectiveness and generalization of AoI-MDP over standard MDP, which effectively minimizes AoI while showcasing superior performance in the underwater task. The simulation code will be released as open-source to advance research in the future.
[1] Z. Wang, Z. Zhang, J. Wang, C. Jiang, W. Wei, and Y. Ren, “AUV-assisted node repair for IoUT relying on multiagent reinforcement learning,” IEEE Internet of Things Journal, vol. 11, no. 3, pp. 4139–4151, 2024.
[2] Y. Li, L. Liu, W. Yu, Y. Wang, and X. Guan, “Noncooperative mobile target tracking using multiple AUVs in anchor-free environments,” IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9819–9833, 2020.
[3] R. H. Jhaveri, K. M. Rabie, Q. Xin, M. Chafii, T. A. Tran, and B. M. ElHalawany, “Guest editorial: Emerging trends and challenges in internet-of-underwater-things,” IEEE Internet of Things Magazine, vol. 5, no. 4, pp. 8–9, 2022.
[4] Z. Zhang, J. Xu, G. Xie, J. Wang, Z. Han, and Y. Ren, “Environment and energy-aware AUV-assisted data collection for the internet of underwater things,” IEEE Internet of Things Journal, vol. 11, no. 15, pp. 26406–26418, 2024.
[5] W. Wei, J. Wang, J. Du, Z. Fang, C. Jiang, and Y. Ren, “Underwater differential game: Finite-time target hunting task with communication delay,” in ICC 2022 - IEEE International Conference on Communications, 2022, pp. 3989–3994.
[6] J. Wu, C. Song, J. Ma, J. Wu, and G. Han, “Reinforcement learning and particle swarm optimization supporting real-time rescue assignments for multiple autonomous underwater vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 6807–6820, 2022.
[7] R. Talak, S. Karaman, and E. Modiano, “Optimizing information freshness in wireless networks under general interference constraints,” IEEE/ACM Transactions on Networking, vol. 28, no. 1, pp. 15–28, 2020.
[8] R. D. Yates, Y. Sun, D. R. Brown, S. K. Kaul, E. Modiano, and S. Ulukus, “Age of information: An introduction and survey,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 5, pp. 1183–1210, 2021.
[9] K. Messaoudi, O. S. Oubbati, A. Rachedi, and T. Bendouma, “UAV-UGV-based system for AoI minimization in IoT networks,” in ICC 2023 - IEEE International Conference on Communications, 2023, pp. 4743–4748.
[10] L. Lyu, Y. Dai, N. Cheng, S. Zhu, Z. Ding, and X. Guan, “Cooperative transmission for AoI-penalty aware state estimation in marine IoT systems,” in 2020 IEEE 18th International Conference on Industrial Informatics (INDIN), vol. 1, 2020, pp. 865–869.
[11] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, “Update or wait: How to keep your data fresh,” IEEE Transactions on Information Theory, vol. 63, no. 11, pp. 7492–7508, 2017.
[12] R. A. Howard, “Dynamic programming and Markov processes,” 1960. [Online]. Available: <https://api.semanticscholar.org/CorpusID:62124406>
[13] E. Altman and P. Nain, “Closed-loop control with delayed information,” SIGMETRICS Perform. Eval. Rev., vol. 20, no. 1, pp. 193–204, Jun. 1992.
[14] B. Jiang, J. Du, C. Jiang, Z. Han, and M. Debbah, “Underwater searching and multiround data collection via AUV swarms: An energy-efficient AoI-aware MAPPO approach,” IEEE Internet of Things Journal, vol. 11, no. 7, pp. 12768–12782, 2024.
[15] E. Altman and P. Nain, “Closed-loop control with delayed information,” in Proceedings of the 1992 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, ser. SIGMETRICS '92/PERFORMANCE '92. New York, NY, USA: Association for Computing Machinery, 1992, pp. 193–204.
[16] K. Katsikopoulos and S. Engelbrecht, “Markov decision processes with delays and asynchronous cost collection,” IEEE Transactions on Automatic Control, vol. 48, no. 4, pp. 568–574, 2003.
|
http://arxiv.org/abs/2409.03360v1 | 20240905090625 | Real-time diagnostics on a QKD link via QBER Time Series Analysis | [
"G. Maragkopoulos",
"A. Mandilara",
"T. Nikas",
"D. Syvridis"
] | quant-ph | [
"quant-ph"
] |
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Eulambia Advanced Technologies, Agiou Ioannou 24, Building Complex C, Ag. Paraskevi, 15342, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Eulambia Advanced Technologies, Agiou Ioannou 24, Building Complex C, Ag. Paraskevi, 15342, Greece
§ ABSTRACT
The integration of QKD systems in Metro optical networks raises challenges which cannot be completely resolved with the current technological status. In this work we devise a methodology for identifying different kinds of impairments which may occur on the quantum channel during its transmission in an operational network. The methodology is built around a supervised ML pipeline which uses QBER and SKR time series as input and requires no further interventions on the QKD system. The identification of impairments happens in real time and, even though such information cannot reverse incidents, it can be valuable for users, operators and the key management system.
Real-time diagnostics on a QKD link via
QBER Time Series Analysis
Dimitris Syvridis
September 9, 2024
===============================================================================================================================================================================================================================================
§ INTRODUCTION
After three decades of intense research <cit.> in quantum cryptographic protocols and their implementations, we are in the era where Quantum Key Distribution (QKD) systems are commercially available and ready for use in the lab or in real life. While the technology in such devices is very advanced and keeps improving, there are a couple of important obstacles which prevent their generalized integration in existing Metro optical networks.
The first obstacle is the vulnerability of quantum signals
to the effect of attenuation due to propagation in fiber or due to the presence of network components. A quantum signal for QKD purposes contains, in average, less than one photon
and therefore a simple incident of a photon loss is destructive for the carried quantum information.
Quantum repeaters <cit.>, error correcting codes <cit.>, twin-field protocols <cit.> are designed to solve in principle this problem but the technology is still immature for their generalized application.
The second important obstacle is the effect of photon addition to the quantum signal in the case where it coexists with classical signals propagating in the same optical fiber. This effect to a large extent restricts QKD signals to Single-mode Dark Fibers (SDF), and numerous studies have been carried out on the performance degradation of QKD systems under different conditions of coexistence, dependent on the wavelength spacing between classical and quantum channels, the optical power of the former, etc.
In this work we seek solutions offered by the field of Machine Learning (ML) for
advancing the integration of QKD devices in classical communication networks taking into account the aforementioned obstacles. More specifically, we develop an ML methodology to classify different kinds of
impairments (producing irreversible types of errors) which may occur on a quantum channel assuming only access to QBER and Secure Key Rate (SKR) data. This work follows the latest trend of improving the functionality of QKD via ML methods <cit.>. The objective of the reported work is to provide tools for performing real time diagnostics of the QKD system and to extract information on the impairments affecting the operation of the system. Such information can be particularly useful for the Key Management System (KMS) of the network which controls the `flaw' of keys.
The structure of the manuscript is as follows. We first present an optimized ML methodology for extracting features from QBER/SKR time series and performing classification according to these. In order to test the validity of the method we perform experiments to obtain training and test data under conditions which experimentally emulate different types of impairments. We present the results of classification for our experimental data and finally we discuss on the key-points and perspectives of the method.
§ ML METHODS
Different QKD systems differ in the QKD protocols which they implement as well as in the error correction and authentication methods. On the other hand, all QKD devices provide SKR and QBER data. We set up an ML `pipeline', sketched in Fig. <ref>, that is fed with QBER and SKR data from a QKD device.
The parameters inside are first trained with normal data and data in anomalous conditions to classify the different working phases of the
QKD link. Then by feeding this pipeline with real time sequential values of QBER and SKR from the device, one can classify the current status of the QKD link.
In more detail, we use the tsfresh python package <cit.> to extract features for each batch of N QBER and SKR data points. In our studies, where we set N=10, the total number k of extracted features for each batch is greater than 1500. These include both simple statistical measures such as mean, variance and quantiles, and more complex ones such as ARIMA coefficients and Fourier and wavelet transformations. A feature space of this size increases the chances of over-fitting, as well as the training and inference times. For these reasons, an xgboost model <cit.> is trained in order to reduce the feature space. This is achieved by choosing the top K features in terms of information gain. The choice of xgboost in our studies has proven advantageous over more standard methods such as PCA, mainly due to sparseness and missing values in the feature space. In our applications of the method, we pick the K=50 top performers provided by xgboost, which are then fed as input into a Neural Network (NN) whose architecture has a depth of 3 hidden layers (50x128x256x128x9) and where the cross-entropy loss is used in the training phase.
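A hedged sketch of this training pipeline is given below; hyper-parameters and helper names are illustrative and do not reproduce the exact configuration used in this work:

import numpy as np
import pandas as pd
from tsfresh import extract_features
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier

def batch_features(qber, skr, batch_id=0):
    # One batch = N consecutive (QBER, SKR) samples, already normalized.
    df = pd.DataFrame({"id": batch_id, "time": range(len(qber)), "qber": qber, "skr": skr})
    return extract_features(df, column_id="id", column_sort="time")

def fit_pipeline(X, y, K=50):
    # X: tsfresh features of all labelled batches, y: impairment class labels.
    xgb = XGBClassifier(n_estimators=200, importance_type="gain").fit(X.fillna(0), y)
    top_k = np.argsort(xgb.feature_importances_)[::-1][:K]   # keep the K most informative features
    nn = MLPClassifier(hidden_layer_sizes=(128, 256, 128), max_iter=2000)
    nn.fit(X.fillna(0).iloc[:, top_k], y)
    return top_k, nn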
Let us now describe the data-acquisition in training and prediction phases with a real-time usage example of the method.
Training
* Activate the QKD system and draw the first N data points. These data points are used as reference points, and all subsequent points/time series are normalized according to the median values of these data, using a MinMax scaler.
* Proceed by acquiring sequential log files with N values of QBER and SKR while
inducing impairments on the transmission line of the quantum signal. The data are then labeled according to the type of impairment.
* Use all collected data to train together the xgboost model and NN so that the data are classified to different labels
while the cross-entropy loss is minimized.
Prediction
* As for the training phase, we assume a period where the
system runs without impairment and collect N data points as reference ones.
* At every time step, one may feed the batch created by the current point and N-1 previous ones in the ML pipeline in order to conclude on impairments present on the QKD link.
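In the prediction phase the same pipeline is applied to a sliding window of the live readings, as in the following sketch (reusing batch_features, top_k and nn from above; scaler stands for the MinMax normalization against the reference batch):

from collections import deque

def monitor(stream, scaler, top_k, nn, N=10):
    window = deque(maxlen=N)
    for qber, skr in stream:                       # live (QBER, SKR) readings from the QKD device
        window.append((qber, skr))
        if len(window) == N:
            q, s = zip(*window)
            X = batch_features(scaler(q), scaler(s))
            yield nn.predict(X.fillna(0).iloc[:, top_k])[0]   # current impairment class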
§ DEMONSTRATION OF THE METHOD WITH EXPERIMENTAL DATA
The core of the experimental setting is a pair of Toshiba terminals, QKD4.2A-MU and QKD4.2B-MU, realizing an advanced one-way phase-encoded protocol with coherent states, the so called T12 protocol <cit.>. QKD Transmitter (Alice) is sending quantum data at 1310 nm to the Receiver (Bob) via a SDF. In the same fiber two low-power classical signals are co-propagating at 1530 and 1529.30 nm. A second auxiliary fiber connects the terminals for establishing classical communication from Receiver to Transmitter at 1528.77 nm. The power of all three classical channels is < +3 dBm.
The overall experimental setting for training-test data acquisition under different conditions which mark the different classes of impairments, is presented in Fig. <ref>. One can use this setting to realize 4 different configurations for acquiring data: a) Normal mode, i.e., no impairments along the transmission line, b) Coexistence with optical cw signals without amplification via EDFA, c) Coexistence with optical cw signals amplified by the EDFA, d) Photon loss induced by small fiber loops. The attenuation along the transmission line due to fiber, and optical elements (couplers, attenuators) is for all experiments at -14 dB. Below, we give quantitative details for the categorical classes of Tab. <ref> namely the wavelength λ and power P of lasers (as measured by OSA, see Fig. <ref>), as well as the excess attenuation A_exc due to the presence of fiber's loops.
In the training phase of coexistence with optical cw signals without EDFA we have
collected data for an interval of powers as indicated below and have treated all these data under the same class/label. We have done so in order to investigate whether the model is capable of being trained for a wide class of events.
* Class 1. 1 Laser: λ= 1549.38 nm, P=-{23.5, 21.7, 20.5, 19.55, 18.84,18.37, 18.1} dBm.
* Class 2. 2 Lasers:
λ_1= 1549.38, λ_2=1549.46 nm,
P_1=-{23.5, 21.7,20.5, 19.55, 18.84, 18.37, 18.1},
P_2=-{21.6, 20.2, 19.4, 19.0, 18.8, 18.9, 19.2} dBm.
* 4 Lasers & EDFA:
λ_1= 1548.5, λ_2=1549, λ_3= 1549.5 , λ_4=1550 nm,
Class 3. I=18 mA: P_1=-17.9,
P_2=-16.9, P_3=-15.6,
P_4=-15.6 dBm.
Class 4. I=21 mA: P_1=-16.5,
P_2=-15.7, P_3=-14.6,
P_4=-14.3 dBm.
Class 5. I=24 mA: P_1=-15.5,
P_2=-14.5, P_3=-13.4,
P_4=-13.1 dBm.
* Class 6. Photon Loss 20%: A_exc=-0.9 dB.
* Class 7. Photon Loss 46%: A_exc=-1.9 dB.
* Class 8. Photon Loss 67%: A_exc=-3.1 dB.
In the experiments we make a distinction between lasers with and without EDFA, since this type of amplifier incorporates an optical filter which decreases the outband ASE noise leaking into the O band where the quantum channel resides. We also perform experiments where we simulate the effect of photon losses via the formation of small loops. In these formations the radius of curvature of the fiber's coating is decreased, impelling a part of the quantum light signal, proportional to the number of loops, to escape out of the fiber. This experiment emulates in a controllable way the action of an eavesdropper or the activation of an optical element, such as a coupler or multiplexer in the network.
We proceed with the application of the ML methodology to the acquired data, first splitting these as 80%-20% for training and test respectively. The test data are always derived from the tail end of the time series, since a plain random split would result in a training set which contains data from time frames ahead of some test data, which could lead to misleading results. After training the xgboost and NN models, predictions were made on the test set and the results are provided in Tab. <ref>. These are also complemented with the chord diagram of Fig. <ref>, which gives a visual representation of the misclassified data.
The results of the table show that the Precision, i.e., the number of true positives divided by the total number of positive predictions, is high for all impairment types. On the other hand the classes `1 Laser' and `2 Lasers' are predicted less frequently than others as one can read from the Recall column. We correlate this outcome with the fact that for these specific classes data corresponding to different levels of powers for the Lasers are merged together. This argument is supported by the fact that for the cases where the EDFA is used, the model is able to distinguish between different levels of power and the prediction outcomes are very good. Finally, the model shows the highest confidence in distinguishing the different levels of photon loss, particularly in the case of 67% of photon loss, the model makes statistically no mistakes.
The macro average of the F1-score (the harmonic mean of Precision and Recall, averaged over all classes) is 0.89, and one may conclude that the developed methodology has the capacity to distinguish each class of simulated impairment with high certainty.
It is worth noting that the xgboost model alone, using all features, achieves an average F1 score of 0.86, which is lower than the 0.89 achieved by the 50 features fed into the NN. For typical tabular data the opposite trend is usually observed, and we may conclude that for the given time series the combination of xgboost and NN is preferable. We also studied the performance of the NN for different numbers K of features and found that as K decreases the performance also decreases. Finally, it is important to underline that even though in theory QBER and SKR data are correlated, in practice we observed that the inclusion of both types of time series in the analysis considerably improves the classification results.
§ CONCLUSIONS
The events of photon loss and addition on quantum signals are limiting factors across all QKD implementations. In this work, we designed and successfully tested an ML methodology that, after the training phase, permits the identification of such fundamental types of impairments in real time. To the best of our knowledge this is the first work dedicated to the detection of anomalous conditions of a QKD link by extracting information from QBER time series.
In this work the methodology is tested at a basic level by emulating anomalous conditions for the QKD link in the lab. We expect that the application of this methodology in a QKD test-bed will induce further refinements on the parameters and structure of this initial model. Finally, an advantage of the developed methodology is its QKD device-agnostic applicability that we plan to exhibit in future work.
§ ACKNOWLEDGEMENTS
This work was supported by the project Hellas QCI co-funded by the European Union under the Digital Europe Programme grant agreement No.101091504. A.M. acknowledges partial support from the European Union’s Horizon
Europe research and innovation program under grant agreement No.101092766 (ALLEGRO Project).
|
http://arxiv.org/abs/2409.03091v1 | 20240904213958 | Hamiltonian models for the propagation of long gravity waves, higher-order KdV-type equations and integrability | [
"Rossen I. Ivanov"
] | nlin.SI | [
"nlin.SI",
"76B15, 35Q35"
] |
§ HAMILTONIAN MODELS FOR THE PROPAGATION OF LONG GRAVITY WAVES, HIGHER-ORDER KDV-TYPE EQUATIONS AND INTEGRABILITY
Rossen I. Ivanov
School of Mathematics and Statistics, TU Dublin, City Campus,
Grangegorman Lower, Dublin D07 ADY7, Ireland
e-mail: [email protected]
§ ABSTRACT
A single incompressible, inviscid, irrotational fluid medium bounded above by a free surface is considered. The Hamiltonian of the system is expressed in terms of the so-called Dirichlet-Neumann operators. The equations for the surface waves are presented in Hamiltonian form. Specific scaling of the variables is selected which leads to a KdV approximation with higher order nonlinearities and dispersion (higher-order KdV-type equation, or HKdV). The HKdV is related to the known integrable PDEs with an explicit nonlinear and nonlocal transformation.
Mathematics Subject Classification (2010):
76B15 (Water waves, gravity waves; dispersion and scattering, nonlinear interaction)
35Q35 (PDEs in connection with fluid mechanics), 37K10 (Completely integrable infinite-dimensional Hamiltonian and Lagrangian systems, integration methods, integrability tests, integrable hierarchies (KdV, KP, Toda, etc.))
Keywords: Dirichlet-Neumann Operators, Water waves, Solitons, KdV equation, Kaup-Kuperschmidt equation, Sawada-Kotera equation, KdV hierarchy.
§ INTRODUCTION
In 1968 V. E. Zakharov in his work <cit.> demonstrated that the equations for the surface waves of deep, inviscid, irrotational water have a canonical Hamiltonian formulation. This result has been extended to many other situations, like long-wave models for finite depth and flat bottom <cit.>, short and intermediate wavelength water waves <cit.>, internal waves between layers of different density <cit.>, as well as waves with added shear with constant vorticity <cit.>. We provide a detailed derivation of the higher-order KdV-type model from the Hamiltonian formulation. The Hamiltonian is expressed in terms of the Dirichlet-Neumann operators. These operators have known asymptotic expansions with respect to certain scale parameters, which makes them convenient for the derivation of asymptotic PDE models with respect to these scale parameters.
We establish also the relation between the obtained HKdV model and the three known integrable PDEs with nonlinear and dispersive terms of the same types.
§ GENERAL SETUP OF THE PROBLEM AND GOVERNING EQUATIONS
An inviscid, incompressible and irrotational fluid layer of uniform density with a free surface and flat bottom is considered, as shown in Fig. <ref>.
The mean surface level is located at z=0 (where z is the vertical coordinate) and the wave elevation is given by the function η(x,t). Therefore we have
∫_ℝη(x,t) dx =0.
The flat bottom is at z=-h. The body of the fluid which occupies the domain Ω is defined as
Ω:={(x,z)∈ℝ^2: -h < z < η(x,t)}.
The subscript notation s will be used to refer to evaluation on the surface z=η(x,t) and b, if necessary, will refer to evaluation on the bottom z=-h.
Let us introduce the velocity field v=(u,w), where w is the vertical component. The incompressibility u_x+w_z=0 and the irrotationality of the flow u_z-w_x=0 allow the introduction of a stream function ψ(x,z,t) and velocity potential φ(x,z,t) as follows:
u=ψ_z=φ_x
w=-ψ_x=φ_z.
In addition,
Δφ=0, Δψ =0
in Ω. This leads to
|∇φ|^2 = ∇· (φ∇φ)= div (φ∇φ),
where ∇=(∂_x, ∂_z), Δ=∇^2.
The Euler equations written in terms of the velocity potential produce the Bernoulli condition on the surface
(φ_t)_s+1/2|∇φ|_s^2+g η=0
where g is the acceleration due to gravity.
There is a kinematic boundary condition on the wave surface given by
w=η_t +uη_x or (φ_z)_s=η_t + (φ_x)_s η_x
and on the bottom
(φ_z)_b=0.
The equations (<ref>) and (<ref>) suggest that the dynamics on the surface is described by two variables – the surface elevation η and the velocity potential ϕ=(φ)_s. In fact, it turns out that these are canonical Hamiltonian variables in the Zakharov's formulation <cit.>, which we present in the next section.
We make the assumption that the functions η(x,t), φ(x,z,t) are in the Schwartz class with respect to the x variable, for all possible values of the other variables[The Schwartz class function is essentially a function f(x) such that f(x), f'(x), f”(x), ... all exist everywhere on ℝ and go to zero as x→±∞ faster than any reciprocal power of x. ].
In other words we describe the propagation of solitary waves.
For rigorous mathematical results about a single layer of fluid one could refer to the monographs <cit.>. A comprehensive survey, derivation and analysis of the nonlinear water wave models is presented in <cit.>.
§ HAMILTONIAN FORMULATION
The Hamiltonian of the system (<ref>) – (<ref>) will be represented as the total energy of the fluid with density ρ,
H=1/2ρ∫∫_Ω (u^2+w^2)dz dx+ρ g ∫∫ _Ω z dz dx.
It can be written in terms of the variables (η(x,t),φ(x,z,t)), using (<ref>), as
H[η,φ]= 1/2ρ∫_ℝ∫_-h^ηdiv (φ∇φ) dz dx
+ρ g ∫_ℝ∫_-h^η z dz dx.
We introduce the variable ξ, which is defined to be proportional to the potential evaluated on the surface
ξ(x,t):=ρφ(x,η(x,t),t)≡ρϕ(x,t),
and the Dirichlet-Neumann operator G(η) defined by
G(η)ϕ =-η_x (φ_x)_s+(φ_z)_s =(-η_x, 1) · (∇φ)_s=√(1+η_x^2) n_s · (∇φ)_s
where n_s = (-η_x, 1)/√(1+η_x^2) is the outward-pointing unit normal vector (with respect to Ω) to the wave surface.
Applying Green's Theorem (Divergence Theorem) to (<ref>), the Hamiltonian can be written as
H[η, ξ]=1/2ρ∫_ℝξ G(η) ξ dx
+1/2ρ g∫_ℝ (η^2-h^2) dx .
On the bottom the outward-pointing unit normal vector is
n_b=(0, -1) and
n_b · (∇φ)_b =(0, -1) ·(( φ_x)_b, (φ_z)_b)=(0- (φ_z)_b)=0
thus n_b · (∇φ)_b =0 and no bottom-related terms are present in (<ref>). Noting that the term ∫_ℝ h^2(x) dx is a constant and will not contribute to δ H, we renormalize the Hamiltonian to
H[η, ξ]=1/2ρ∫_ℝξ G(η) ξ dx
+1/2ρ g∫_ℝη^2 dx.
The variation of the Hamiltonian can be evaluated as follows. An application of Green's Theorem transforms the following expression to one which involves contributions from the surface and the bottom alone:
δ[ρ/2∫_ℝ∫_-h^η (∇φ)· (∇φ) dz dx]
=ρ∫_ℝ∫_-h^η (∇φ)· (∇δφ) dz dx +1/2ρ∫_ℝ (| ∇φ |^2)_s δη dx
= ρ∫_ℝ∫_-h^ηdiv [(∇φ) δφ] dz dx +1/2ρ∫_ℝ (| ∇φ |^2)_s δη dx
=ρ∫_ℝ((φ_z)_s-(φ_x)_s η_x)(δφ)_s dx
-ρ∫_ℝ (φ_z)_b(δφ)_b dx
+1/2ρ∫_ℝ(| ∇φ |^2)_s δη dx.
Due to (<ref>), the contribution from the term evaluated on the bottom vanishes, thus
δ[ρ/2∫_ℝ∫_-h^η | ∇φ |^2 dz dx] =ρ∫_ℝ((φ_z)_s-(φ_x)_s η_x)(δφ)_s dx
+1/2ρ∫_ℝ (| ∇φ |^2)_s δη dx.
Clearly,
δ[ρ g∫_ℝη^2 dx] =2ρ g∫_ℝηδη dx.
Noting that the variation of the potential on the wave surface is given as
(δφ)_s=δϕ-(φ_z)_sδη,
where
ϕ(x,t) :=φ(x,η(x,t),t),
we write
δ H=ρ∫_ℝ((φ_z)_s-(φ_x)_s η_x)(δϕ-(φ_z)_sδη) dx
+1/2ρ∫_ℝ |∇φ|_s^2 δη dx
+ρ g∫_ℝηδη dx.
Evaluating δ H/ δξ we remember that ρδϕ=δξ and therefore
δ H/δξ=(φ_z)_s-(φ_x)_s η_x=η_t
due to (<ref>). Next we compute
δ H/δη= - ρ((φ_z)_s-(φ_x)_s η_x)(φ_z)_s +1/2ρ |∇φ|_s^2 +ρ g η.
Noting that, using the kinematic boundary condition (<ref>),
- ((φ_z)_s-(φ_x)_s η_x)(φ_z)_s
= - η_t (φ_z)_s
we write
δ H/δη= - ρη_t(φ_z)_s +1/2ρ |∇φ|_s^2 +ρ g η.
Recall that
ξ_t = ρ((φ_t)_s +(φ_z)_s η_t),
and so
δ H/δη = - ξ_t + ρ( (φ_t)_s +1/2 |∇φ|_s^2+ g η)= - ξ_t
by the virtue of the Bernoulli equation (<ref>). Thus we have canonical equations of motion
ξ_t= -δ H/δη, η_t= δ H/δξ
where the Hamiltonian is given by (<ref>). Introducing the variable 𝔲=ξ_x, which is proportional to the horizontal velocity along the free surface, by changing the variable, we can represent (<ref>) in the form
𝔲_t= -(δ H/δη)_x, η_t= - (δ H/δ𝔲)_x,
which can also be expressed in the matrix form
[ 𝔲; η ] _t = - [ 0 1; 1 0 ][ δH/δ𝔲; δH/δη ] _x.
The Hamiltonian can be expressed through the canonical variables 𝔲, η by using the properties of the Dirichlet-Neumann operator, which are introduced in the next section. Thus we have a formulation of the problem involving the surface variables alone.
§ THE DIRICHLET-NEUMANN OPERATOR
We begin this section with some basic properties of the Dirichlet-Neumann operator. The details can be found in <cit.>. The operator can be expanded as
G(η)=∑_j=0^∞ G^(j)(η)
where G^(j)(η)∼ (η/h)^j. The surface waves are assumed to be small, relative to the fluid depth, that is, ε=|η_max|/h≪ 1 is a small parameter, and hence one can expand with respect to |η/h |≪ 1 as follows:
G^(0)= Dtanh(hD) ,
G^(1)= Dη D -G^(0)η G^(0),
G^(2)= -1/2 (D^2η^2 G^(0) + G^(0)η^2 D^2- 2G^(0)η G^(0)η G^(0) ), … .
The operator D=-i∂/∂ x has the eigenvalue k=2π/λ for any given wavelength λ, when acting on monochromatic plane wave solutions proportional to e^ik(x-c(k)t). In the long-wave regime the parameter δ=h/λ≪ 1 is assumed to be small and since hD has an eigenvalue
hk=2π h/λ≪ 1
thus h k is small as well, and one can formally expand in powers of h D (which are of order δ). As a matter of fact, the equations for a single layer of fluid could be written in terms of non-dimensional variables, see for example <cit.>. Then the quantities h,g and c are simply equal to one. In our considerations however we keep track of these quantities explicitly and keep in mind that all they are of order one.
The magnitude of the terms is therefore labeled explicitly by the scale parameters ε and δ. In the long-wave and small-amplitude regime, hD ∼δ≪ 1, (that is, h D is of order δ).
Using the expansion
tanh(hD) =hD-1/3h^3D^3+2/15h^5 D^5+𝒪((hD)^7)
and introducing explicitly the scale parameters, we obtain
G(η) = δ^2 D( h + εη )D -δ^4 D^2[1/3 h^3+ε h^2 η] D^2 + δ^6 2/15h^5 D^6 +𝒪(δ^8, εδ^6, ε^2 δ^4).
In what follows we continue by considering the so-called Boussinesq-type approximation. In essence, this approximation further assumes δ^2 ∼ε, ξ∼δ, where the symbol ∼ means that the quantities are of the same order. The Boussinesq-type equations describe waves traveling simultaneously in opposing directions.
In the leading order of the scale parameters (that is, keeping only the lowest order δ^2 in (<ref>)), the operator (<ref>) is G^(00)=hD^2 and the Hamiltonian (<ref>) is therefore
H^(2)[ 𝔲, η]=1/2∫_ℝh/ρ𝔲^2 dx
+1/2∫_ℝρ g η^2 dx= 1/2∫_ℝ Q^T A Q dx.
It can be represented as a quadratic form for Q:=(𝔲,η)^T with a matrix
A:= [ h/ρ 0; 0 ρ g ].
The vector Q is 2-dimensional,
Q:=( 𝔲,η, )^T≡ (Q_1,Q_2)^T
and the equations (<ref>) under these assumptions are
Q_t=- J A Q_x , J:= [ 0 1; 1 0 ].
The diagonalization of the matrix J A is given by J A= 𝐕𝐂𝐕^-1 for
𝐂=diag(c_1,c_2)=diag(√(gh), -√(gh)),
where c_1 and the c_2 can be regarded as the speeds of the right- and left-moving waves, as we will now see, and
𝐕:= [ ρ√(g/h) -ρ√(g/h); 1 1 ].
We introduce a new variable Z=(Z_1, Z_2)^T, such that Q=𝐕Z. Then the equations (<ref>) become
Z_t+ C Z_x=0, or (Z_i)_t+c_i (Z_i)_x=0.
Thus the Z_i=Z_i(x-c_i t) in this approximation are functions of the corresponding characteristic variables. These functions are localised disturbances (waves) propagating with speeds c_i. We refer to the Z_i as ”propagation modes”. Given the fact that all propagation speeds are different, the disturbances, (or propagation modes) Z_i move with different, opposite speeds. It is reasonable therefore to make the assumption that their interaction is negligible after a certain period in time. This means, that in the higher order approximations of the Hamiltonian, we neglect any products Z_i Z_j when i j. The relationship between the physical variables and the propagation modes Q=𝐕Z, where 𝐕 is given in (<ref>), can be written explicitly in the form
Q_1 =𝔲=V_11Z_1+V_12Z_2=ρ√(g/h)(Z_1-Z_2),
Q_2 = η=Z_1+Z_2.
As a “reference” variable we take η=Z_i: this is the elevation of the wave propagating with wave speed c_i, where i=1 or i=2. From now on we do not write explicitly the index i, that is, we consider the propagation of only one of the two modes, η=Z, moving with speed c. In other words, the other propagation mode is considered to be identically zero. This is possible, since the interaction between both modes is neglected and the modes propagate separately - in this case in opposite directions. Then eq. (<ref>) becomes simply
Q_1 =𝔲=V_1Z,
Q_2 = η=Z.
In order to take into account nonlinear terms, we expand the Hamiltonian (<ref>) in the scale parameter ε, taking into account the assumptions of the Boussinesq approximation. Using the expansion for the corresponding Dirichlet-Neumann operator (<ref>), keeping only terms of order ε ^4 we obtain:
H[Q]=ε^2 H^(2) +ε^3 H^(3)[Q] +ε^4 H^(4)[Q]+𝒪(ε^5),
where
H^(3)[Q]= -1/2∫_ℝh^3/3ρ𝔲_x^2 dx +1/2∫_ℝ1/ρη𝔲^2 dx,
H^(4)[Q]= 1/2∫_ℝ2h^5/15ρ𝔲_xx^2 dx -
1/2∫_ℝh^2/ρη𝔲_x^2 dx .
Taking into account the fact that the variables η, 𝔲 are both of order ε, the equations of motion (<ref>) are
𝔲_t =-ρ g η_x-ε/ρ𝔲𝔲_x + ε^2 h^2/ρ𝔲_x𝔲_xx ,
η_t = -h/ρ𝔲_x-ε h^3/3ρ𝔲_xxx-ε/ρ(η𝔲)_x
-ε^2 2h^5/15 ρ u_xxxxx-ε^2 h^2/ρ (η𝔲_x )_xx.
Now, our aim is to describe the time evolution of η=Z with a single equation. To this end we wish to extend the linear relation 𝔲=V_1Z in (<ref>) to a more complex one, which is suggested by the form of the nonlinearities in H^(3), H^(4) and the equations,
Q_1=𝔲= V_1Z + ε (α Z^2 + β Z_xx) +ε^2( γ Z^3 + μ Z Z_xx + ν Z_x^2 + θ Z_xxxx) ,
Q_2= η = Z,
where α,β, γ, μ,ν, θ are yet unknown constant coefficients. This relation is in fact an algebraic-differential constraint between the two Hamiltonian variables 𝔲 and η, which effectively reduces the phase space of the Hamiltonian system by a factor of two.
The time derivative of 𝔲 according to (<ref>) is
𝔲_t=(V_1 + ε ( 2 α Z + β∂_x^2) + ε^2(3γ Z^2+ μ Z_xx+μ Z ∂_x ^2 + 2ν Z_x ∂_x+θ∂_x^4 )) Z_t=:Ṽ Z_t,
where
Ṽ:=V_1 + ε ( 2 α Z + β∂_x^2) + ε^2(3γ Z^2+ μ Z_xx+μ Z ∂_x ^2 + 2ν Z_x ∂_x+θ∂_x^4 )
is a self-adjoint differential operator.
Inserting (<ref>) in (<ref>) leads to a system which involves only the Z variable and the unknown constants α, β, ... of the form
𝔲_t ≡Ṽ Z_t= f_1[Z],
η_t ≡ Z_t= f_2[Z],
where
f_1[Z]= -ρ g Z_x-ε/ρV_1^2 Z Z_x
-ε^2 3 α V_1/ρ Z^2 Z_x + ε^2
h^2 V_1^2 - β V_1/ρZ_x Z_xx-ε^2 β V_1/ρ Z Z_xxx + 𝒪(ε^3),
f_2[Z]= -h/ρ V_1Z_x-ε2/ρ(h α +V_1) Z Z_x -ε h/ρ( β+ h^2/3 V_1) Z_xxx
-ε^2( θ h/ρ + h^3β/3 ρ + 2h^5 V_1/15 ρ) Z_5x - ε^2 3(hγ+α)/ρ Z^2 Z_x
- ε^2 h μ + 2h ν +2α h^3 +β+3h^2 V_1/ρZ_x Z_xx
-ε^2 (hμ + β+ h^2 V_1/ρ+ 2α h^3/3 ρ)Z Z_xxx+ 𝒪(ε^3).
The equations (<ref>) are compatible iff f_1[Z]≡Ṽ f_2[Z]. This leads to a lengthy expression for Ṽ f_2[Z], which can be truncated up to the terms of order ε^2. The comparison with f_1[Z] in (<ref>) gives rise to equations, generated by matching the coefficients of the like terms. This enables the determination of the unknown constants as follows:
Z Z_x term → α=-1/4hV_1,
Z_xxx term → β =-h^2/6V_1,
Z^2 Z_x term → γ=-1/8h^2 V_1,
Z_xxxxx term → θ=-h^4/40V_1,
Z Z_xxx term → μ=-h/4V_1,
Z_x Z_xx term → ν=-9h/16V_1.
From (<ref>), the equation describing the evolution of the propagating mode Z takes the form Z_t-f_2[Z]=0. From (<ref>) using (<ref>) – (<ref>), this can be expressed as
Z_t+ h V_1/ρ Z_x+εh^3 V_1/6ρZ_xxx+ε3V_1/2ρ Z Z_x+ε^2 19/360h^5 V_1/ρ Z_5x
-ε^2 3/8V_1/hρ Z^2 Z_x +
ε^2 h^2 V_1/ρ( 23/24 Z_x Z_xx+5/12 Z Z_xxx)=0.
Taking into account the relations c=hV_1/ρ=±√(gh) given by (<ref>), (<ref>) and η=Z, we have
η_t + cη_x +εc h^2 /6η_xxx+ε3c/2hηη_x
+ ε^2 19 ch^4/360η_5x -ε^2 3c/8h^2η^2 η_x +ε^2 ch
( 23/24η_x η_xx+5/12ηη_xxx)=0.
This is a higher order KdV-type equation (HKdV). The expression for the other variable 𝔲 up to 𝒪(ε^2) is then
𝔲=V_1( Z-ε/4h Z^2 -ε h^2/6 Z_xx - ε^2/8h^2 Z^3-
ε^2 h/4Z Z_xx -ε^2 9h/16Z_x^2 -ε^2 h^4/40Z_4x) ,
equivalently, using as before c=hV_1/ρ and η=Z, we have the expression
𝔲 =ρ c/h(η-ε/4hη^2 -ε h^2/6η_xx
- ε^2/8h^2η^3-
ε^2 h/4ηη_xx -ε^2 9h/16η _x^2 -ε^2 h^4/40η _4x),
where the wavespeed has two possible values c=±√(gh), due to (<ref>).
A HKdV equation for 𝔲 can also be derived, but its coefficients will be different.
This can be achieved for example with a similar procedure, where the reference variable is taken to be
𝔲≡ V_1 Z and η is expressed in terms of Z by a relation of the type
η= Z+ε (α' Z^2 + β' Z_xx) +ε^2( γ' Z^3 + μ' Z Z_xx + ν' Z_x^2 + θ' Z_xxxx).
The equation (<ref>) appears in a number of previous studies involving models beyond the KdV approximation, see for example <cit.>. This equation in general is not integrable; its relation to the integrable equations with the same type of nonlinear and dispersive terms will be established in the next section.
§ NEAR-IDENTITY TRANSFORMATION AND RELATION TO INTEGRABLE EQUATIONS
In this section we establish a relation between two HKdV equations of the type (<ref>) by employing the so-called Near-Identity Transformation (NIT) of the dependent variable η(x,t).
The transformation generates a HKdV with coefficients, different from the coefficients of the original HKdV, however, the transformed equation can be matched to (one of) the three known integrable HKdV equations, whose coefficients have particular values.
Let us suppose that η(x,t) satisfies the equation
η_t+ cη_x+ε A ηη_x + ε B η_xxx
+ε^2 M η^2 η_x
+ε^2 Q η_5x+ε^2(N_1 η_x η_xx+N_2 ηη_xxx)=0
for some constants c,A,B,M,Q,N_1,N_2, which is a general form of equation (<ref>). Let us consider the following NIT relating η of (<ref>) to another function E(x,t):
η(x,t)=E+ε( a_1E^2 + a_2 E_xx+a_3 E_x ∂_x^-1E),
where a_i are 3 constant parameters and the inverse differentiation means integration. This transformation is also known as the Kodama transform <cit.>, and appears in previous studies like <cit.>.
From (<ref>) we obtain by differentiation
η_t + c η_x = E_t+ c E_x + 𝒪(ε)
and as far as obviously from (<ref>) η_t + c η_x = 𝒪(ε), then
E_t+ c E_x = 𝒪(ε).
Again, from (<ref>) and using in addition (<ref>) we have
η_t + c η_x + ε A ηη_x + ε B η_xxx = E_t+ c E_x + ε A E E_x + ε B E_xxx+ 𝒪(ε^2)
and since from (<ref>) η_t + c η_x + ε A ηη_x + ε B η_xxx = 𝒪(ε^2), then
E_t+ c E_x + ε A E E_x + ε B E_xxx= 𝒪(ε^2).
In other words, the NIT does not change the original equation up to the terms of order ε.
If (<ref>) is applied to the full equation (<ref>), then after some similar straightforward calculations taking into account (<ref>), one can verify that up to terms of order ε^2 the associated evolution equation for E is
E_t+ c E_x+ε A E E_x + ε B E_xxx
+ε^2 M' E^2 E_x +ε^2 Q' E_5x+ε^2(N'_1 E_x E_xx+N'_2 E E_xxx)= 𝒪(ε^3),
where the 𝒪(ε^2) terms have the following coefficients (all terms involving ∂_x ^-1E miraculously cancelling out):
M' =M+A(a_1+1/2a_3),
Q' =Q,
N_1' =N_1+6Ba_1-2Aa_2+3Ba_3,
N_2' =N_2+3Ba_3.
The transformation (<ref>) could be used for example to relate the solutions of the non-integrable HKdV equation (<ref>) for the physical variable η to the solution E(x,t) of some integrable equation. Integrable equations have the advantage of possessing so-called soliton solutions, which are usually stable (in time) solitary waves; they interact elastically and recover their initial shape after interaction. The soliton solutions can be obtained explicitly by various methods, such as the inverse scattering method <cit.>.
The relation of (<ref>) to the three known integrable equations of HKdV type can be shown as follows. The known integrable HKdV-type equations are
E'_τ+E'_5x'+2(6b+1) E'_x'E'_x'x'+4(b+1) E' E'_3x'+20b (E')^2 E'_x'=0,
where b=3/2 corresponds to the second equation from the KdV integrable hierarchy, while b=4 and b=1/4 are the other two integrable cases, known as the Kaup-Kupershmidt (KK) equation <cit.> (b=4) and the
Sawada-Kotera equation <cit.> (b=1/4), which appears also in <cit.>. The soliton solutions
of the KK equation are obtained in <cit.>. The classification appears in <cit.> on p. 170, where the two equations b=4 and b=1/4 are given; the b=3/2 case belongs to the KdV hierarchy and is also a symmetry of the KdV equation - it is mentioned on p. 117.
Equation (<ref>) can also be rewritten in several equivalent forms. With a Galilean transformation a linear C_1 E'_x' term can be generated. The shift E'→ E'+C_2 (where C_1, C_2 are arbitrary constants) leads to another form of this integrable family of equations, with a new time-like variable t', see the details in <cit.>:
E'_t' +(C_1 +20bC_2^2) E'_x'+4(b+1)C_2 E'_3x'+40bC_2 E' E'_x'
+E'_5x'+2(6b+1) E'_x'E'_x'x'+4(b+1) E' E'_3x'+20b (E')^2 E'_x'=0.
Following the re-scaling E'=εκ E, and
x'=x/(√(ε)ϑ), t'=t/(√(ε)ϑ),
and introducing a new constant c'=C_1 +20bC_2^2, the above equation (<ref>) becomes (see <cit.> for details)
E_t +c' E_x+ε 4(b+1) C_2 ϑ^2 E_3x+ε 40bC_2 κ E E_x
+ε ^2 (ϑ ^4 E_5x+2(6b+1) ϑ^2 κ E_xE_xx+4(b+1)ϑ^2 κ E E_3x+20b κ^2 E^2 E_x)=0.
In order for the coefficients of (<ref>) to match the coefficients of the integrable equation (<ref>) we require
c' =c,
4(b+1) C_2 ϑ^2 = B,
40bC_2 κ =A ,
ϑ^4 =Q'=Q.
In so doing, we obtain
ϑ=Q^1/4, C_2=B/(4(b+1)√(Q)), κ=A/(40 b C_2)=(b+1)A√(Q)/(10b B),
with the remaining matching conditions (<ref>) – (<ref>) determining the constants a_i:
4(b+1)ϑ^2 κ =N_2'=N_2+3Ba_3 , a_3=2(b+1)^2AQ/(15bB^2)-N_2/(3B).
Consequently, we obtain
N_2'=2(b+1)^2AQ/(5bB).
For the remaining coefficients we obtain
20bκ^2 =M'=M+Aa_1+Aa_3/2,
a_1 =(4(b + 1)^2 A^2Q + 5bA B N_2 - 30b M B^2)/(30bAB^2),
M' =(b + 1)^2 A^2 Q/(5bB^2),
2(6b+1)ϑ^2 κ =N_1'=N_1+ 6Ba_1-2Aa_2+3Ba_3 ,
a_2 =((b + 1)QA^2 + b N_1 B A - 6 b M B^2)/(2 B b A^2),
N_1' =(6 b + 1)(b + 1)QA/(5 b B).
Thus we have achieved the following. The non-integrable physical model is given by equation (<ref>) with coefficients
A = 3c/(2h), B=c h^2/6, Q = 19 c h^4/360,
M =-3c/(8h^2), N_1 = 23 c h/24, N_2 =5 c h/12.
By employing a NIT (<ref>) this equation is transformed to an integrable one (<ref>)
with coefficients given by (<ref>) – (<ref>):
Q' =Q=19 c h^4/360, M'=171 (b + 1)^2 c/(200 b h^2),
N_1' =19(6b + 1)(b + 1)ch/(200 b), N_2'=19(b + 1)^2 ch/(100 b).
Moreover, the parameters of the NIT (<ref>) are
a_1 = (57 b^2 + 214 b + 57)/(150 b h), a_2=(202b + 57)h^2/(360 b),
a_3=(57 b^2 - 11b + 57)/(150 b h).
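As a consistency check of the above algebra (not part of the original derivation), the quoted expressions can be verified by direct symbolic substitution; the following SymPy sketch assumes only the coefficient formulas listed above.

# Symbolic verification that substituting A, B, Q, M, N1, N2 of the physical model
# into the matching formulas reproduces the quoted a1, a2, a3 and M', N1', N2'.
import sympy as sp

b, c, h = sp.symbols('b c h', positive=True)

A, B, Q = 3*c/(2*h), c*h**2/6, 19*c*h**4/360
M, N1, N2 = -3*c/(8*h**2), 23*c*h/24, 5*c*h/12

# NIT parameters from the matching conditions
a3 = 2*(b + 1)**2*A*Q/(15*b*B**2) - N2/(3*B)
a1 = (4*(b + 1)**2*A**2*Q + 5*b*A*B*N2 - 30*b*M*B**2)/(30*b*A*B**2)
a2 = ((b + 1)*Q*A**2 + b*N1*B*A - 6*b*M*B**2)/(2*B*b*A**2)

# Coefficients of the transformed (integrable-matched) equation
Mp  = (b + 1)**2*A**2*Q/(5*b*B**2)
N1p = (6*b + 1)*(b + 1)*Q*A/(5*b*B)
N2p = 2*(b + 1)**2*A*Q/(5*b*B)

checks = {
    "a1":  a1  - (57*b**2 + 214*b + 57)/(150*b*h),
    "a2":  a2  - (202*b + 57)*h**2/(360*b),
    "a3":  a3  - (57*b**2 - 11*b + 57)/(150*b*h),
    "M'":  Mp  - 171*(b + 1)**2*c/(200*b*h**2),
    "N1'": N1p - 19*(6*b + 1)*(b + 1)*c*h/(200*b),
    "N2'": N2p - 19*(b + 1)**2*c*h/(100*b),
}
assert all(sp.simplify(expr) == 0 for expr in checks.values())
print("All quoted coefficient identities hold for arbitrary b.")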
We note that both the coefficients and the NIT parameters depend on the parameter b, and the three known integrable equations of this type correspond to the three possible values: b=3/2, b=4 and b=1/4.
§ DISCUSSION
In this chapter we have presented in detail the procedure for the derivation of the higher order KdV model (HKdV) for surface waves and its relation to three integrable equations. These integrable models are solvable by the inverse scattering method and possess soliton solutions. Another interesting problem concerning the HKdV equation is its connection to the Camassa-Holm-type equations; however, this has not been explored in the present contribution. The methods illustrated here can be extended in different directions, for example, for the derivation of long-wave models for internal waves.
The zero-vorticity assumption is essential since in this case the surface dynamics allows an analytic continuation to the fluid domain. The results could possibly be extended in a similar way to the case of constant vorticity, and this is work in progress. Arbitrary nonzero vorticity, however, leads to complicated interactions between the physical quantities at the surface and in the fluid volume, see for example <cit.>.
§.§ Acknowledgements
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 21/FFP-A/9150, and from discussions, undertaken at the workshop Nonlinear Dispersive Waves held at University College Cork in April 2023.
Lan J.L. Bona, D. Lannes, J.-C. Saut, Asymptotic models for internal waves,
Journal de Mathématiques Pures et Appliquées, 89 (2008) 538–566,
<https://doi.org/10.1016/j.matpur.2008.02.003>
CDG P. Caudrey, R. Dodd, J. Gibbon, A new hierarchy of Korteweg-de Vries equations. Proc.
R. Soc. A 351 (1976) 407–422.
<https://doi.org/10.1098/rspa.1976.0149>
CompelliIvanov1 A. Compelli and R. Ivanov, On the dynamics of internal waves interacting with the Equatorial Undercurrent. J. Nonlinear Math. Phys. 22 (2015) 531–539.
<https://doi.org/10.1080/14029251.2015.1113052>
arXiv:1510.04096 [math-ph].
CompelliIvanov2 A. Compelli and R. Ivanov, The dynamics of flat surface internal geophysical waves with currents. J. Math. Fluid Mech. 19 (2017) 329 – 344.
<https://doi.org/10.1007/s00021-016-0283-4>
arXiv:1611.06581 [physics.flu-dyn]
Constb A. Constantin, Nonlinear Water Waves with Applications to Wave-Current Interactions and Tsunamis (CBMS-NSF Regional Conference Series in Applied Mathematics), Publisher: Society for Industrial and Applied Mathematics; 1st edition (December 1, 2011)
CI A. Constantin and R. Ivanov, A Hamiltonian approach to wave-current interactions in two-layer fluids. Phys. Fluids 27 (2015) 086603.
<https://doi.org/10.1063/1.4929457>
CIM A. Constantin, R.I. Ivanov and C.I. Martin, Hamiltonian formulation for wave-current interactions in stratified rotational flows. Arch. Rational Mech. Anal. 221 (2016) 1417–1447.
<https://doi.org/10.1007/s00205-016-0990-2>
CIP A. Constantin, R. Ivanov and E. Prodanov, Nearly-Hamiltonian
structure for water waves with constant vorticity. J. Math. Fluid Mech. 10 (2008) 224–237. <https://doi.org/10.1007/s00021-006-0230-x>
arXiv:math-ph/0610014.
CJ08 A. Constantin, R. Johnson, On the Non-dimensionalisation, scaling and
resulting interpretation of the classical governing equations for water waves, Journal of Nonlinear Mathematical Physics 15, Supplement 2 (2008), 58–73,
<https://doi.org/10.2991/jnmp.2008.15.s2.5>
CraigGroves W. Craig and M. Groves, Hamiltonian long-wave approximations to the water-wave problem. Wave Motion 19 (1994) 367–389.
<https://doi.org/10.1016/0165-2125(94)90003-5>
CraigGuyenneKalisch W. Craig, P. Guyenne and H. Kalisch, Hamiltonian long wave expansions for free surfaces and interfaces. Comm. Pure Appl. Math. 58(12) (2005) 1587–1641.
<https://doi.org/10.1002/cpa.20098>
CraigSulem W. Craig and C. Sulem, Numerical simulation of gravity waves. J. Computat. Phys. 108 (1993) 73–83. <https://doi.org/10.1006/jcph.1993.1164>
CuIv J. D. Cullen, R. I. Ivanov, Hamiltonian description of internal ocean waves with Coriolis force, Communications on Pure and Applied Analysis, 21, (2022) 2291–2307;
<https://doi.org/10.3934/cpaa.2022029>
arXiv:2203.13940 [physics.flu-dyn]
CCRI C. Curtin and R. Ivanov, The Lagrangian formulation for wave motion with a shear current and surface tension, J. Math. Fluid Mech. 25:87 (2023)
<https://doi.org/10.1007/s00021-023-00831-6 >
arXiv:2406.00202 [physics.flu-dyn]
DGH H. R. Dullin, G. A. Gottwald, D. D. Holm,
Camassa–Holm, Korteweg–de Vries-5 and other asymptotically equivalent equations for shallow water waves,
Fluid Dynamics Research 33 (2003) 73–95,
<https://doi.org/10.1016/S0169-5983(03)00046-7>
VSG V. S. Gerdjikov, On Kaup-Kupershchmidt–type equations and their soliton solutions,
Il Nuovo Cimento 38 C (2015) 161,
<https://doi.org/10.1393/ncc/i2015-15161-7>
IKI23 D. Ionescu-Kruse, R. Ivanov, Nonlinear two-dimensional water waves with arbitrary vorticity, J. Differential Equations 368 (2023), 317–349; arXiv:2409.00446 [math.AP]
I07 R. Ivanov, Water waves and integrability, Phil. Trans. R. Soc. A (2007) 365, 2267–2280.
<https://doi.org/10.1098/rsta.2007.2007>
arXiv:0707.1839 [nlin.SI]
I23 R.I. Ivanov, On the modelling of short and intermediate water waves,
Applied Mathematics Letters 142 (2023) 108653,
<https://doi.org/10.1016/j.aml.2023.108653>
arXiv:2405.19344 [nlin.PS]
Johnson R. S. Johnson, A Modern Introduction to the Mathematical Theory of Water Waves. Cambridge University Press, 1997.
Adkdv5 A. Karczewska, P. Rozmej and E. Infeld, Shallow-water soliton dynamics beyond the Korteweg-de Vries equation, Phys. Rev. E 90 (2014) 012907.
<https://doi.org/10.1103/PhysRevE.90.012907>
Kod1 Y. Kodama, On integrable systems with higher order corrections,
Physics Letters A 107 (1985) 245–249,
<https://doi.org/10.1016/0375-9601(85)90207-5>
Kod Y. Kodama, Normal forms for weakly dispersive wave equations, Phys. Lett. A 112 (1985) 193–196.
<https://doi.org/10.1016/0375-9601(85)90500-6>
KK D.J. Kaup, On the Inverse Scattering Problem for Cubic Eigenvalue Problems of the Class ψ_xxx+ 6Qψ_x + 6Rψ = λψ, Stud. Appl. Math 62 (1980) 189–216.
<https://doi.org/10.1002/sapm1980623189>
Lanb D. Lannes, The Water Waves Problem, Mathematical Surveys and Monographs, vol.188, American Mathematical Society, Providence, 2013.
Mar T. Marchant, N. Smyth, The extended Korteweg-de Vries equation and the resonant flow of a fluid over topography. Journal of Fluid Mechanics, 221 (1990) 263–287.
<https://doi.org/10.1017/S0022112090003561>
Mikh A.V. Mikhailov, A.B. Shabat and V.V.Sokolov, The Symmetry Approach to Classification of Integrable Equations. In: Zakharov, V.E. (eds) What Is Integrability?. Springer Series in Nonlinear Dynamics. Springer, Berlin, Heidelberg 1991. <https://doi.org/10.1007/978-3-642-88703-1_4>
ZMNP S.P. Novikov, S.V. Manakov, L.P. Pitaevsky and V.E. Zakharov, Theory of Solitons: the Inverse Scattering Method, New York: Plenum, 1984.
SK K. Sawada, T. Kotera, A method for finding N-soliton solutions of the KdV equation and
KdV-like equation. Progr. Theor. Phys. 51 (1974) 1355–1367.
<https://doi.org/10.1143/PTP.51.1355>
Zakharov V.E. Zakharov, Stability of periodic waves of finite amplitude on the surface of a deep fluid. J. Appl. Mech. Tech. Phys. 9 (1968) 86–89.
<https://doi.org/10.1007/BF00913182>
ZhiSig Li Zhi, N.R. Sibgatullin, An improved theory of long waves on the water surface,
Journal of Applied Mathematics and Mechanics 61 (1997) 177–182,
<https://doi.org/10.1016/S0021-8928(97)00024-5>
|
http://arxiv.org/abs/2409.02330v1 | 20240903225346 | Internal Dynamics of Multiple Populations in 28 Galactic Globular Clusters: A Wide-Field study with Gaia and the Hubble Space Telescope | [
"Giacomo Cordoni",
"Luca Casagrande",
"Antonino Milone",
"Emanuele Dondoglio",
"Alessandra Mastrobuono-Battisti",
"Sohee Jang",
"Anna Marino",
"Edoardo Lagioia",
"Maria Vittoria Legnardi",
"Tuila Ziliotto",
"Fabrizio Muratore",
"Vernica Mehta",
"Elena Lacchin",
"Marco Tailo"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
Internal Dynamics of Multiple Populations in 28 Galactic Globular Clusters: A Wide-Field study with Gaia and the Hubble Space Telescope
========================================================================================================================================
§ ABSTRACT
We present the first comprehensive analysis of the internal dynamics of multiple stellar populations (MPs) in 28 Galactic Globular Clusters (GCs) across a wide field of view, extending from the innermost regions to the clusters' outskirts. Using astro-photometric catalogs from ground-based observations, Gaia, and the Hubble Space Telescope (HST), we identify first- (1P) and second-population (2P) stars, and study the internal dynamics of MPs using high-precision Gaia DR3 and HST proper motions. Our results reveal that while the 1P transitions from isotropy to slight tangential anisotropy toward the outer regions, 2P stars become increasingly radially anisotropic beyond the half-light radius. We also explore the connection between the dynamics of MPs and the clusters' structural and dynamical properties, finding statistically significant differences in the anisotropy profiles of dynamically young and non-relaxed clusters, particularly beyond the 1-2 half-light radii. In these regions, 1P stars transition from isotropic to slightly tangentially anisotropic motion, while 2P stars become more radially anisotropic.
In contrast, dynamically older clusters, with mixed MPs, exhibit weaker relative differences. Furthermore, clusters with orbits closer to the Galactic center exhibit larger dynamical differences between 1P and 2P stars than those with larger peri-Galactic radii. These findings are consistent with a scenario where 2P stars form in a more centrally concentrated environment, where the interaction with the Milky Way tidal field plays a crucial role in the dynamical evolution of MPs, especially of 1P.
stars: Hertzsprung-Russell and colour-magnitude diagrams; stars: kinematics and dynamics; Galaxy: globular clusters: general; Galaxy: kinematics and dynamics
§ INTRODUCTION
More than two decades of Hubble Space Telescope (HST) observations have revealed that nearly all Galactic Globular Clusters (GCs) host Multiple Populations (MPs) with at least two main groups of stars: first-population (1P) and second-population (2P) stars <cit.>. These groups exhibit specific chemical compositions, with 2P stars being, for example, enriched in sodium, nitrogen and helium and depleted in carbon and oxygen <cit.>. Although several models have been proposed to explain the formation of MPs, none can simultaneously match the numerous observational constraints <cit.>. Some scenarios predict that 2P stars formed from the ejecta of short-lived massive 1P stars <cit.>. As the expelled gas accumulated toward the cluster center, 2P stars would dominate the composition of the central cluster regions. Alternatively, the chemical composition of 2P stars could be the result of the accretion of chemically enriched material emitted from 1P stars in binary configurations <cit.>.
While some clusters indeed exhibit a more centrally concentrated 2P, there are numerous examples where 1P and 2P stars are spatially mixed, and even clusters with a more centrally concentrated 1P are observed <cit.>. Recent studies suggest that the extent of this mixing between 1P and 2P stars may correlate with the dynamical age of the cluster, indicating that dynamically older clusters tend to show a more complete mixing of MPs. Nonetheless, despite the numerous observational constraints, a comprehensive picture of how MPs form, interact and evolve over time remains elusive.
In recent years, a promising new path has emerged: the study of the internal kinematics of cluster stars. This approach is crucial for uncovering the physical processes behind the formation of multiple populations. Specifically, N-body simulations indicate that the dynamical evolution of 2P stars formed in a different environment should differ significantly from that of the more spatially extended 1P stars. Such differences may still be detectable in present-day GC kinematics <cit.>, provided the populations are not completely relaxed.
Over the past decade, a number of studies have analyzed the spatial distribution and the internal kinematics of MPs in various GCs, using HST photometry and proper motions <cit.>, MUSE/APOGEE line-of-sight (LoS) velocities <cit.>, ground-based photometry coupled with Gaia proper motions, or a combination of these methods <cit.>.
Such works presented the first evidence of significant dynamical differences between 1P and 2P stars in some GCs, qualitatively aligning with N-body simulations' results. However, the limited number of clusters and the small field of view prevented a complete and detailed characterization of the phenomenon.
In this work we aim to overcome the limitations of previous studies by investigating the internal dynamics of MPs in a sample of 28 GCs, combining photometric information from HST <cit.>, ground-based <cit.> and Gaia <cit.> observations with accurate HST <cit.> and Gaia <cit.> proper motions. This work presents the first comprehensive dynamical investigation of MPs in a large sample of clusters, extending from the central arcminutes to nearly the tidal radius. The paper is structured as follows: in Sec. <ref> we introduce the dataset and the MP selection process. Section <ref> presents the study of the internal dynamics of the analyzed clusters and the global dynamical profiles of MPs. Finally, the discussion and summary are presented in Sec. <ref> and <ref>, respectively.
§ DATA
To investigate the internal dynamics of MPs in Galactic GCs over a wide field of view, we combined HST photometry <cit.>, ground-based UBVI photometry <cit.>, and Gaia XP spectro-photometry <cit.>[Gaia XP spectra could be effectively used in only 4 GCs, namely NGC 0104, NGC 3201, NGC 6121 and NGC 6752.] with HST <cit.> and Gaia DR3 proper motions <cit.>. We refer to the different works for a detailed description of the astro-photometric datasets. The total sample used in this work consists of 28 GCs for which we could reliably select MP stars over a wide field of view.
To identify 1P and 2P stars among RGB stars, we have made use of the Chromosome Maps (ChMs) presented in <cit.> for HST data, photometric diagrams made with the C_UBI index[defined as C_UBI = (U-B) -(B-I)<cit.>], <cit.> for ground-based observations, and the ChMs introduced in <cit.> from Gaia spectro-photometry. ChMs effectively and accurately separate MPs in GCs, with 1P stars defining a clump of stars centered around (0, 0), and 2P stars spread over larger absolute values of Δ_BI and Δ_C_UBI (Δ_F275W, F814W and Δ_C_F275W, F336W, F438W in HST filters). In the next section, we discuss in detail the procedure used to identify 1P and 2P stars in the ground-based ChMs of 28 GCs, whereas we refer to <cit.> and <cit.> for the identification of MPs in HST and Gaia XP photometry.
The final selection of 1P and 2P stars for each cluster and the analyzed field of view are shown in App. <ref> as online supplementary material. [The sample of analyzed clusters also includes type-II GCs, i.e. clusters with internal variations of heavy elements <cit.>. However, considering the small number of stars belonging to the anomalous populations, i.e. enriched stars, we only study stars with light-element abundance variations.]
§.§ Multiple Populations from ground-based photometry
In a nutshell, the I vs. B-I and I vs. C_UBI=U - 2B + I CMDs of red giant branch (RGB) stars have been used to derive the verticalized Δ_BI and Δ_C_UBI colors, which are shown on the x and y axes of the ChMs, respectively. We refer to <cit.> for a detailed discussion of how these quantities have been derived.
To select 1P and 2P stars from the ChMs we adapted the procedure introduced in <cit.> on HST ChMs to our ground-based ChMs, as done in Jang et al., in preparation. Figure <ref> illustrates the procedure for the cluster NGC 2808. Briefly, we first selected by eye the bulk of 1P stars among the stars with (Δ_BI, Δ_C_UBI) ∼ (0, 0), and fitted a straight line to 1P stars (black solid line in Fig. <ref>, leftmost panel). We then rotated the ChMs by the angle θ defined by the slope of the 1P best-fit line. We refer to the abscissa and ordinate of the new, rotated reference frame as Δ_1 and Δ_2 (Fig. <ref>, middle panel). By construction, 1P stars define a horizontal distribution in Δ_2 vs. Δ_1, and we can use the Δ_2 distribution to disentangle 1P and 2P stars. We used the python package <cit.> and employed Gaussian Mixture Models (GMM) to fit two Gaussians to the Δ_2 distribution. We remind here that GMM is applied directly to the data points, so that the results do not depend on the bin choice.
Finally, 1P stars (red points in the rightmost panel) have been selected as stars with Δ_2 < Δ_2^sep, with Δ_2^sep being the intersection between the two best-fit Gaussians, indicated by the horizontal line in the middle panel of Fig. <ref>. 2P stars are marked with blue points. Additionally, stars with unusual values of Δ_1 and Δ_2 have been excluded from the selection.
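For illustration, a minimal sketch of this two-Gaussian decomposition is given below. It uses scikit-learn's GaussianMixture as a stand-in for the (cited) package actually employed, and the variable name delta2 for the rotated-ChM ordinate is our own convention.

# Sketch: split 1P/2P stars by fitting two Gaussians to the rotated-ChM ordinate.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_populations(delta2):
    """delta2: rotated-ChM ordinate of the RGB stars (1D array)."""
    gmm = GaussianMixture(n_components=2, n_init=10).fit(delta2.reshape(-1, 1))
    mu = gmm.means_.ravel()
    sig = np.sqrt(gmm.covariances_.ravel())
    w = gmm.weights_
    lo, hi = np.argsort(mu)                       # assume 1P sits at lower delta2
    # separation value: intersection of the two weighted Gaussians between the means
    grid = np.linspace(mu[lo], mu[hi], 2000)
    pdf = lambda k: w[k] * np.exp(-0.5*((grid - mu[k])/sig[k])**2) / (sig[k]*np.sqrt(2*np.pi))
    delta2_sep = grid[np.argmin(np.abs(pdf(lo) - pdf(hi)))]
    return delta2 < delta2_sep, delta2_sep        # boolean 1P mask and separation value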
The ground-based ChMs with selected 1P and 2P stars of the 28 analyzed Galactic GCs are shown in App. <ref> as online supplementary material.
Additionally, we employed the V vs. C_UBI pseudo-CMD to extend the regions analyzed in <cit.>, including stars at larger distances from the cluster centers. We refer to <cit.> for a detailed description of the selection of MPs.
§.§ Multiple populations from HST photometry
To study MPs among RGB stars in the innermost cluster regions we exploited the ChMs and selection carried out in <cit.>. In a nutshell, HST ChMs have been derived by means of the verticalized m_F814W vs. m_F275W - m_F814W (for the abscissa) and m_F814W vs. C_F275W, F336W, F438W[C_F275W, F336W, F438W = m_F275W - 2 m_F336W + m_F438W] CMDs (on the ordinate). 1P and 2P stars have been selected with the procedures detailed in <cit.>. Each HST ChM has then been cross matched with the proper motion catalogs of <cit.>.
§.§ Multiple populations from Gaia XP synthetic photometry
The recent Gaia DR3 <cit.> publicly released low-resolution spectra, namely XP spectra, for ∼200 million sources <cit.>, making it possible to derive synthetic photometry in virtually any photometric system <cit.>. In a recent work, <cit.> exploited Gaia XP spectra to compute synthetic photometry in special filters designed to maximize the separation between 1P and 2P stars along the RGB of GCs. Their analyzed field of view extends well beyond that covered by the ground-based photometric dataset of <cit.>, reaching the clusters' tidal radii. We refer to <cit.> for a detailed description of the procedure used to derive and separate the populations, and for the validation of such identification. In a nutshell, exploiting synthetic spectra with the typical composition of 1P and 2P stars, Mehta and collaborators defined a series of photometric filters to maximize the separation between 1P and 2P stars in 5 Galactic GCs. To validate their MP identification, <cit.> compared the classification with available spectroscopic information, finding consistent results. In this work, we exploit their identification to extend our ground-based dataset and investigate the dynamics of 1P and 2P stars up to the clusters' tidal radii.
§ INTERNAL DYNAMICS OF MULTIPLE POPULATIONS
After carefully selecting MPs in the 28 analyzed clusters, we exploited HST <cit.> and Gaia DR3 <cit.> proper motions to investigate the internal dynamics of stars belonging to different populations. Specifically, we followed the procedure described in <cit.> and <cit.>[see also <cit.>.]. We first transformed the celestial coordinates and their proper motion components (α, δ, μ_α*, μ_δ)[μ_α*=μ_αcosδ] into a Cartesian reference frame (x, y, μ_x, μ_y), using the orthographic projection <cit.>, and
then projected the proper motions onto their sky-projected radial and tangential components, μ_R and μ_T, i.e. the components of each star's proper motion along and perpendicular to the direction to the cluster center, with μ_R positive for outward motion and negative for inward motion, and μ_T positive when counterclockwise. Here μ_x and μ_y indicate the proper motion of each star relative to the bulk motion of the cluster, determined by <cit.>.
The uncertainties on the radial and tangential components of the proper motions have been determined from the uncertainties on μ_α* and μ_δ, accounting for the correlation between the proper motions. Finally, the radial proper motions have been corrected for the perspective expansion/contraction due to the bulk motion of the cluster along the line of sight, as in <cit.>.
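A minimal sketch of this projection is given below; the explicit sign convention for μ_T (positive counterclockwise) depends on the orientation of the projected axes and should be regarded as an assumption of the illustration.

# Sketch: project relative proper motions onto the radial and tangential directions.
import numpy as np

def radial_tangential(x, y, pmx, pmy):
    """x, y: positions relative to the cluster centre (orthographic projection);
    pmx, pmy: proper motions relative to the cluster bulk motion (same axes)."""
    r = np.hypot(x, y)
    mu_R = (x*pmx + y*pmy) / r       # positive outward
    mu_T = (x*pmy - y*pmx) / r       # positive counterclockwise for right-handed axes
    return mu_R, mu_T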
The mean motion and velocity dispersion of 1P and 2P stars have then been computed by dividing the field of view into different concentric annuli containing approximately the same number of stars. For each annulus we determined the mean motion (μ_R/T) and dispersion (σ_R/T) of the Gaia DR3 radial and tangential components by minimizing the negative log-likelihood defined in <cit.>, including the covariance term as in <cit.>. Uncertainties on the determination of the mean motion and dispersion have been determined using a Markov Chain Monte Carlo algorithm <cit.>.
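The following sketch illustrates the per-annulus maximum-likelihood estimate of the mean motion and dispersion, assuming a Gaussian intrinsic velocity distribution broadened by the measurement errors; for simplicity it omits the covariance term included in the published likelihood.

# Sketch: mean and dispersion of mu_R (or mu_T) in one annulus, with measurement errors.
import numpy as np
from scipy.optimize import minimize

def fit_mean_dispersion(v, err):
    def neg_log_like(p):
        mean, sigma = p
        var = sigma**2 + err**2
        return 0.5*np.sum((v - mean)**2/var + np.log(2*np.pi*var))
    res = minimize(neg_log_like, x0=[np.mean(v), np.std(v)],
                   bounds=[(None, None), (1e-6, None)])
    return res.x  # best-fit mean motion and dispersion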
The dynamical profiles have been re-computed using different numbers of bins and bins with the same radial width to assess the effect of the arbitrary bin choice, with consistent results.
Concerning the HST proper motions, we used the same likelihood without the covariance term <cit.>, maximizing only with respect to the radial/tangential dispersion. This is because HST proper motions are relative to the cluster's bulk motion and do not provide the absolute mean motion of individual stars.
In the following sections, we present and discuss the results for selected individual clusters as well as the global dynamical profiles derived by combining data from all clusters. This approach enables us to explore the general dynamical properties of 1P and 2P stars. Our analysis focuses on the dispersion profiles, while we defer a detailed investigation of the rotation of 1P and 2P stars to a forthcoming paper that is currently in preparation.
§.§ Individual dynamical profiles
In this section, we present the study of the dynamical profiles of 1P and 2P stars in a few individual clusters with sufficiently large samples, namely NGC 0104, NGC 2808, NGC 5904, and NGC 6205. The dynamical profiles of the remaining clusters are shown in App. <ref> as online supplementary material.
Figure <ref> shows, from top to bottom, the mean radial and tangential profiles, the radial and tangential dispersion profiles, and the anisotropy profiles β, defined in terms of the ratio between the tangential and radial dispersions. The red and blue solid lines indicate the dynamical quantities, determined as described in Sec. <ref>, while the shaded rectangles indicate the relative uncertainties and the extension of each radial bin. The distances from the cluster center are in units of the half-light radius indicated in the middle panel <cit.>. Black diamonds indicate the values of the 1-dimensional dispersion from <cit.>, while the region with available HST data is indicated by the vertical green lines. The top and right axes (shown in gray) of each panel display distances in parsec and velocities in km/s, converted adopting the cluster distances determined in <cit.>.
A visual inspection of the individual profiles reveals that, as expected, the mean radial motions of cluster stars are consistent with 0, as we already subtracted the perspective contraction/expansion. Consistently with <cit.>, we find that all four clusters exhibit rotation in the plane of the sky. The mean tangential profiles of NGC 0104 and NGC 5904 exhibit clear negative and positive values, respectively indicating counterclockwise and clockwise rotation. Possible differences between the rotation profiles of the two populations are consistent within the inferred uncertainties. NGC 2808 and NGC 6205 show possible hints of stellar rotation, even though the signal is too weak to carry out a comparison.
Concerning the dispersion profiles, 1P stars exhibit a larger tangential dispersion than 2P stars, especially outside ∼1-2 half-light radii, while the two populations have similar radial dispersion profiles. As a result, 1P stars exhibit an isotropic motion in the central regions, possibly shifting toward tangentially anisotropic motion in the outer regions. On the other hand, the motion of 2P stars is consistent with being isotropic in the cluster center, becoming radially anisotropic beyond one half-light radius. A similar pattern is also observed in NGC 3201, consistently with <cit.>. The mean, dispersion and anisotropy profiles of the remaining clusters are displayed in App. <ref>. Overall, we find a good agreement between the HST and Gaia dynamical profiles in overlapping or neighbouring regions.
The observed dynamical profiles of 1P and 2P stars are qualitatively consistent with the predictions from N-body simulations where 2P stars form in a more centrally concentrated environment. We refer to <cit.> for a detailed description of the assumptions adopted in the simulations and the results. Additional discussion is carried out in Sec. <ref>.
§.§ Global dynamical profiles
Given the low number of 1P and 2P stars in many clusters, a detailed analysis of the individual dynamical profiles as a function of cluster radius is often unfeasible. Therefore, similarly to <cit.>, we opted to investigate the internal dynamics of all clusters simultaneously, combining the dispersion profiles of the individual clusters[We find it worth mentioning that <cit.> normalized positions and proper motions of cluster stars and then derived the dispersion profiles. However, due to the non-relative nature of Gaia proper motions, we cannot use the same procedure on our dataset. Hence, we first derive the dynamical profiles in each cluster, and then normalize and combine the profiles.]. In order to properly compare different clusters, we normalized the radial coordinates to the cluster half-light radius from <cit.> and the dispersion profiles to the central dispersion determined in <cit.> from HST data. Four clusters do not have a determination of the central velocity dispersion from HST data, namely NGC 1904, NGC 4147, NGC 6712 and NGC 7492. For these clusters, we adopted the central dispersion determined in <cit.>. We remind here that such an approach can be used only to compare dispersion profiles, but not mean motions, as the latter are not on a relative scale.
The normalized global dispersion and anisotropy profiles as a function of distance from the cluster center for the 28 analyzed GCs are shown in Fig. <ref>. Global profiles for 1P and 2P stars are shown in the first and second columns, while the comparison between the average trends is shown in the third column. The average profiles were determined using the Locally Weighted Regression (LOESS) as implemented in <cit.>. To estimate the uncertainties in the average trend, we repeated the LOESS fitting on 1000 bootstrapped samples, drawing the values of σ_R/T in each realization from a Gaussian distribution centered on the observed value with a dispersion equal to the observed uncertainties. The uncertainties were determined as the 16^th and 84^th percentiles of the 1000 LOESS fits. This is a more robust approach than simply bootstrapping the sample, as it accounts for the observed uncertainties as well. Finally, we estimated the significance of the observed differences between 1P and 2P stars accounting for uncertainties in both data and fitting. The details of the procedure are outlined in App. <ref>. In a nutshell, for each of the 1000 realizations of the LOESS fit, we computed the difference between 1P and 2P as a function of the radial coordinate. The 1, 2 and 3σ confidence regions are shown as gray shaded regions in the rightmost panels, where we display the difference between 1P and 2P as a function of normalized radius. Additionally, we quantified the statistical significance at each point as the fraction of simulations which returned a difference smaller than the observed one at the same radial location. The color of the line in the rightmost panels is indicative of the statistical significance, in units of σ, as indicated in the right colorbar.
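A possible implementation of this averaging scheme is sketched below, using the lowess smoother from statsmodels as a stand-in for the LOESS implementation cited above; the evaluation grid, the smoothing fraction and the number of realizations are illustrative choices.

# Sketch: LOESS-averaged profile with a bootstrap + error-perturbation uncertainty band.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_with_errors(r, y, yerr, grid, n_boot=1000, frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    curves = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(r), len(r))          # bootstrap with replacement
        y_pert = rng.normal(y[idx], yerr[idx])         # perturb by observed errors
        sm = lowess(y_pert, r[idx], frac=frac, return_sorted=True)
        curves.append(np.interp(grid, sm[:, 0], sm[:, 1]))
    lo, med, hi = np.percentile(curves, [16, 50, 84], axis=0)
    return med, lo, hi                                 # mean trend and 1-sigma band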
While there are no differences between the global radial dispersion profiles of 1P and 2P stars, 2P stars exhibit a lower tangential dispersion with respect to 1P stars, especially in the outer regions, e.g. beyond 1-2 half-light radii. As a consequence, the 1P displays an isotropic/tangentially anisotropic motion in the outer regions, while the 2P is radially anisotropic. Due to the large uncertainties of the mean trend, the observed differences are significant beyond the 3σ level only between 2 and 4 half-light radii. Nonetheless, the magnitude of the differences is consistent with the predictions from the theoretical simulations <cit.>. Additionally, the global profiles within the half-light radius are consistent with the results of <cit.>.
§ DISCUSSION
In the last decade, several theoretical works based on N-body simulations have investigated the dynamical evolution of MPs in GCs and their potential for distinguishing between different formation scenarios. The velocity dispersion and anisotropy profiles of MPs can provide fundamental insights into the origins of these stellar populations <cit.>. However, due to dynamical evolution and spatial mixing, possible differences in the internal dynamics of MPs disappear as the clusters reach relaxation. As shown in Fig.<ref> for individual clusters and Fig.<ref> for all clusters, the tangential dispersion and anisotropy profiles of 1P and 2P stars exhibit some differences, with 1P being isotropic across the entire cluster field and 2P stars shifting from isotropy to radial anisotropy beyond the half-light radius.
To investigate how these dynamical differences depend on the clusters' properties and evolutionary state, we divided our sample of 28 clusters into different groups based on various internal and external factors: dynamical ages <cit.>, the tidal filling factor defined as ℛ=R_h/R_J <cit.>, with R_J being the Jacobi radius derived in <cit.>[Clusters with ℛ < 0.05 are considered in the “isolated regime”, or tidally underfilling, while clusters with ℛ > 0.05 are in the “tidal regime”, or tidally filling.],
the spatial mixing of MPs <cit.>, the tidal interaction with the Milky Way <cit.>[Consistently with literature works, we divide clusters according to R_peri≶3.5 kpc], the clusters' origin (e.g. in situ or accreted) as identified in <cit.>, and the escape velocity <cit.>[See e.g. the analysis of <cit.> and <cit.> and the connection between the cluster's escape velocity and MPs. The value of the limiting escape velocity, i.e. 20 km/s, is determined by examining the distribution of escape velocities from <cit.>.]. The groups are shown in the table provided in App. <ref>. We find it worth mentioning that different groups overlap, as some cluster properties are not independent of one another <cit.>. Additionally, we also divided clusters based on their internal bulk rotation <cit.>, but, to keep the figure simpler, we do not show the results in Fig. <ref>. Nonetheless, the differences between the 1P and 2P anisotropy profiles for these groups and their statistical significance are included in App. <ref>.
The dynamical profiles for the different groups of clusters are displayed in Fig. <ref>. Each group is indicated in the top inset, together with the number of clusters in each group. The global profiles (solid lines) have been computed with the LOESS algorithm, while the uncertainties (shaded regions, 1σ) have been determined by bootstrapping with replacement 1000 times and accounting for the observational uncertainties (see Sec. <ref> for a detailed description). The significance of the differences between the anisotropy profiles of 1P and 2P for each group is presented in App. <ref>, together with the description of the procedure adopted to estimate it.
We find significant differences, i.e. above the 3σ level, in the average anisotropy profiles of many of the analyzed groups. Overall, we find that the 2P is more radially anisotropic beyond the half-light radius, while the 1P is isotropic and becomes slightly tangentially anisotropic beyond 2-3 half-light radii. There are important differences in the relative 1P-2P dynamics among some of the analyzed groups. For example, while dynamically young clusters and those with centrally concentrated 2P stars show clearly different dispersion and anisotropy profiles, these differences are less pronounced in intermediate-age and mixed-population clusters. These findings are qualitatively consistent with the simulations by <cit.> and <cit.>, which suggest that due to different initial spatial configurations (such as centrally concentrated 2P stars) or initial isotropy, 1P stars tend to develop isotropic motion, while 2P stars display more radially anisotropic motion, especially in the outer regions. Over time, these dynamical differences gradually diminish as spatial and dynamical mixing of MPs occurs.
The comparison between clusters with small and large peri-Galactic radii (shown in the first two columns of the bottom row of Fig. <ref>) reveals some interesting patterns. In clusters with small peri-Galactic radii, 1P stars exhibit tangential anisotropy beyond the half-light radius, while in clusters with larger peri-Galactic radii, 1P stars are either isotropic or slightly radially anisotropic. In both cases, 2P stars become increasingly radially anisotropic as they move outward. These findings suggest that the interaction with the Milky Way's tidal field plays a significant role in the dynamical evolution of MPs. Moreover, in clusters with small peri-Galactic radii, 1P stars appear to behave like tidally filling systems, where the outer regions develop slight tangential anisotropy, whereas 2P stars resemble tidally underfilling systems with radial anisotropy. In contrast, weaker interactions with the Milky Way's tidal field result in a more isotropic 1P population. This scenario aligns qualitatively with the theoretical models presented in <cit.>.
To further explore this hypothesis, we also examined the internal dynamics of MPs in clusters classified as tidally underfilling (ℛ < 0.05) and tidally filling (ℛ > 0.05). While tidally underfilling clusters exhibit relative differences consistent with the other groups, such as an isotropic 1P and a radially anisotropic 2P, MPs in tidally filling clusters display a different pattern. Specifically, 1P stars are tangentially anisotropic, while 2P stars exhibit isotropic motion across the entire field. Interpreting these results is challenging, as the tidal filling factor ℛ is closely linked to other properties, such as the escape velocity, R_peri, and cluster mass. Specifically, tidally underfilling clusters tend to be more massive, with larger escape velocities and smaller R_peri.
A qualitatively similar pattern is observed in clusters with low and high escape velocities, with the latter often exhibiting tangentially anisotropic 1P stars. This difference may be due to the preferential loss of 1P stars on radial orbits in clusters with higher escape velocities <cit.>. However, the relationship between the escape velocity and the fraction of MPs remains complex and is highly dependent on the specific formation scenario adopted for MPs.
1P and 2P stars exhibit comparable relative differences in clusters, regardless of whether they are in situ or accreted, and whether they are rotating or non-rotating (see, e.g. App. <ref>). In these cases, 2P stars remain radially anisotropic, while 1P stars display isotropic motion across the entire cluster field.
In summary, the results presented in Fig. <ref> indicate that clusters in a less dynamically evolved state show significant dynamical differences among MPs. Additionally, the interaction with the Galaxy appears to play a crucial role in shaping the evolution of different populations. These findings align with the conclusions of <cit.> regarding the internal dynamics of MPs in the innermost regions.
§ SUMMARY AND CONCLUSION
In this study, we present the first comprehensive investigation of the internal dynamics of multiple populations across a wide field of view, from the innermost arcminutes to the clusters' outskirts, in a large sample of 28 Galactic GCs. Using HST, ground-based and Gaia XP synthetic photometry, we identified first- and second-population stars among RGB stars from the innermost regions to the clusters' outskirts. To achieve this, we exploited the pseudo-CMDs dubbed Chromosome Maps as well as the C_UBI index, where 1P and 2P stars form well-separated groups. The internal dynamics of MPs was investigated using HST and Gaia DR3 proper motions, allowing us to determine the mean, dispersion, and anisotropy profiles as a function of distance from the cluster center. The analyzed cluster regions range from 0.2 to more than 10 half-light radii. The analysis presented in this work confirms and extends the results of previous studies based on HST and Gaia proper motions <cit.>.
Our analysis of individual clusters reveals distinct dynamical differences in the anisotropy profiles of 1P and 2P stars in several cases. On average, 2P stars are more radially anisotropic beyond the half-light radius, whereas 1P stars generally exhibit isotropic motion, with some showing tangential anisotropy beyond 2-3 half-light radii. These anisotropy differences between 1P and 2P stars are primarily driven by the lower tangential dispersion of the 2P, with no significant differences observed in the radial component. These results agree with the analysis of <cit.> for the innermost cluster regions.
Studying the global trends in the dispersion and anisotropy profiles, derived by combining the dynamical profiles of the various clusters, we observe significant differences between 1P and 2P stars, especially outside the half-light radius, where the differences are significant beyond the 3σ level.
In dynamically young clusters, 1P stars are isotropic in the inner regions but become slightly tangentially anisotropic toward the outskirts. In contrast, 2P stars are isotropic in the cluster centers and become radially anisotropic in the outer regions. These patterns are also evident in clusters where 2P stars are more centrally concentrated and in clusters with escape velocities exceeding 20 km/s. However, dynamically evolved clusters, with spatially mixed MPs and lower escape velocities, do not exhibit these dynamical differences. Rotating/non-rotating clusters and in situ/accreted clusters show similar relative differences between 1P and 2P stars.
We also explore the influence of the Milky Way's tidal field on the dynamical properties of MPs by analyzing clusters with different peri-Galactic radii. Our findings reveal a strong connection between the dynamical behavior of 1P stars and the strength of the Milky Way's tidal field. In clusters with orbits closer to the Galactic center, where the tidal field is stronger, 1P stars tend to exhibit tangential anisotropy beyond 1-2 half-light radii. Conversely, in clusters with weaker interactions with the Milky Way's tidal field, 1P stars display isotropic motion. This suggests that the Milky Way's tidal field plays a crucial role in the dynamical evolution of MPs. Further supporting this conclusion, our analysis shows that tidally underfilling and filling clusters exhibit distinct relative patterns in the dynamical profiles of 1P and 2P stars.
The observed differences in the internal dynamics between 1P and 2P stars qualitatively align with the predictions of N-body and theoretical simulations by <cit.>. These studies indicate that the distinct dynamical properties of 1P and 2P stars are indicative of 2P stars forming in a more centrally concentrated environment.
Overall, the analysis presented in this paper offers key insights into the formation scenarios of multiple stellar populations and their relationship with both internal factors (such as the escape velocity and dynamical age) and external influences (such as the interaction with the Milky Way's tidal field).
§ ACKNOWLEDGMENTS
SJ acknowledges support from the NRF of Korea (2022R1A2C3002992, 2022R1A6A1A03053472). EPL acknowledges support from the “Science
& Technology Champion Project” (202005AB160002)
and from the “Top Team Project” (202305AT350002),
all funded by the “Yunnan Revitalization Talent Support Program”. TZ acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie
Grant Agreement No. 101034319 and from the European Union – Next Generation EU. This work has received funding from “PRIN 2022 2022MMEB9W - Understanding the formation of globular clusters with their multiple stellar generations” (PI Anna F. Marino), and from INAF Research GTO-Grant Normal RSN2-1.05.12.05.10 - (ref. Anna F. Marino) of the “Bando INAF per il Finanziamento della Ricerca Fondamentale 2022”. EL acknowledges financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no. 770017
§ DATA AVAILABILITY
Relevant data underlying this work is available in the article. All other data will be shared upon reasonable request to the corresponding author.
§ SELECTION OF MULTIPLE POPULATIONS
In this Appendix we show the selection of MPs for all the 28 analyzed clusters. As discussed in Sec. <ref>, we used a combination of HST and ground-based photometry <cit.>, and Gaia XP synthetic photometry <cit.>. In Fig. <ref>- <ref> we display the analyzed field of view in the central panel, together with the photometric diagrams used to separate 1P and 2P, always shown with red and blue colors respectively. Specifically, ground-based photometry is shown in the leftmost panels, with the ChMs from <cit.> shown in the top one. HST and Gaia XP ChM are instead displayed in the right panels, respectively in the top and bottom one. The coloured circles in the central panels indicate the analyzed regions.
Concerning ground-based observations, we first used the selection in the ChM whenever available, and adopted the C_UBI selection for stars without ChM information. MPs outside the orange circles are selected by means of Gaia XP synthetic photometry. We remind here that Gaia XP synthetic photometry is only available for four clusters, namely NGC 0104, NGC 3201, NGC 6121 and NGC 6752 <cit.>.
§ INDIVIDUAL PROFILES OF ALL CLUSTERS
We present in this section the individual dynamical profiles of the analyzed clusters. The profiles are displayed in a similar fashion to Fig. <ref>. Mean trends are shown in Fig. <ref>-<ref>, while dispersion and anisotropy profiles are presented in Fig. <ref>-<ref>. Values computed from HST/Gaia data are displayed as crosses and circles respectively, while their uncertainties are indicated by the shaded rectangles. Mean motions and dispersion profiles have been converted to km/s adopting the cluster distances derived in <cit.>. Red and blue colors indicate 1P and 2P stars, while black diamonds represent the values of the 1D dispersion computed in <cit.>. Finally, the locations of the characteristic radii are indicated by the dash-dotted, solid and dashed lines, respectively; the outermost of these is indicated only if contained within the analyzed field of view.
§ STATISTICAL SIGNIFICANCE OF THE OBSERVED GLOBAL PROFILES.
To assess the statistical significance of the differences in the global dynamical profiles of 1P and 2P shown in Fig. <ref> and <ref>, we adopted the following procedure. We refer for simplicity to the case with all clusters. First, we created 1000 realizations of bootstrapped samples of y=σ_R, σ_T, β, where each value has been drawn from a Gaussian with dispersion equal to the observed uncertainty. For each realization we repeated the LOESS fit for 1P and 2P, and computed the difference between the two fits. Finally, we estimated the 1, 2, 3σ confidence intervals from the distribution of simulated differences at each R/R_h. The three confidence intervals are shown as gray shaded regions in the top panels of Fig. <ref>. In addition to this analysis, we also directly quantified the fluctuations introduced solely by the uncertainties. To do this, we repeated the same analysis on two identical distributions, testing the null hypothesis that 1P and 2P stars share the same dynamical profiles, and determined the fraction (f) of simulations with differences larger than the observed ones. This fraction indicates the probability that uncertainties alone can reproduce a difference between 1P and 2P as large as the observed one, and thus we computed the significance of the difference as p=1-f. The value of p is indicated by the color of the lines in the top panels of Fig. <ref>, as indicated by the colorbar.
We stress here that, instead of determining one single value of significance for each 1P/2P dynamical profile, we compute the statistical significance at each radial coordinate.
|
http://arxiv.org/abs/2409.03696v1 | 20240905165112 | Molecular clouds as hubs in spiral galaxies : gas inflow and evolutionary sequence | [
"J. W. Zhou",
"Sami Dib",
"Timothy A. Davis"
] | astro-ph.GA | [
"astro-ph.GA"
] |
J. W. Zhou
J. W. Zhou E-mail: [email protected]^1
Sami Dib ^2
Timothy A. Davis ^3
^1Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
^2Max Planck Institute für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
^3Cardiff Hub for Astrophysics Research & Technology, School of Physics & Astronomy, Cardiff University, Queens Buildings, Cardiff CF24 3AA, UK
2024
Molecular clouds as hubs in spiral galaxies : gas inflow and evolutionary sequence
Accepted XXX. Received YYY; in original form ZZZ
==================================================================================
§ ABSTRACT
We decomposed the molecular gas in the spiral galaxy NGC 628 (M74) into multi-scale hub-filament structures using the CO (2-1) line and the dendrogram algorithm. All leaf structures, as potential hubs, were classified into three categories, i.e. leaf-HFs-A, leaf-HFs-B and leaf-HFs-C. Leaf-HFs-A exhibit the best hub-filament morphology, and also have the highest density contrast, the largest mass and the lowest virial ratio. We employed the FILFINDER algorithm to identify and characterize filaments within 185 leaf-HFs-A structures, and fitted the velocity gradients around the intensity peaks. Measurements of velocity gradients provide evidence for gas inflow within these structures, which serves as kinematic evidence that these structures are hub-filament structures.
The numbers of the associated 21 μm and H_α structures and the peak intensities of the 7.7 μm, 21 μm and H_α emission decrease from leaf-HFs-A to leaf-HFs-C. The spatial separations between the intensity peaks of the CO and 21 μm structures of leaf-HFs-A are larger than those of leaf-HFs-C. This evidence indicates that leaf-HFs-A are more evolved than leaf-HFs-C.
There may be an evolutionary sequence from leaf-HFs-C to leaf-HFs-A. Currently, leaf-HFs-C lack a distinct gravitational collapse process that would result in a significant density contrast. The density contrast can effectively measure the extent of the gravitational collapse and the depth of the gravitational potential of the structure which, in turn, shapes the hub-filament morphology. Combined with the kinematic analysis presented in previous studies,
a picture emerges that molecular gas in spiral galaxies is organized into network structures through the gravitational coupling of multi-scale hub-filament structures. Molecular clouds, acting as knots within these networks, serve as hubs, which are local gravitational centers and the main sites of star formation.
– ISM: clouds
– ISM: kinematics and dynamics
– galaxies: ISM
– galaxies: structure
– galaxies: star formation
– techniques: image processing
§ INTRODUCTION
Analyzing the dynamical interaction between density enhancements in giant molecular clouds and gas motion in their surrounding environment provides insight into the formation of hierarchical structures in high-mass star- and star cluster-forming regions
<cit.>.
Detailed observations of high-mass star-forming regions with high resolution unveil the organized distribution of density enhancements within filamentary networks of gas, particularly evident in hub-filament systems. In these systems, converging flows channel material towards the central hub along the interconnected filaments
<cit.>.
In particular, <cit.> studied the physical properties and evolution of hub-filament systems across ∼ 140 protoclusters using spectral line data obtained from the ATOMS
(ALMA Three-millimeter Observations of Massive Star-forming regions) survey <cit.>.
They proposed that hub-filament structures exhibiting self-similarity and filamentary accretion appear to persist across a range of scales within high-mass star-forming regions, spanning from several thousand astronomical units to several parsecs. This paradigm of hierarchical, multi-scale hub-filament structures was generalized from clump-core scale to cloud-clump scale in <cit.> and from cloud-clump scale to galaxy-cloud scale in <cit.>.
Hierarchical collapse and hub-filament structures feeding the central regions are also depicted in previous works,
see <cit.> and references therein.
Kinematic analyses presented in <cit.> and <cit.> demonstrate the presence of multi-scale hub-filament structures within molecular clouds and spiral galaxies. The results notably show that intensity peaks, acting as hubs, are correlated with converging velocities, suggesting that surrounding gas flows are directed towards these dense regions. Filaments across various scales exhibit distinct velocity gradients, with a marked increase in these gradients at smaller scales.
Interestingly, the variations in velocity gradients measured at larger scales align with expectations from gravitational free-fall with higher central masses.
This correlation implies that inflows on large scales are driven by large-scale structures, potentially due to the gravitational coupling of smaller-scale structures.
Fig. 5 of <cit.> shows a vivid example of gravitational coupling, where multiple peaks are coupled together to form a gravitational potential well on larger scale, and each peak itself is also a local gravitational center. This aligns with the global hierarchical collapse (GHC) scenario proposed by <cit.>, which suggests that clouds are composed of multiple nested collapses occurring across a wide range of scales.
These observations agree with the hierarchical nature found in molecular clouds and spiral galaxies, and gas inflow from large to small scales.
Large-scale velocity gradients are consistently associated with numerous intensity peaks, reinforcing the idea that the clustering of smaller-scale structures can act as gravitational centers on larger scales.
Based on this kinematic evidence, the main goal of this work is to directly recover the multi-scale hub-filament structures in the spiral galaxy NGC 628 (M74, the Phantom Galaxy).
§ DATA
We selected the face-on spiral galaxy NGC 628 (M74) from the PHANGS-ALMA survey. We used the combined 12m+7m+TP PHANGS-ALMA CO (2-1) data cubes to investigate gas kinematics and dynamics and to identify hub-filament structures in the galaxy. The cubes have a spectral resolution of 2.5 km s^-1 and an angular resolution of ∼1.1”, corresponding
to a linear resolution of ∼50 pc at the distance of 9.8 Mpc <cit.>.
We also use the James Webb Space Telescope (JWST) 7.7 and 21 μm maps with an angular resolution ∼0.67” or a linear resolution ∼30 pc from the PHANGS-JWST survey and the H_α emission map with an angular resolution ∼0.92” or a linear resolution ∼45 pc from the PHANGS-MUSE survey.
Overviews of the PHANGS-ALMA, PHANGS-MUSE and PHANGS-JWST surveys' science goals, sample selection, observation strategy, and data products are described in <cit.>.
All the data are available on the PHANGS team website [<https://sites.google.com/view/phangs/home>].
The field-of-views (FOVs) of these observations are shown in Fig. <ref>.
NGC 628 was selected for the following reasons:
1. It is a face-on galaxy. The inclination angle of NGC 628 is only 8.9 degrees <cit.>. This facilitates the identification of the hub-filament structures embedded within the galaxy.
2. The high-resolution CO data and the galaxy's relatively small distance allow the hub-filament structures to be clearly revealed.
3. It is covered by multi-wavelength observations. As discussed below, high-resolution 21 μm and H_α emission are crucial to determine the evolutionary states of CO structures.
§ RESULTS
§.§ Dendrogram
We conducted a direct identification of hierarchical (sub-)structures based on the 2D intensity maps. As described in <cit.>, the dendrogram algorithm decomposes density or intensity data into hierarchical structures called leaves, branches, and trunks.
Using the astrodendro package [<https://dendrograms.readthedocs.io/en/stable/index.html>],
there are three major input parameters for the dendrogram algorithm: min_value, the minimum value in the dataset to be considered; min_delta, the minimum height required for a leaf to be considered as an independent entity; and min_npix, the minimum area (in pixels) of a structure.
For the CO (2-1) data cube,
there are two types of Moment 0 maps (strictly masked and broadly masked) in the data product of the PHANGS-ALMA survey [Details of the masking strategy and completeness statistics are presented in the PHANGS pipeline paper <cit.>.]. The strictly masked maps only include emissions that are identified as signals with high confidence in the data cube, which might filter out the relatively faint structures. The broadly masked maps offer superior completeness and cover larger areas compared to the strictly masked maps. However, due to the inclusion of more regions with faint emissions or areas close to bright emissions, they tend to be noisier and may contain false positives.
In order to ensure the reliability of the identified structures and because we are only interested in local dense structures, we select the strictly masked Moment 0 map to identify structures.
Since all the retained structures on the strictly masked Moment 0 map are reliable, we only require the smallest area of the identified structure be larger than 1 beam area. We do not set additional parameters in the algorithm to minimize the dependence of the identification on parameter settings. Finally, we obtained 773 leaf structures.
For the 21 μm and H_α emission, apart from min_npix= 1 beam area, we also adopt
min_value= 3*σ_ rms and min_delta = 3*σ_ rms, where σ_ rms is the rms of the background intensity. The total numbers of 21 μm and H_α leaf structures are 1491 and 1965, respectively.
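As an illustration, the dendrogram decomposition described above could be set up with the astrodendro package as in the minimal Python sketch below; the file names, the beam area in pixels and the value of σ_rms are placeholder assumptions, while the parameter choices follow the text:

import numpy as np
from astropy.io import fits
from astrodendro import Dendrogram

# Placeholder inputs: strictly masked CO (2-1) Moment 0 map and the beam area in pixels
mom0 = fits.getdata("ngc0628_co21_strict_mom0.fits")   # hypothetical file name
beam_area_pix = 25                                      # assumed pixels per beam

# CO structures: only a minimum area of one beam is required (no min_value/min_delta cuts)
d_co = Dendrogram.compute(mom0, min_npix=beam_area_pix, verbose=False)
print(f"Number of CO leaf structures: {len(d_co.leaves)}")

# 21 um or H-alpha structures: additionally require 3*sigma_rms thresholds
img = fits.getdata("ngc0628_f2100w_21um.fits")          # hypothetical file name
sigma_rms = 0.05                                        # assumed background rms (MJy/sr)
d_ir = Dendrogram.compute(img, min_value=3 * sigma_rms,
                          min_delta=3 * sigma_rms, min_npix=beam_area_pix)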
We first retained as many structures as possible using these loose criteria, and then eliminated the diffuse structures, as described in Sec. <ref>.
In Fig. <ref>, Fig. <ref> and Fig. <ref>,
the CO, 21 μm and H_α structures identified by the dendrogram algorithm exhibit a strong correspondence with the background intensity maps.
The algorithm characterizes the morphology of each structure by approximating it as an ellipse. Within the dendrogram, the root mean square (rms) sizes (second moments) of the intensity distribution along the two spatial dimensions define the long and short axes of the ellipse, denoted as a and b. As described in <cit.>, the ellipse defined by a and b is intrinsically small, so a multiplication factor of two is applied to appropriately enlarge it.
The effective physical radius of the ellipse is then R_ eff =√(2a × 2b)*D, where D is the distance of the galaxy.
For a structure with an area A and a total integrated intensity I_ CO, the mass of the structure can be calculated by
M = α^2-1_ CO× I_ CO× A,
where α^2-1_ CO≈ 6.7 M_⊙ pc^-2 ( K km s^-1)^-1 <cit.>.
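As a concrete illustration, the effective radius and mass of a leaf structure follow from these definitions as in the sketch below; the pixel scale is a placeholder, while the distance and the α^2-1_ CO value are those adopted above:

import numpy as np

def leaf_radius_mass(a_rms_pix, b_rms_pix, i_co_K_kms, area_pc2,
                     pix_arcsec=0.2, dist_pc=9.8e6, alpha_co21=6.7):
    # a_rms_pix, b_rms_pix : rms sizes a and b from the dendrogram (pixels)
    # i_co_K_kms           : integrated intensity I_CO of the structure (K km/s)
    # area_pc2             : projected area A of the structure (pc^2)
    arcsec_to_pc = dist_pc * np.pi / (180.0 * 3600.0)   # pc per arcsec at the adopted distance
    a_pc = 2.0 * a_rms_pix * pix_arcsec * arcsec_to_pc  # enlarge the rms ellipse by a factor of two
    b_pc = 2.0 * b_rms_pix * pix_arcsec * arcsec_to_pc
    r_eff = np.sqrt(a_pc * b_pc)                        # R_eff = sqrt(2a * 2b) in physical units
    mass = alpha_co21 * i_co_K_kms * area_pc2           # M = alpha_CO(2-1) * I_CO * A
    return r_eff, mass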
§.§ Velocity components
For the identified molecular structures, we extracted the average spectrum of each structure to investigate its velocity components and gas kinematics. Large-scale velocity gradients from the galaxy's rotation contribute to the non-thermal velocity dispersion. To address this, before extracting the average spectra, we removed the bulk motion due to the rotation of the galaxy by creating gas dynamical models using the Kinematic Molecular Simulation (KinMS) package <cit.>, as shown in Fig. <ref>.
Then, following the procedure described in <cit.>, we fitted the averaged spectra of 773 leaf structures individually using the fully automated Gaussian decomposer <cit.> algorithm.
Almost all structures exhibit a single-peak profile. Only a few structures have a clear double-peak profile. Fig. <ref> displays the average spectra of the central regions of the structures presented in Fig. <ref>, Fig. <ref> and Fig. <ref>, marked by cyan ellipses.
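A simplified, single-component version of this spectral decomposition (the full analysis relies on the automated decomposer cited above) is sketched below, with a synthetic spectrum as a placeholder:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Placeholder: rotation-subtracted velocity axis (km/s) and averaged leaf spectrum (K)
v_axis = np.arange(-50.0, 50.0, 2.5)
spectrum = gaussian(v_axis, 1.0, 3.0, 6.0) + np.random.normal(0.0, 0.05, v_axis.size)

# Fit a single Gaussian; almost all leaves show only one velocity component
p0 = [spectrum.max(), v_axis[np.argmax(spectrum)], 5.0]
popt, _ = curve_fit(gaussian, v_axis, spectrum, p0=p0)
print(f"centroid = {popt[1]:.1f} km/s, dispersion = {popt[2]:.1f} km/s")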
§.§ Classification
From the line profile, we can fix the velocity range of each structure. Then, the Moment 0 map of each structure was reproduced in the corresponding velocity range to eliminate the overlap of potential incoherent velocity components.
All identified intensity peaks (leaf structures) of CO (2-1) emission are thought to be potential hubs.
Around these intensity peaks, we extended the spatial ranges to investigate the filamentary structures connected with them. After trying different enlargement factors, we found that extending to 2.5 times the hub size (the effective radius of the leaf structure) recovers the entire hub-filament structure while avoiding the inclusion of many neighboring structures, as shown in Fig. <ref>, Fig. <ref> and Fig. <ref>.
All leaf structures were classified by eye into three categories only based on their morphology, i.e. leaf-HFs-A, leaf-HFs-B and leaf-HFs-C. The numbers of structures in the three categories are 234, 181 and 358. Some examples in the three categories are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>.
From these maps, we can observe that leaf-HFs-C do not exhibit clear central hubs, meaning that the density contrast between the hub and the surrounding diffuse gas is not pronounced. In contrast, leaf-HFs-A and leaf-HFs-B feature distinct central hubs or high-density central regions, characteristic of hub-filament structures. Compared to leaf-HFs-B, leaf-HFs-A demonstrate a more defined hub-filament morphology, with filamentary diffuse gas surrounding their central hubs.
The hubs of leaf-HFs-A are also more prominent.
While the boundary between leaf-HFs-A and leaf-HFs-B might be less distinct, the difference between leaf-HFs-A and leaf-HFs-C is significant.
Therefore, in the subsequent discussion, we focus on the comparison between leaf-HFs-A and leaf-HFs-C.
The initial classification was based only on morphology, because other differences between the structures were unknown. In the subsequent analysis,
we can see that the physical properties of the structures in the three categories are also significantly different. Therefore, the morphological differences are the result of certain physical processes shaping them and are not coincidental.
§.§ Filamentary structures
§.§.§ Identification
As done in <cit.>, we also use the FILFINDER algorithm to characterize the filamentary structures around the hub.
In <cit.>, we only focused on large-scale filaments along the spiral arms. Clear fluctuations in velocity and density were observed along these filaments. Each individual intensity peak reveals a local hub. In this work, we first identified the local intensity peaks (leaf structures), and then searched for hub-filament structures centered around these intensity peaks as potential hubs.
Now, we need to continue identifying small-scale filamentary structures within these local hub-filament structures. Due to the limited observational resolution,
even the largest leaf-HFs-A structures lack enough pixels for the algorithm to trace the filaments. Therefore, we have to regrid the images and increase the number of pixels in the images from N_x*N_y to 2N_x*2N_y [We do this by using the zoom function from the scipy.ndimage module to regrid the image by setting an interpolation order of 3.]. To avoid introducing false structures, we only doubled the number of pixels. As shown in Fig. <ref>, the morphology of the structures remains unchanged. The identified filamentary structures also align well with the background. However, due to the resolution limitations, the filamentary structures do not appear very extended.
§.§.§ Velocity gradient
A kinematic feature of hub-filament structures is that the gas flow along the filaments converges towards the central hub, resulting in a measurable velocity gradient along each filament. Since we are now studying each local hub-filament structure, we follow the same analysis presented in <cit.> to fit the velocity gradients around the intensity peaks. In Fig. <ref>(a), the velocity gradients fitted in NGC 628 are quite comparable to those in NGC 4321 and NGC 5236 presented in <cit.>.
The measured velocity gradients are predominantly consistent with free-fall onto central masses in the range of approximately 10^5 to 10^7 M_⊙. This range aligns with the mass distribution of leaf structures shown in Fig. <ref>(b). This consistency suggests that local dense structures act as gravitational centers, accreting the surrounding diffuse gas and thereby generating the observed velocity gradients.
Measurements of velocity gradients provide evidence for gas inflow within these structures, which can also serve as kinematic evidence that these structures can be regarded as hub-filament systems.
We note that only leaf-HFs-A structures were considered in the identification of filaments and the analysis of the velocity gradients. Furthermore, we successfully identified filaments in only 185 leaf-HFs-A structures, for which the filaments are long enough to show clear velocity gradients.
§.§ Physical properties
§.§.§ Density contrast
As shown in Fig.<ref>, a clear central hub implies a noticeable density contrast between the hub and the surrounding diffuse gas. We define the density contrast C as the ratio of the average column density in the hub region, N_ hub, to the average column density within an elliptical ring around the hub with the width equal to the hub size (the effective radius of the leaf structure), N_ a,
C=N_ hub/N_ a.
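A schematic implementation of this measurement is given below; for simplicity the hub is approximated by a circular aperture with the leaf's effective radius, whereas elliptical apertures are used in practice:

import numpy as np

def density_contrast(coldens_map, x0, y0, r_hub_pix):
    # Mean column density inside the hub divided by the mean in a surrounding
    # annulus whose width equals the hub radius (circular approximation).
    yy, xx = np.indices(coldens_map.shape)
    r = np.hypot(xx - x0, yy - y0)
    n_hub = np.nanmean(coldens_map[r <= r_hub_pix])
    n_ann = np.nanmean(coldens_map[(r > r_hub_pix) & (r <= 2.0 * r_hub_pix)])
    return n_hub / n_ann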
As expected, leaf-HFs-A with the best hub-filament morphology have the highest density contrast in Fig. <ref>(a). Moreover, leaf-HFs-A also possess the largest masses and the lowest virial ratios.
§.§.§ Association with 21 μm and H_α emission
Diffuse 21 μm and H_α emission should be unrelated to recent star formation activity. Therefore, for the 21 μm and H_α structures identified in Sec. <ref>, we need to filter out the faint structures.
The 21 μm and H_α emission that originates from the clusters born in the clouds should show intensity peaks. Similar to the CO structures, we calculated the intensity contrast for all identified 21 μm and H_α structures.
Before the calculation, we first shift the center of each structure to the pixel with the strongest emission.
Structures with peak intensities greater than the median peak intensity of all structures are retained.
Given that a large number of faint structures were identified using loose criteria in Sec.<ref>, this standard is not stringent.
For structures that do not meet this criterion, if their intensity contrast is greater than 1.5, they will also be retained.
The final numbers of 21 μm and H_α leaf structures are 841 and 1142, respectively.
As can be seen from Fig. <ref> and Fig. <ref>, the retained (bright) structures encompass all the significant emissions.
As shown in Fig. <ref>, the JWST 21 μm map has the smallest field of view among the observations.
Thus, we discarded the CO (2-1) and H_α structures that lie beyond the FOV of the 21 μm observation.
Generally, the structure seen in CO is irregular and extended, so the densest part may not be at the effective center of the structure output by the dendrogram algorithm. However, the position with the highest column density truly represents the site of star formation. Therefore,
we calculate the spatial separations between the position of maximum CO column density and the positions of maximum 21 μm or H_α intensity.
If the separations are less than the effective radius of a CO structure, we consider the corresponding 21 μm or H_α structures to be associated with the CO structure.
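This association criterion can be summarized by the short sketch below, with peak positions given in pixel coordinates as placeholders:

import numpy as np

def is_associated(co_peak_xy, other_peak_xy, r_eff_pix):
    # A 21 um or H-alpha structure is associated with a CO structure if the separation
    # between the CO column-density peak and the 21 um / H-alpha intensity peak is
    # smaller than the effective radius of the CO structure.
    sep = np.hypot(co_peak_xy[0] - other_peak_xy[0], co_peak_xy[1] - other_peak_xy[1])
    return sep < r_eff_pix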
We note that CO could be optically thick, which may lead to an underestimation of the column density and, consequently, affect the estimation of the density center.
Finally, there are 313 21 μm structures (37%, 313/841) and 324 H_α structures (28%, 324/1142) associated with CO structures, where 185 CO structures have both strong 21 μm and H_α emissions.
Limited to the FOV of 21 μm observations, the total numbers of leaf-HFs-A, leaf-HFs-B and leaf-HFs-C are 163, 119, and 251, respectively.
The proportions of leaf-HFs-A, leaf-HFs-B and leaf-HFs-C associated with both 21 μm and H_α structures are 58% (95/163), 37% (44/119) and 18% (46/251), indicating that leaf-HFs-A are more evolved than leaf-HFs-C.
We also calculated the peak intensities of 7.7 μm, 21 μm and H_α emissions for each CO structure. As shown in Fig. <ref> (f)-(h), all emissions gradually increase from leaf-HFs-C to leaf-HFs-A.
§.§.§ Scale and density
In Fig. <ref>(a) and (e),
leaf-HFs-C have significantly smaller density contrasts and scales. However, leaf-HFs-C and leaf-HFs-B show comparable column density distributions in Fig. <ref>(d), although leaf-HFs-B have larger density contrasts. Therefore,
leaf-HFs-C are not necessarily low-density structures; it is just that the density distribution in leaf-HFs-C structures is relatively uniform.
From these results,
one could argue that the density contrast only manifests clearly when the structural scale is large enough. In other words, the hub sizes of leaf-HFs-C could be too small to be resolved. Moreover, the beam-smearing effect may create an illusion of a uniform density distribution.
In order to check these possibilities, in Fig. <ref>, we fitted the correlation between density contrast and scale. Individually, for leaf-HFs-A and leaf-HFs-C, there is almost no correlation between density contrast and scale. When the two categories are combined, a correlation between density contrast and scale does appear. However, since leaf-HFs-A and leaf-HFs-C have significant scale overlap, if the hubs of leaf-HFs-A can be discerned at these scales, those of leaf-HFs-C should be discernible as well. Therefore, relative to leaf-HFs-A, the absence of hubs in leaf-HFs-C is real and not merely a resolution issue.
Two vertical dashed lines in Fig. <ref> mark the approximate range where leaf-HFs-A and leaf-HFs-C have significant scale overlap.
We confined the comparison of leaf-HFs-A, leaf-HFs-B and leaf-HFs-C to this scale range and obtained results consistent with previous findings, as shown in Fig. <ref>. Therefore, the difference in scale is not a significant factor affecting the physical properties of different types of structures.
§.§ Star formation
In this section, we examined the physical properties of embedded star clusters within molecular clouds.
Following the prescriptions described in <cit.>, the WISE 22 μm data can be used to calculate the local
star formation rate (SFR) surface density via
Σ_ SFR/ M_⊙ yr^-1 kpc^-2 = 3.8×10^-3 (I_ 22 μ m/ MJy sr^-1) cos i,
where i is the inclination angle of the galaxy (i = 0 degree is face-on).
In this work, we used JWST 21 μm data to estimate the SFR surface density.
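A direct transcription of the relation above, applied to the JWST 21 μm intensity under the assumption (made in the text) that the 22 μm calibration also holds at 21 μm, is sketched below:

import numpy as np

def sfr_surface_density(i_21um_MJy_sr, incl_deg=8.9):
    # Local SFR surface density (Msun/yr/kpc^2) from the 21 um intensity (MJy/sr),
    # corrected for the inclination of NGC 628.
    return 3.8e-3 * i_21um_MJy_sr * np.cos(np.radians(incl_deg))

# Example with a placeholder hub intensity of 10 MJy/sr: ~0.038 Msun/yr/kpc^2
print(sfr_surface_density(10.0))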
<cit.> and <cit.> have developed a statistically robust method to convert the observed spatial separations between cold gas and SFR tracers into their underlying timescales. This approach was employed by <cit.> to study NGC 628 using CO, JWST 21 μm, and H_α emission maps, which respectively trace molecular clouds, embedded star formation, and exposed star formation. This analysis yielded systematic constraints on the duration of the embedded phase of star formation in NGC 628. They defined the duration of the embedded phase of star formation as the time during which CO
and 21 μm emissions are found to be overlapping, t_fb, 21 μ m. In contrast, the “heavily obscured phase” refers to the period when both CO and 21 μm emissions are present without associated H_α emission, t_ obsc. Finally, they obtained
t_fb, 21 μ m≈ 5.1^+2.7_-1.4 Myr and t_ obsc≈ 2.3^+2.7_-1.4 Myr. In this work, we directly adopt the typical values t_fb, 21 μ m= 5.1 Myr and t_ obsc= 2.3 Myr. The estimate is therefore quite rough, given the significant uncertainty in these duration times.
For the 185 CO structures associated with both 21 μm and H_α emission, the duration time should be ∼2.3-5.1 Myr. The spatial separations d between the intensity peaks of the CO and 21 μm structures calculated in Sec. <ref> can be used to assign ages to the embedded star clusters in each molecular cloud.
The typical value of d is ∼ 0.64”, which is significantly larger than the typical positional uncertainty of each dataset.
Assuming t_0 = 2.3 Myr, the age of the embedded star cluster is
t = t_0 + t_fb, 21 μ m-t_ obsc/d_ max-d_ min× d,
where d_ max and d_ min are the maximum and minimum spatial separations. By combining the derived age with the local SFR of each molecular cloud, we estimated the total mass of the corresponding embedded star clusters. Fig. <ref>(a) shows the mass distribution of the embedded star clusters. The cluster masses of leaf-HFs-A are much larger than those of leaf-HFs-B and leaf-HFs-C.
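The age assignment and the resulting cluster mass estimate can be summarized as in the sketch below; the separations and the local SFR are placeholders, and assuming the local SFR stayed constant over the cluster age is a simplification:

T_OBSC = 2.3e6   # yr, heavily obscured phase (t0)
T_FB   = 5.1e6   # yr, embedded (feedback) phase

def cluster_age(d, d_min, d_max, t0=T_OBSC):
    # Linear mapping of the CO-21um peak separation d onto an age, following the equation above
    return t0 + (T_FB - T_OBSC) / (d_max - d_min) * d

def cluster_mass(sfr_local, age_yr):
    # Total stellar mass formed, assuming a constant local SFR over the cluster age
    return sfr_local * age_yr

age = cluster_age(d=30.0, d_min=5.0, d_max=60.0)   # separations in pc (placeholders)
print(cluster_mass(sfr_local=2e-3, age_yr=age))    # Msun, for a local SFR of 2e-3 Msun/yr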
The results here are quite consistent with the findings in <cit.>.
For the JWST 21 μm compact source population identified in <cit.>, using spectral energy distribution (SED) fitting, they found that the 21 μm
sources have SEDs consistent with stellar population masses of 10^2 < M_*/M_⊙ < 10^4.5. They also found a range of mass-weighted
ages spanning 2–25 Myr, though most objects are < 8 Myr.
In Fig. <ref>(b), the spatial separations between the intensity peaks of CO and 21 μm structures of leaf-HFs-A are also larger than leaf-HFs-C.
Since the spatial separation can measure the evolutionary timescale of a molecular cloud, this is further evidence that leaf-HFs-A are more evolved than leaf-HFs-C.
However, the sample size here is very limited: the numbers of leaf-HFs-A, leaf-HFs-B and leaf-HFs-C are 95, 44 and 46, respectively. A larger sample is necessary to quantitatively assess the correspondence between the separations and the evolutionary stages of molecular clouds.
§ DISCUSSION
§.§ Formation of hub-filament structures
The hub-filament morphology could be shaped by either gravitational collapse <cit.> or strong shocks due to turbulence <cit.>.
In the second case, there is no anticipated consistency between the velocity gradients and the masses of the hubs, while in the first case, a relationship between the velocity gradients and the hub masses is naturally expected. Thus, the results presented in Sec.<ref> clearly support the scenario in which the hub-filament structures form through gravitational contraction of gas structures on galaxy-cloud scales, which is also revealed in <cit.>. Similar findings on cloud-clump scales and clump-core scales are presented in <cit.> and <cit.>, also see <cit.> and references therein.
The kinematic characteristics of gas structures on galaxy-cloud scale are very similar to those on cloud-clump and clump-core scales.
Therefore, a picture emerges where dense cores act like hubs within clumps, clumps act like hubs within molecular clouds, and clouds act like hubs in spiral galaxies.
The interstellar medium from galaxy to dense core scales presents multi-scale/hierarchical hub-filament structures. Gas structures at different scales in the galaxy may be organized into hierarchical systems through gravitational coupling.
The hierarchical hub-filament structures are also present in the galaxy cluster (hub)–cosmic web (filament) picture, reflecting the self-similarity of the structural organization in the universe. The fundamental reason may be that structures at different scales in the universe primarily evolve under the influence of gravity.
Hubs are essentially structures with higher density compared to the surrounding more diffuse gas. As local gravitational centers, these hubs seem to accrete the surrounding diffuse gas, forming hub-filament structures. The filamentary structures seem to be associated with gas flows converging towards the hubs. A hub-filament structure essentially consists of a gravitational center and the gas flow converging towards it. This could be considered as a fundamental structural type in the interstellar medium (ISM). In short, as long as there are local dense structures within the relatively diffuse ISM, hub-filament structures will inevitably form.
Finally, we note that
although the measured velocity gradients support the gravitational collapse of gas structures on galaxy-cloud scales, the collapse is much slower than pure free-fall gravitational collapse. In free-fall, the velocity gradient ∇ v and the scale R satisfy ∇ v ∝ R^-1.5. However, Fig.<ref>(a) only gives ∇ v ∝ R^-0.9. For NGC 4321 and NGC 5236, <cit.> obtained a similar slope, i.e.
∇ v ∝ R^-0.8. As discussed in <cit.>, the deviation from
free-fall may come from measurement biases, the coupling with the galactic potential, the tidal forces from the galaxy or neighboring structures, or turbulent and magnetic field support <cit.>.
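For reference, the quoted slope can be recovered with a simple linear fit in log-log space, as in the sketch below with placeholder measurements:

import numpy as np

# Placeholder arrays: fitted velocity gradients (km/s/pc) and the corresponding scales (pc)
grad = np.array([0.30, 0.12, 0.08, 0.05, 0.02])
scale = np.array([20.0, 60.0, 100.0, 200.0, 500.0])

# Fit log(grad) = k*log(R) + c; pure free-fall collapse would give k = -1.5
k, c = np.polyfit(np.log10(scale), np.log10(grad), 1)
print(f"measured slope k = {k:.2f} (free-fall: -1.5)")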
§.§ Evolutionary sequence of molecular clouds
The results of this work seem to suggest an evolutionary sequence among molecular structures. First, we focus solely on leaf-HFs-A. We compared two types of leaf-HFs-A, i.e. without and with 21 μm and H_α emissions. As shown in Fig. <ref>, the structures with 21 μm and H_α emissions present larger density contrasts, lower virial ratios, higher column densities and larger scales (larger masses). In Fig. <ref> and Fig. <ref>,
from leaf-HFs-C to leaf-HFs-A, we can see the same trend for each physical quantity.
Leaf-HFs-A without and with 21 μm and H_α emissions represent structures from two different evolutionary stages.
Leaf-HFs-C and leaf-HFs-A may be related in the same way.
A typical characteristic of hub-filament structures is that the hub region has a significantly higher density compared to the surrounding filamentary structures, i.e. a high density contrast. The hub-filament morphology is shaped by gravitational collapse. Hub-filament structures only become apparent when gravitational collapse has progressed to a certain extent. In this work, the identified dense CO structures are potential hubs, which act as local gravitational centers. Leaf-HFs-A have significantly greater mass than leaf-HFs-C, so they serve as stronger gravitational centers. Leaf-HFs-A also present a much better hub-filament morphology than leaf-HFs-C, which reflects the extent of gravitational collapse. Leaf-HFs-C have not yet exhibited a pronounced gravitational collapse process that would lead to a substantial density contrast. From leaf-HFs-C to leaf-HFs-A, gravitational collapse will increase the density and density contrast of the structure. Thus, the density contrast C effectively gauges the degree of gravitational collapse and the depth of the gravitational potential well within a structure, which decisively shape the hub-filament morphology.
Fig. <ref> shows a clear correlation between the density contrast and the virial ratio. As expected, the density contrast decreases as the virial ratio increases. However, the classical virial analysis, which only considers the gravitational potential energy and the internal kinetic energy, may not completely reflect the true physical state of the identified structures. A more comprehensive virial analysis should include additional physical mechanisms, such as external pressure <cit.>, tidal forces <cit.> and magnetic fields <cit.>. Local dense structures are embedded within larger gas environments. Their interactions with the surrounding environment may significantly impact their physical state. With a more accurate virial ratio, the correlation in Fig. <ref> should become stronger.
As the masses and scales of the structures also increase from leaf-HFs-C to leaf-HFs-A, the results of this work suggest that the structures inevitably accrete matter from their surrounding environment during the evolution.
It is consistent with the analysis in Sec. <ref> and the results presented in <cit.> for NGC 5236 (M83) and NGC 4321 (M100). In these spiral galaxies, there are clear velocity gradients around intensity peaks, and the variations in velocity gradient across different scales suggest a gradual and consistent increase in velocity gradient from large to small scales, indicative of gravitational collapse and gas inflow at different scales. Local dense structures act as local gravitational centers, naturally accreting matter from the surrounding diffuse gas environment and thereby accumulating mass.
§ SUMMARY
We decomposed the spiral galaxy NGC 628 into multi-scale hub-filament structures using the CO (2-1) line map. The main conclusions are as follows:
1. The intensity peaks as potential hubs were identified based on the integrated intensity (Moment 0) map of CO (2-1) emission by the dendrogram algorithm. Around all intensity peaks, we extracted the average spectra for all structures to decompose their velocity components and fix their velocity ranges. The final identification of hub-filament structures is based on the cleaned Moment 0 map of each structure made by restricting to the fixed velocity range to exclude the potential overlap of uncorrelated velocity components. In practice, some steps might not be necessary, as almost all structures show only one velocity component.
2. All leaf structures as potential hubs were classified into three categories, i.e. leaf-HFs-A, leaf-HFs-B and leaf-HFs-C. For leaf-HFs-C, the density contrast between the hub and the surrounding diffuse gas is not pronounced. Both leaf-HFs-A and leaf-HFs-B have clear central hubs. But leaf-HFs-A exhibit the best hub-filament morphology, which also have the highest density contrast, the largest mass and the lowest virial ratio.
3. We employed the FILFINDER algorithm to identify and characterize filaments within 185 leaf-HFs-A structures using integrated intensity maps. We also fitted the velocity gradients around intensity peaks, a process performed after removing the global large-scale velocity gradients attributed to the galaxy's rotation. Measurements of velocity gradients provide evidence for gas inflow within these structures, which can also serve as kinematic evidence that these structures can be regarded as hub-filament structures.
4. Leaf-HFs-C are not necessarily low-density structures. It is just that their density distribution is relatively uniform. There may be an evolutionary sequence from leaf-HFs-C to leaf-HFs-A. Currently, leaf-HFs-C lack a distinct gravitational collapse process that would result in significant density contrast.
The numbers of associated 21 μm and H_α structures and the peak intensities of the 7.7 μm, 21 μm and H_α emission decrease from leaf-HFs-A to leaf-HFs-C. The spatial separations between the intensity peaks of the CO and 21 μm structures are larger for leaf-HFs-A than for leaf-HFs-C. These lines of evidence indicate that leaf-HFs-A are more evolved than leaf-HFs-C.
5. There is a clear correlation between the density contrast and the virial ratio, and the density contrast decreases as the virial ratio increases.
The density contrast C effectively measures the extent of gravitational collapse and the depth of the gravitational potential well of the structure that shape the hub-filament morphology. In terms of reflecting the development and evolutionary stage of the structure, density contrast is more crucial than density itself.
6. Combining the local star formation rate (SFR) derived from the JWST 21 μm emission with the timescale revealed by the spatial separation between CO and 21 μm emissions yields mass estimates of embedded star clusters in molecular clouds comparable to those obtained from spectral energy distribution (SED) fitting. As expected, the cluster mass of leaf-HFs-A is much larger than those of leaf-HFs-B and leaf-HFs-C.
7. Combined with the kinematic evidence presented in <cit.>, a picture emerges where
molecular gas in spiral galaxies is organized into a network of structures through gravitational coupling of multi-scale hub-filament structures, consistent with the global hierarchical collapse scenario <cit.>.
Molecular clouds with sizes of hundreds of pc in NGC 628 are knots in networks of hub-filament systems, and are the local gravitational centers and the main star-forming sites.
§ ACKNOWLEDGEMENTS
We would like to thank the referee for the detailed comments and suggestions that significantly improve and clarify this work.
It is a pleasure to thank the PHANGS team, the data cubes and other data products shared by the team make this work can be carried out easily. This paper makes use of the following ALMA data:
ADS/JAO.ALMA#2012.1.00650.S and ADS/JAO.ALMA#2017.1.00886.L.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSTC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Based on observations taken as part of the PHANGS-MUSE large program (Emsellem et al. 2021). Based on data products created from observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme(s) 1100.B-0651, 095.C-0473, and 094.C-0623 (PHANGS-MUSE; PI Schinnerer), as well as 094.B-0321 (MAGNUM; PI Marconi), 099.B-0242, 0100.B-0116, 098.B-0551 (MAD; PI Carollo) and 097.B-0640 (TIMER; PI Gadotti). This research has made use of the services of the ESO Science Archive Facility.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These
observations are associated with program 2107.
§ DATA AVAILABILITY
All the data used in this work are available on the PHANGS team website.
[<https://sites.google.com/view/phangs/home>].
§ SUPPLEMENTARY MAPS
http://arxiv.org/abs/2409.02666v1 | 2024-09-04 | astro-ph.HE

SN 2021foa: Deriving a continuity between SN IIn and SN Ibn

Anjasha Gangopadhyay, Naveen Dukiya, Takashi J Moriya, Masaomi Tanaka, Keiichi Maeda, D. Andrew Howell, Mridweeka Singh, Avinash Singh, Jesper Sollerman, Koji S Kawabata, Sean J Brennan, Craig Pellegrino, Raya Dastidar, Kuntal Misra, Tatsuya Nakaoka, Miho Kawabata, Steve Schulze, Poonam Chandra, Kenta Taguchi, Devendra K Sahu, Curtis McCully, K. Azalee Bostroem, Estefania Padilla Gonzalez, Megan Newsome, Daichi Hiramatsu, Yuki Takei, Masayuki Yamanaka
§ ABSTRACT
We present the long-term photometric and spectroscopic monitoring campaign of the transitioning SN IIn/Ibn SN 2021foa from -10.8 d to 150.7 d post V-band maximum. SN 2021foa shows prominent He i lines which are comparable in strength to the Hα line around peak luminosity, placing SN 2021foa between the SN IIn and SN Ibn populations. The spectral comparison with SNe IIn and SNe Ibn shows that it resembles the SN IIn population pre-maximum, becomes intermediate between SNe IIn and SNe Ibn, and at post-maximum matches with the Type IIn SN 1996al. The photometric evolution shows a precursor at -50 d and a light curve shoulder around 17 d, which matches well with the light curve of the Type IIn SN 2016jbu and some other interacting SNe. The peak luminosity and color evolution of SN 2021foa are consistent with those of most SNe IIn and SNe Ibn in our comparison sample. SN 2021foa is a unique case of a SN IIn in which the P-Cygni features in Hα appear at later stages, arising either from a complex geometry of the CSM or from an interaction of the ejecta with a CSM shell/disk (similar to SNe 2009ip and 2015bh). The temporal evolution of the Hα profile favours a disk-like CSM geometry (the CSM having both H and He) with a narrow (500 – 1200 km s^-1) component, an intermediate-width (3000 – 8000 km s^-1) component, and a broad component in absorption early on. The light curve can be well reproduced by hydrodynamical modelling with a two-component CSM structure with different density profiles (ρ ∝ r^-2 – ρ ∝ r^-5) and mass-loss rates (10^-3 – 10^-1 M_⊙ yr^-1), assuming a wind velocity of 1000 km s^-1 and a total CSM mass of 0.18 M_⊙.
The overall light curve and spectral evolution indicate that SN 2021foa most likely originated from an LBV star transitioning to a WR star, with the mass-loss rate increasing in the period from 5 to 0.5 years before the explosion, or from binary interaction.
spectroscopy, photometry, supernovae (SNe), supernova (SN), SN 2021foa
§ INTRODUCTION
Massive stars that eventually undergo core collapse while surrounded by dense circumstellar material (CSM) give rise to Type IIn/Ibn supernovae (SNe) <cit.>. This is signified in spectra by a bright, blue continuum and narrow emission lines at early times. SNe of Type IIn display H emission lines with multi-component profiles showing narrow-width (NW), intermediate-width (IW) and broad-width (BW) components. Narrow (∼ 100 - 500 km s^-1) components arise mostly in the photo-ionized, slow-moving CSM. Intermediate-width emission lines (∼ 1000 km s^-1) arise from either electron scattering of photons in narrower lines or emission from gas shocked by the supernova (SN) ejecta. Some events also show very broad emission or absorption features (∼ 10,000 km s^-1) arising from fast ejecta, typically associated with material ejected in the core-collapse explosion. An interesting extension of the Type IIn phenomenon was illustrated by SN 2006jc <cit.>, which exhibited narrow He i lines instead of NW/IW Hydrogen lines as spectral signatures of strong CSM interaction. Spectra of SN 2006jc had intermediate-width lines similar to those of SNe IIn (2000 – 3000 km s^-1), but seen mainly in He i emission lines – there was only a trace amount of Hydrogen in the spectra. It is therefore referred to as a ‘SN Ibn’ event, instead of a SN IIn. It has important implications for understanding the broader class of SNe IIn and Ibn with CSM interaction because it is also one of the few Type Ibn SNe that was observed to have a non-terminal LBV-like outburst just 2 years prior to explosion <cit.>. Beyond these traditional definitions, the existence of transitional events that change type between Type IIn and Type Ibn over time (e.g. ) suggests a continuum in the CSM properties of these events and consequently in the mass-loss history of their progenitor stars. We now also have a few candidates of an interesting class of events named SNe Icn, which show narrow emission lines of C and O, but are devoid of H and He <cit.>. <cit.> show that the prominent narrow He emission lines appearing in the spectra of SN 2023emq (a Type Icn) around maximum light are typical of a SN Ibn. The family of interacting SNe thus occupies a unique space, probably linked by a continuum of outer envelope properties.
The eruptive mass-loss process expected in these transients has been associated with multiple mechanisms: the energy deposited in the envelope by waves driven by advanced nuclear burning phases <cit.>, the pulsational pair-instability or late-time instabilities <cit.>, an inflation of the progenitor's radius that triggers violent binary interactions like collisions or mergers before the core-collapse event <cit.>, or simply the expansion of the envelope pre-collapse in massive stars <cit.>.
In mass-loss caused by binary interactions, one expects highly asymmetric distributions of CSM (disk-like or bipolar), relevant to the asymmetric line profiles seen in interacting SNe along with high degrees of polarisation <cit.>.
This brief period of enhanced mass loss likely influences the photometric evolution of the supernova (SN) (having single/double/multiple peaks), including duration and luminosity, along with the spectroscopic appearance of emission/absorption line profiles and their evolution (for example see ).
The traditional LBV stars invoked as progenitors of Type IIn SNe are generally bright, blue and variable <cit.>. <cit.> introduced the first members of this class, SNe 2005la and 2011hw, which were interpreted as events in which the progenitor star underwent core collapse while transitioning from the LBV to the WR phase. Theoretical models show that the event rate of appropriate binary mergers may match the rate of SNe with immediate LBV progenitors, and that the progenitor birthrate is ∼ 1 % of the CCSN rate <cit.>. Observationally, the LBV/SN IIn connection inferred from properties of the CSM is reinforced by the detection of luminous LBV-like progenitors of three SNe IIn <cit.>. The standard evolution models
instead suggest that
massive stars undergo only a very brief (10^4 – 10^5 yr) transitional LBV phase, and then spend 0.5-–1 Myr in the core-He burning Wolf-Rayet (WR) phase before exploding as stripped-envelope SNe Ib/Ic <cit.>. This discrepancy between the observational and theoretical numbers most likely arises from inaccurate mass-loss rate estimates, the neglect of binary evolution, and not taking into account the critical nature of the LBV phase <cit.>. This suggests that stellar evolution models are missing essential aspects of the end stages of massive stars.
While SNe IIn explosions make up 8–9 percent of all core-collapse SNe in the Lick Observatory Supernova Search sample <cit.>, Type Ibn events like SN 2006jc represent a substantially smaller fraction. SN 2006jc-like events constitute only 1 percent of the core-collapse sample, which agrees with an independent estimate of the fraction of SN Ibn events by <cit.>. <cit.> updated these fractions for the Zwicky Transient Facility and found that SNe IIn constitute 14.2 percent of H-rich CCSNe, while SNe Ibn constitute 9.2 percent of H-poor CCSNe. Given their rare occurrence, additional examples are valuable to demonstrate the diversity of the subclass. Among this whole sample, there are very few members of this peculiar SN class, like SNe 2005la and 2011hw, which had prominent narrow H and He lines <cit.>.
This motivates us to study another rare case of a transitioning Type IIn/Ibn SN, which belongs to the same category as SNe 2005la and 2011hw.
SN 2021foa was already investigated by (hereafter R22).
Here we present further detailed photometric and spectroscopic observations of SN 2021foa, which exhibits both H and He emission features and shows similarities with both Type IIn SNe and Type Ibn SNe at distinct phases of its evolution.
SN 2021foa-like SNe show similarities in photometric evolution with both Type IIn and Type Ibn SNe <cit.>, but their diverse spectroscopic behaviour needs to be explored further to understand the division.
SN 2021foa (a.k.a. ASASSN-21dg, ATLAS21htp, PS21cae) was discovered by the All Sky Automated Survey for SuperNovae (ASAS-SN; ) on 15 March 2021 (MJD=59288.45) at a Sloan g apparent magnitude of 15.9, with the last non-detection 10 days earlier at g = 17.9 mag <cit.>. The SN was discovered at α=13:17:12.29, δ=-17:15:24.19 (J2000) in the barred spiral galaxy IC 863 at a redshift of z=0.008386 <cit.>. The observational and data reduction details of this SN are presented in Section <ref>. The spectroscopic classification was done on 09-04-2021 using the Nordic Optical Telescope (NOT) <cit.>. The spectroscopic features and the details of the spectral luminosity estimates are provided in Section <ref>. We also show a comparison with other spectra of SNe IIn and SNe Ibn to highlight similarities and dissimilarities in Section <ref>. We discuss the photometric evolution, color, and absolute magnitude of SN 2021foa along with other members of the comparison sample in Section <ref>. The detailed lightcurve and spectral modelling is discussed in Section <ref>. Section <ref> gives an estimate of the mass-loss rates in these SNe and relates them to the progenitor activity. Section <ref> discusses the overall scenario of SN 2021foa and we summarise our results in Section <ref>.
This work presents an extended analysis of SN 2021foa following R22. In their analysis, R22 showed that SN 2021foa belongs to a sub-class of SNe IIn labelled as SNe IId <cit.>, which show a prominent narrow Hα early on with ejecta signatures later on. SN 2021foa, however, showed prominent signatures of He i 5876 Å earlier than other SNe IIn like SNe 2009ip and 2016jbu. R22 suggested that SN 2021foa may be part of a bridge connecting H-rich SN 2009ip-like events and Type Ibn SNe, indicating the possible existence of a continuum in properties, mass-loss history and progenitor types between these two types of peculiar transients. In this paper, we perform a more robust modelling of the lightcurve and spectra and derive the physical parameters associated with the explosion. R22 indicated that SNe IId are probably connected to those objects by having a similar progenitor, an LBV transitioning to the WR phase, but with a different mass-loss history or observed with a different orientation. We indeed notice that our estimated mass-loss rate changes at different phases in the evolution of the SN, and our spectral modelling shows an asymmetric CSM structure giving rise to Hα and He i at different strengths. Thus, our results are in concordance with what has been predicted by R22 and provide a more elaborate description of it.
§ OBSERVATIONS AND DATA REDUCTION
§.§ Optical Observations
We observed SN 2021foa in the UBgVriocRIJHK bands from day -34.3 to ∼150 d post V-band maximum (see Section <ref>). The oc-band ATLAS data were reduced and calibrated using the techniques described in <cit.>. The imaging observations were carried out using the 1.5m Kanata telescope (KT; ) of Hiroshima University, Japan, and the 3.8m Seimei Telescope <cit.> of Kyoto University at Okayama Observatory, Japan. Several bias, dark, and twilight flat frames were obtained during the observing runs along with the science frames. For the initial pre-processing, several steps, such as bias subtraction, flat-fielding correction, and cosmic ray removal, were applied to the raw images of the SN. We used the standard tasks available in the data reduction software IRAF[IRAF stands for Image Reduction and Analysis Facility distributed by the National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.] for carrying out the pre-processing. Multiple frames were taken on some nights and co-added in the respective bands after geometric alignment of the images to increase the signal-to-noise ratio.
Given the proximity of SN 2021foa to its host galaxy, the host-galaxy contamination was removed by performing image subtraction using IRAF. For the templates, we used a set of deep images obtained in 2022, when the SN had faded beyond the detection limit of the telescope.
For the optical photometry of KT data, local comparison star magnitudes were calibrated using the photometric standard stars <cit.> observed on the same nights.
The zero point and the color terms were derived from these comparison stars to calibrate the instrumental magnitudes. We also observed SN 2021foa with the Las Cumbres Observatory (LCO) network of telescopes as part of the Global Supernova Project.
The pre-processing of LCO data was conducted using the BANZAI pipeline <cit.>. The photometry was performed using the [<https://github.com/LCOGT/lcogtsnpipe/>] pipeline <cit.>. The template subtraction was performed using PyZOGY library <cit.> implemented within the pipeline. The UBVgri instrumental magnitudes were obtained from the difference images.
The gri apparent magnitudes of the local comparison stars were taken from the Sloan Digital Sky Survey (SDSS) catalog, and the UBV magnitudes of the local comparison stars were calibrated against standard Landolt fields observed on the same nights as the SN field. Then, instrumental magnitudes of the SN were converted to apparent magnitudes by using the zero point and color terms derived from these comparison stars.
Table <ref> reports the complete photometric lightcurve evolution of SN 2021foa taken from the LCO and the Japan Telescopes.
The near-infrared (NIR) data of SN 2021foa were obtained with the HONIR instrument of KT <cit.>. The sky-background subtraction was done using a template sky image obtained by dithering individual frames at different positions. We performed PSF photometry and calibrated the SN magnitudes using comparison stars in the 2MASS catalog <cit.>. The final NIR magnitudes in the SN field are shown in Table <ref>.
Low-resolution (R ∼ 400-700) optical spectroscopic observations were carried out using the FLOYDS spectrographs mounted on the LCO 2m telescopes. The 1D wavelength and flux calibrated spectra were extracted using the [<https://github.com/LCOGT/floyds_pipeline>] pipeline <cit.>. Spectroscopic observations were also carried out using the KOOLS-IFU <cit.> on the Seimei Telescope. Our spectral coverage spans from -10.8 d to +69.5 d. The spectra with KOOLS-IFU were taken through optical fibers and the VPH-blue grism. The data reduction was performed using the Hydra package in IRAF <cit.> and a reduction software developed for KOOLS-IFU data[<http://www.o.kwasan.kyoto-u.ac.jp/inst/p-kools>]. For each frame, we performed sky subtraction using a sky spectrum created by combining fibers to which the contributions from the object are negligible. Arc lamps of Hg, Ne, and Xe were used for wavelength calibration. Finally, the spectra were corrected for the heliocentric redshift of the host galaxy. The slit loss corrections were done by scaling the spectra with respect to the SN photometry.
The log of spectroscopic observations is reported in Table <ref>.
Along with that, we also obtained high-resolution spectroscopic data with High Dispersion Spectrograph (HDS) mounted on Subaru Telescope on 2021 April 22 (UT). The echelle setup was chosen to cover the wavelength range of 5700–7100 Å with a spectral resolution of ∼50,000 in the Red Cross Disperser mode. We followed standard procedures to reduce the data. The wavelength calibration was performed using Th–Ar lamps. A heliocentric velocity correction was applied to each spectrum. The sky subtraction was performed using data at an off-target position in the target frames. The spectra were not flux calibrated by a standard star but were scaled to photometric fluxes at similar epochs to account for any flux losses.
§.§ Radio Observations
The observing campaign of SN 2021foa was carried out using the Giant Metrewave Radio Telescope (GMRT), Pune, India in Bands 4 and 5 (PI: Poonam Chandra). There was no detection of the source in observations dated 11 January 2022 and 31 March 2022. The rms obtained in Band 5 (1.265 GHz) and Band 4 (0.745 GHz) is 33 and 215 μJy, respectively, and the 3-sigma limits corresponding to the non-detections are 100 μJy and 645 μJy in the two bands. A nearby radio-bright galaxy with extended emission contaminates the SN location, which is why our Band 4 rms is high. We also checked the Very Large Array archive and did not find any detection of this source. The corresponding upper limits on the spectral radio luminosity in Band 5 and Band 4 are 1.455 × 10^26 and 9.385 × 10^26 erg s^-1 Hz^-1, respectively. Figure 18 of <cit.> shows the radio luminosity of a group of core-collapse SNe which includes SNe IIn. We see that the radio luminosities of SNe IIn typically lie in the range 10^27 - 10^29 erg s^-1 Hz^-1. We would therefore expect radio emission at a phase of 584 d (first observation) if SN 2021foa were a typical SN IIn. However, given the fast-declining light curve of SN 2021foa, the radio power is expected to decrease, and thus we only obtain radio upper limits.
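The conversion of the 3σ flux-density limits into the spectral luminosity limits quoted above is straightforward, as in the sketch below, which assumes the adopted distance of 34.89 Mpc:

import numpy as np

D_CM = 34.89 * 3.086e24   # adopted distance in cm

def lnu_limit(flux_limit_ujy):
    # Spectral luminosity limit (erg/s/Hz) from a 3-sigma flux-density limit in microJy
    f_nu = flux_limit_ujy * 1e-6 * 1e-23   # microJy -> erg/s/cm^2/Hz
    return 4.0 * np.pi * D_CM**2 * f_nu

print(f"Band 5 (1.265 GHz): {lnu_limit(100):.3e} erg/s/Hz")   # ~1.5e26
print(f"Band 4 (0.745 GHz): {lnu_limit(645):.3e} erg/s/Hz")   # ~9.4e26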
§.§ Estimation of explosion epoch
<cit.> from the ASASSN team report the discovery of SN 2021foa (RA = 13:17:12.290; DEC = -17:15:24.19) on 2021-03-15 10:48:00
(MJD = 59288.45) at a discovery AB mag of 15.9 using the g filter. A non-detection of the source was reported on 2021-03-05 09:50:24 (MJD = 59278.41) at a limiting magnitude of 17.9 mag (g band). <cit.> report the classification of SN 2021foa from a spectrum taken with ALFOSC mounted on the Nordic Optical Telescope using gr4, which matches with an SN IIn.
To estimate the explosion epoch, we fitted a parabolic function to the rising part of the g-band light curve. The early light curve shape is well reproduced by a parabola. We performed the fit using 20000 iterations of Markov Chain Monte Carlo (MCMC) sampling. Using this method, we find the explosion epoch to be MJD = 59284.8 ± 0.2, 3.6 days prior to the first detection. This estimate is consistent with the non-detection of the source, and we adopt it as the explosion epoch.
However, since it is often difficult to estimate the explosion epoch for the comparison SNe in the literature, we adopt the V-band maximum (MJD 59301.8) as the reference epoch. This value is in agreement with the estimate from .
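A minimal version of the parabolic rise fit described above, using emcee as one possible MCMC sampler (the sampler actually used is not specified in the text) and placeholder photometric points, could look as follows:

import numpy as np
import emcee

# Placeholder rising-branch g-band photometry (MJD, magnitude, magnitude error)
mjd = np.array([59288.5, 59290.1, 59292.3, 59294.6, 59296.2])
mag = np.array([15.90, 15.30, 14.85, 14.55, 14.40])
magerr = np.full_like(mag, 0.05)

flux = 10 ** (-0.4 * mag)                    # relative flux
flux_err = 0.4 * np.log(10) * flux * magerr

def model_flux(t, t0, a):
    # Parabolic rise in flux: f = a*(t - t0)^2 for t > t0, zero before
    dt = np.clip(t - t0, 0.0, None)
    return a * dt ** 2

def log_prob(theta):
    t0, a = theta
    # explosion epoch bounded by the last non-detection and the discovery epoch
    if not (59278.4 < t0 < 59288.5) or a <= 0:
        return -np.inf
    resid = (flux - model_flux(mjd, t0, a)) / flux_err
    return -0.5 * np.sum(resid ** 2)

ndim, nwalkers = 2, 32
p0 = np.column_stack([np.random.uniform(59280.0, 59288.0, nwalkers),
                      np.random.uniform(1e-9, 1e-7, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 20000, progress=False)
t0_chain = sampler.get_chain(discard=5000, flat=True)[:, 0]
print(f"t_exp (MJD) = {np.median(t0_chain):.1f} +/- {np.std(t0_chain):.1f}")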
§.§ Distance & extinction
Adopting H_0 = 73 km s^-1 Mpc^-1, Ω_m = 0.27 and Ω_Λ = 0.73, we obtain a distance of 34.89 ± 2.44 Mpc (μ = 32.71 ± 0.15) for SN 2021foa, corrected for Virgo, Shapley and GA (corresponding to a redshift z=0.00839 [<https://ned.ipac.caltech.edu/byname?objname=IC0863 hconst=73 omegam=0.27 omegav=0.73 wmap=1 corr_z=2>]). This value is the same as that adopted by . The Milky Way extinction along the line of sight of SN 2021foa is A_V = 0.224 mag <cit.>. We see a conspicuous dip at 5892.5 Å from Na i D in the spectra of SN 2021foa taken on 2021-03-20, 2021-03-23 and 2021-03-25. To estimate the extinction due to the host galaxy, we measured the equivalent width of the Na i D line three times in the spectrum combined from these three dates, created to increase the signal-to-noise ratio. Using the formulation by <cit.>, we estimate a host-galaxy extinction of A_V = 0.40 ± 0.19 mag. We multiply this reddening value by 0.86 to be consistent with the recalibration of Milky Way extinction by <cit.>.
Thus, we adopt a total extinction of A_V = 0.57 ± 0.16 mag. We use these values of distance and extinction throughout the paper, which is also consistent with the values quoted by .
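For reference, the host reddening can be recovered from the Na i D equivalent width as in the sketch below; the EW value, the combined D1+D2 relation coefficients and R_V = 3.1 are assumptions made for illustration, while the 0.86 recalibration factor follows the text:

def host_av_from_naid(ew_d1d2, rv=3.1):
    # Host A_V (mag) from the Na i D (D1+D2) equivalent width (Angstrom), using a
    # commonly adopted relation of the form log10 E(B-V) = 1.17*EW - 1.85 (assumed here)
    return rv * 10 ** (1.17 * ew_d1d2 - 1.85)

a_v_mw = 0.224                        # Milky Way extinction along the line of sight
a_v_host = host_av_from_naid(0.8)     # placeholder EW of ~0.8 A gives ~0.4 mag
a_v_total = a_v_mw + 0.86 * a_v_host  # 0.86: recalibration factor applied to the host value
print(f"A_V(total) ~ {a_v_total:.2f} mag")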
§ SPECTROSCOPIC EVOLUTION
We conducted the spectroscopic follow-up of SN 2021foa from -10.8 d to 58.6 d post V-band maximum. The complete spectral evolution of SN 2021foa, which includes our data and those published in , is shown in Figure <ref>. The early-time spectral sequence shows prominent lines of Hα (6563 Å) along with Hβ (4861 Å) and Hγ (4340 Å). From -10.8 d to -3.2 d, the He i (5876 Å) line also starts developing but is not very prominent. The Hβ profile initially shows narrow emission, but develops a narrow P-Cygni feature at around -5.8 d in our spectra. The spectra from show an even earlier appearance of this narrow P-Cygni feature, which can be attributed to the higher resolution of their spectrum.
The Hα also develops this narrow P-Cygni feature in the -3.2 d spectra. The early spectral sequence till -3.2 d does not show prominent lines of Fe ii (4924, 5018, and 5169 Å), which are characteristic of a typical SN ejecta.
Post -3.2 d, the He i (5876 Å) line becomes prominent. The other He i (6678, 7065 Å) lines start developing at this phase. The narrow P-Cygni on top of Hα is clearly seen with the blue wing extending up to -1900 km s^-1.
From 7.2 d to 17.2 d, the Hα and Hβ show very complex profiles. This is also the phase where we see a second shoulder appearing in the light curve of SN 2021foa (c.f.r Section <ref>).
From 17.2 d, we see that the spectrum transforms significantly. This marks the onset of the phase where we see that the flux of He i 5876 Å is comparable to Hα. The higher excitation H-lines like Hβ and Hγ no longer show narrow emission lines. However, narrow emission and the corresponding narrow P-Cygni feature are still significant in the Hα line. We also see a red wing developing in the Hα profile, most likely due to He i 6678 Å. The blue part of the spectrum in this phase is also mostly dominated by the He lines, along with the Fe group of elements <cit.>.
From 26.8 d to 58.6 d, we see that both Hα and He i grow in strength.
This also marks the phase where we see ejecta signatures in the spectral evolution. The [Ca ii] 7291, 7324 Å lines emerge at 7300 Å, which could also be blended with He i 7281 Å. This phase also marks the appearance of the broad Ca II NIR triplet. At late times (>60 d), the intermediate to broad-width H-lines develop a slight blueshift in the observed profiles.
Overall, the spectral behaviour shows a striking similarity with an interacting SNe IIn early on, which later on is overtaken by a SN Ibn like behaviour with He lines. The very late spectra shows prominent ejecta signatures of Ca.
§.§ Line luminosities and line ratios:
We observed well-developed He i features in the spectra, and the line-luminosity of the He i 5876 Å line becomes comparable to the Hα line at about 17.2 d. To study the evolution of the He i 5876 Å line in comparison to the Hα line, we estimate the line luminosities of Hα and He i over the evolution of the SN.
To compare with other well-studied SNe, we selected a group of SNe IIn having diversity in the luminosity distribution and some having precursor detections, similar to SN 2021foa. We also include a set of classical, bright SNe Ibn, along with some that have some residual H-envelope. The sample includes- SNe IIn: 1996al <cit.>, 2009ip <cit.>, 2010mc <cit.>, 2015bh <cit.>, 2016jbu <cit.>, 2018cnf <cit.> and SNe Ibn: 2005la <cit.>, 2006jc <cit.>, 2010al <cit.>, 2011hw <cit.>, 2019uo <cit.>, 2019wep <cit.>.
To study whether SN 2021foa belongs to the SNe IIn or the SNe Ibn regime, we compare the evolution of its Hα and He i line luminosities with the sample. The line luminosities are estimated by integrating the continuum-subtracted line regions of the de-reddened spectra. The top panel of Figure <ref> shows the Hα luminosities of the SNe IIn (blue) and SNe Ibn (pink) samples, with SN 2021foa marked in black. The Hα luminosities of SNe IIn and SNe Ibn are well separated in luminosity. SN 2021foa shows similarity with SN 2011hw in the Hα space, which is an SN Ibn with a significant amount of residual Hydrogen <cit.>. On the contrary, in the He i luminosity scale (middle panel of Figure <ref>), SNe IIn and SNe Ibn do not show a clear distinction. The He i 5876 Å luminosity of SN 2021foa matches with SN 2006jc <cit.> over the evolution. The lower panel of Figure <ref> shows the luminosity ratios of He i 5876 Å to Hα. The distinction between these two classes of objects is the most prominent in this plot; however, we want to mention that there might be some residual Hα luminosity from the host galaxy in the case of SNe Ibn.
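The line luminosities are obtained by integrating the continuum-subtracted flux over the line region and scaling by the adopted distance, as in the sketch below; the wavelength windows and the input spectrum are placeholders:

import numpy as np

D_CM = 34.89 * 3.086e24   # adopted distance in cm

def line_luminosity(wave, flux, line_window, cont_windows):
    # Integrate the continuum-subtracted flux (erg/s/cm^2/A) over the line window (A)
    # and convert the integrated flux into a luminosity (erg/s).
    cx, cy = [], []
    for lo, hi in cont_windows:                       # linear continuum from two line-free windows
        m = (wave >= lo) & (wave <= hi)
        cx.append(wave[m].mean())
        cy.append(flux[m].mean())
    cont = cy[0] + (cy[1] - cy[0]) / (cx[1] - cx[0]) * (wave - cx[0])
    m = (wave >= line_window[0]) & (wave <= line_window[1])
    return 4.0 * np.pi * D_CM**2 * np.trapz(flux[m] - cont[m], wave[m])

# Example (placeholder windows): H-alpha integrated over 6500-6650 A
# lum_ha = line_luminosity(wave, flux, (6500.0, 6650.0), [(6400.0, 6450.0), (6700.0, 6750.0)])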
The line-luminosity ratio of SN 2021foa, from -10.8 d to about 17.2 d, rises from 0.3 to 0.7, placing it in the intermediate region between SNe IIn and SNe Ibn. From 40 d, SN 2021foa shows similarities with SN 1996al, which is a SN showing signs of ejecta signatures and interaction signatures simultaneously for 15 years <cit.>. This also marks the onset of the phase where we see ejecta signatures arising for both SN 1996al and our object SN 2021foa.
To summarize, SN 2021foa shows a clear demarcation and lies intermediate between the SN IIn and SN Ibn populations from -20 d to 20 d, after which it is superseded by its match with SN 1996al at late phases, mostly dominated by the emerging ejecta signatures. We also note that the Hα versus He i luminosities of SNe IIn may fall along a single linear trend, but this is sensitive to the sample size.
Figure <ref> aims to show how much the Hα and He i luminosities contribute to the total optical luminosity (V-band; L_v) at peak for a group of SNe IIn and SNe Ibn. To avoid possible biases in the distribution, we added a more diverse sample of SNe IIn and SNe Ibn to this comparison plot to highlight the position of SN 2021foa in this phase space. The additional SNe IIn used for this comparison plot are: SNe 1998S <cit.>, 2005ip <cit.>, 2006gy <cit.>, 2006tf <cit.>, 2007od <cit.>, 2009kn <cit.>, 2011ht <cit.>, PTF11oxu/2011jc <cit.>, ASASSN-14il <cit.>, 2015da <cit.> and ASASSN-15ua <cit.>. The additional SNe Ibn used for the comparison are: OGLE-2012-SN-006 <cit.>, LSQ12btw <cit.>, LSQ13ccw <cit.>, ASASSN-15ed <cit.>, SNe 2014av <cit.>, 2015U <cit.> and 2015G <cit.>. We added SNe IIn/Ibn with different CSM configurations, long-lived and short-lived, and with and without precursors, to diversify this plot. Also, we chose only those SNe for which V-band observations around maximum are available together with a spectrum of signal-to-noise ratio of at least 10. We plot the ratio of Hα to the peak V-band optical luminosity against the ratio of He i to the peak V-band optical luminosity for SN 2021foa and the comparison sample. However, these SNe in general have asymmetric CSM geometries <cit.>, and the peak V-band line luminosities may be slightly affected by our viewing angle. We see that the SNe IIn and SNe Ibn in our sample are well separated in this phase space, with SN 2021foa (black star) lying in between the two sub-classes around peak luminosity. SN 2021foa shares remarkable similarities with SN 2011hw in this space as well. <cit.> have shown that SN 2011hw is also a SN Ibn with significant residual Hα. The distributions of SNe IIn and SNe Ibn also help in deciphering the fact that it is the Hα contribution to the optical luminosity that demarcates the two classes; the He i distributions of the SN IIn and SN Ibn populations overlap. However, we remark that a statistically larger sample would help in further verifying this.
§.§ Spectral Comparison
In this section, we compare SN 2021foa with a group of SNe IIn and SNe Ibn in the pre-maximum phase, at about 20 d post-maximum and at around 40 d post-maximum to see the changing trend in the evolution of the SN.
Figure <ref> shows the pre-maximum spectral comparison of SN 2021foa with a group of SNe IIn and SNe Ibn. The pre-maximum spectral profile of SN 2021foa looks very similar to all the SNe IIn in our comparison sample. However, the H-lines are more prominent in the SNe IIn compared to SN 2021foa. In contrast, the SNe Ibn show little to no hydrogen in their spectra.
In SN 2011hw, an SN Ibn with significant residual hydrogen, both H and He lines are visible at this phase, while for SN 2021foa He i lines are not seen.
Most SNe Ibn at this phase show flash features, which are absent in our observed profile. Overall, at this phase, SN 2021foa behaves more like a SN IIn.
Figure <ref> shows the spectral comparison at 20 d after V-band maximum. This phase marks the remarkable transition of SN 2021foa, which shows prominent lines of Hα and He i simultaneously. We also see a narrow P-Cygni profile in Hα in SN 2021foa at this phase, similar to SNe 1996al and 2016jbu. The strength of Hα remains lower in SN 2021foa than in other SNe IIn; however, He i shows a strength similar to that of most SNe Ibn in the sample. This separates SN 2021foa from both the SN IIn and SN Ibn populations, again supporting our case that SN 2021foa has strong Hα and He i emission simultaneously, and at similar strengths, at intermediate epochs.
We also show the late-time spectral comparison of SN 2021foa in Figure <ref>.
The spectral evolution of SN 2021foa at this phase matches very well with SNe 2005la and 2011hw, which are SNe Ibn with residual Hydrogen. SNe Ibn at this phase show very strong He i lines, unlike SN 2021foa. Similarly, the He i lines are more prominent than in traditional long-lasting SNe IIn. The Hα and He i line profiles of SN 2021foa also show similarities with SN 1996al at this phase, in addition to the similarities in the line-luminosity ratio noted earlier.
Overall, we conclude that the early time spectral evolution is similar to that of traditional SNe IIn followed by a phase where the spectral evolution is intermediate between those of SNe IIn and SNe Ibn. During late times, the SN shows spectral evolution similar to SNe IIn that show ejecta signatures at late phases or with SNe Ibn having a residual Hydrogen envelope.
§ PHOTOMETRIC EVOLUTION
We present the complete photometric evolution of SN 2021foa from -50 d to about 150 d post maximum, which extends the currently published dataset. Our light curve spans about 30 d more than the time evolution presented in , after which the object went behind the Sun. The rise and peak of the light curve are well sampled in the V-band, and the adopted epoch of V-band maximum (MJD 59301.8) is the same as .
SN 2021foa showed prominent signatures of precursor from -50 d to about -23 d in ATLAS c and o-bands.
The precursor for SN 2021foa lasted for a shorter duration than in SNe 2009ip, 2016jbu and 2018cnf, where the precursor event was observed from about -90 d to -200 d before maximum. This precursor activity can be mainly attributed to the mass-loss eruptions from the progenitor star that occurred months to years before the explosion (for example, SN 2009ip; <cit.> or SN 2006jc; ). This has been interpreted as LBV stars undergoing eruptions, but, since our CSM is a combination of Hydrogen and Helium, this could be attributed to an LBV star transitioning to a WR phase <cit.>. For SN 2021foa, the non-detection of the precursor before -50 d could be attributed to the ATLAS upper limits reaching 20.4 mag (3σ detection), so there is a chance the precursor activity might have lasted longer than the timescale of the detections.
After the precursor, the lightcurve rose to peak in most of the optical bands in 6 d - 9 d which is consistent with SNe Ibn <cit.> and fast rising sample of SNe IIn <cit.>. From the peak to about 14 d, it did not have much evolution and changed by only 0.3 - 0.5 mag in the optical wavebands. At ∼ 14 d, we see a shoulder in the light curve of SN 2021foa. This also marks the phase where we see the He i features developing and appearance of a narrow P-Cygni on top of Hα (c.f.r Section <ref>).
The bump or the shoulder in the lightcurve of SN 2021foa is weaker in the redder bands than in the bluer bands. After this phase, the lightcurve drops sharply at a decline rate of about 3 mag in 50 days.
Post 75 d, the lightcurve shows a flattening lasting from 75 d to 150 d post maximum. The flattening in the light curve of SN 2021foa was also noticed by . The late-time flattening in the redder bands has been attributed to the formation of dust <cit.>, but we do not have any NIR observations to verify this scenario <cit.>. The late-time flattening could also be due to interaction with a uniform-density CSM, as we also see in our lightcurve modelling section (c.f.r Section <ref>). A recent paper by <cit.> shows the existence of dust in the spectral evolution of SN 2021foa, supporting the presence of newly formed dust or pre-existing dust at the ejecta-CSM front of SN 2021foa.
The double peak or hump seen in the lightcurve of SN 2021foa can also be reproduced overall by the grid of models by <cit.> which are based on a CSM of mass ∼ 10 M_⊙ assuming a disk-like geometry of the CSM.
Figure <ref> shows the (B-V)_0 color evolution and the absolute magnitude lightcurve of SN 2021foa together with other members of the SNe IIn and SNe Ibn subclasses. The (B-V)_0 color of SN 2021foa increases by ∼ 0.5 mag from -10 d to about 40 d post maximum. In the phase space of color evolution, we see two sectors of events. One set of SNe (1996al, 2006jc, 2016jbu, 2018cnf; Category 1) in our plot evolves from red to blue from -10 d to about maximum and then flattens in its color evolution, while for the other set (Category 2) the colors rise steadily from red to blue up to 30-40 d post maximum and then drop. SN 2021foa follows the latter trend in the evolution. The SNe of Category 1 are the SNe IIn which had long-term precursor activity. Also, for these events the flattening in the color evolution is seen after the second peak in the SN light curve <cit.>. This is contrary to SN 2021foa, which shows a short-term precursor and becomes redder up to 50 d post maximum. Post 50 d, the color curve becomes bluer again and continues up to 150 d. This also marks the phase where we see a change in the mass-loss rates of the evolution (see subsection <ref>).
We compare the absolute magnitude (r/R-band) lightcurve of SN 2021foa with a group of SNe IIn and SNe Ibn. For the cases where the r-band is not available, we use the Johnson-Cousins R-band. The absolute magnitude lightcurve of SN 2021foa behaves similarly to those of other events having precursor activity.
The precursor lightcurve (Event A) had an absolute magnitude ∼ -14 mag () similar to SNe 2009ip and 2016jbu. The second peak in the lightcurve (Event B) lies fairly intermediate (M_V = -17.8) among the SNe IIn and SNe Ibn (c.f.r Figure 1 of ). <cit.>, through their studies, have found that most SNe IIn with precursor events typically rise to the second peak in around ∼ 17 days. Event B typically rises to a maximum with absolute magnitude ∼ -18 ± 0.5 mag <cit.> followed by a bumpy decline. SN 2021foa also reaches peak magnitude at around ∼ 20 days; however, it has a lower luminosity compared to the sample of <cit.>. Overall, the light curve of SN 2021foa is similar in luminosity to both the SNe IIn and SNe Ibn populations with L ∼ 10^42 - 10^43 erg sec^-1, but the lightcurve resembles more those of SNe IIn given the heterogeneity. The SNe Ibn in our comparison sample have instead
more short-lived and less bumpy lightcurves, in accordance with the sample presented by <cit.>. Also, interestingly, some events with precursor activity like SNe 2009ip and 2018cnf show a light curve shoulder similar to that of SN 2021foa, 20 d post V-band maximum. This is most likely associated with the change in the mass-loss rate happening years before explosion. This is also affected by the opacity effects influencing the lightcurve behaviour.
§ SPECTRAL AND LIGHTCURVE MODELLING
Figure <ref> shows the zoomed-in spectral evolution of the line profiles of Hα, and He i 5876, 6678, 7065 Å. Hα shows a very complex profile throughout the evolution. Initially, Hα shows a narrow line on the top of a broad component. Around -3.2 d, we see a narrow P-Cygni component appearing on top of a broad Hα profile. Thereafter, the Hα profile is complex and highly asymmetric. Post 14.2 d, the red part of Hα starts developing, possibly due to contamination from He i 6678 Å. The He i 5876 Å line also shows evolution, with the FWHM varying between 2500 – 4000 km s^-1. The narrow notch on top of the He i profile is mostly due to Na I D. The He i 5876 Å grows in strength and by 7.2 d its luminosity becomes comparable to that of Hα. The He i 7065 Å line develops later, at 14.2 d, and grows in strength thereafter.
Interestingly, the FWHM of the He i 5876 Å line is similar to that of the Hα line throughout the evolution, which again may indicate a mixed composition of the CSM. The implications of these line profiles with regards to the geometry of the SN is discussed in Sect. <ref>.
To discern the origin of the narrow P-Cygni profile of Hα, we selected the broad emission and absorption components as a continuum and normalized the spectra with respect to it.
Figure <ref> shows that in the continuum normalized spectra, from -10.8 d to 49.5 d, we still see a narrow emission of FWHM ∼ 500 km s^-1.
No absorption features are seen in the spectra before -8.3 d. The narrow P-Cygni absorption starts to appear at -5.8 d with its blue edge reaching up to velocities of ∼ -1000 km s^-1. The narrow P-Cygni becomes really prominent at 7.2 d, with the blue edge extending up to -2000 km s^-1. The FWHM of the P-Cygni profile typically varied between 500 km s^-1 to 800 km s^-1.
To compare the origin of the appearance of narrow P-cygni absorption/emission profiles in interacting SNe, we compared the time evolution of the appearance of the P-Cygni profiles for our group of SNe IIn and SNe Ibn (see Figure <ref>).
For most of the events in our comparison sample (SNe 1996al, 2005la, 2010al, 2016jbu, 2019uo, 2019wep), the narrow P-Cygni lines are present from the beginning of the observed evolution of the SN.
This arises due to the presence of pre-shock CSM present along the line of sight <cit.>.
However, some objects (SNe 2006jc, 2010mc, 2010al, 2011hw) show only narrow emission profiles. In this case, we are not seeing the ionized pre-shock CSM along the line of sight.
In addition to that, SN 2015bh shows a delayed onset of a P-Cygni profile similar to what is seen for SN 2021foa <cit.>, and SN 2009ip shows narrow emission early on, followed by narrow P-Cygni at intermediate times and then narrow emission lines again at a later stage of the evolution. <cit.> suggested that in SN 2009ip, narrow P-Cygni initially arose when the star was in the LBV phase. Just after the explosion, it showed narrow emission lines due to interaction with a dense CSM and thereafter from interacting with another shell moving at -2000 km s^-1. <cit.> also associated the origin of the narrow P-Cygni as due to outbursts a few decades prior to a “hyper eruption" or the final core-collapse. For the case of SN 2021foa, we also see that the velocity of the blue edge extends up to -2000 km s^-1 and we see narrow P-Cygni features developing, which indicate the presence of a shell/disk of CSM along the line of sight. A detailed interpretation of this associated with the geometry of the CSM is also described in Sect. <ref>.
§.§ Hα decomposition
To decipher the origin of the complex Hα structure, we tried to deconvolve the line profiles of this SN.
From the typical explosion circumstances of interacting SNe <cit.>, we expect a narrow component from the unshocked CSM, an intermediate width component either from the e-scattering of the narrow line photons or from the cold, dense shell (CDS), a broad width component from the uninterrupted ejecta which sometimes may show an associated absorption component as well. Therefore, we try to deconstruct Hα profile in terms of these components.
The Hα profile was fitted with combinations of different line profiles in order to reproduce the overall line profile seen in SN 2021foa at different stages of its evolution. We used i) a narrow Gaussian component that is slightly redshifted from the center, ii) a Lorentzian/Gaussian intermediate-width component that is redshifted from the center, and iii) a broad Gaussian component in absorption. At early times, the overall Hα profile is better represented by a Lorentzian intermediate-width component as the emission lines are dominated by electron scattering. On top of that, we have a narrow emission component mostly arising from the pre-shocked CSM.
During the middle phases, the narrow emission is replaced by a narrow P-Cygni profile, and at late times (>30 d), a Gaussian intermediate width component better reproduces the overall line profile as it evolves into a more complex multi-component structure. The choice of the continuum is very critical for performing these fits. The continuum is selected to be far from the line region by at least 50 Å. The continuum is varied between 50 ± few Å to check the consistency of the fits. The spectral evolution and the corresponding fits at representative epochs are shown in <ref>. The parameters of the components obtained from the fitting at all epochs are given in <ref>. The errors listed represent only fitting errors, and other uncertainties like resolution matching, subtraction of host-spectra, and imperfections in wavelength calibration have not been included.
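For reference, the sketch below illustrates this kind of three-component decomposition (narrow Gaussian plus intermediate-width Lorentzian in emission, broad Gaussian in absorption) with scipy; the velocity grid, initial guesses and synthetic flux are placeholders, not the actual data or fitting code used here.

```python
# Minimal sketch of the Halpha decomposition described above (illustrative
# component widths and amplitudes; not the exact values used in this work).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, center, fwhm):
    sigma = fwhm / 2.3548
    return amp * np.exp(-0.5 * ((v - center) / sigma) ** 2)

def lorentzian(v, amp, center, fwhm):
    gamma = fwhm / 2.0
    return amp * gamma**2 / ((v - center) ** 2 + gamma**2)

def halpha_model(v, n_amp, n_cen, n_fwhm,      # narrow emission (pre-shock CSM)
                 iw_amp, iw_cen, iw_fwhm,      # intermediate-width Lorentzian (e-scattering / CDS)
                 b_amp, b_cen, b_fwhm):        # broad Gaussian in absorption (free ejecta)
    return (gaussian(v, n_amp, n_cen, n_fwhm)
            + lorentzian(v, iw_amp, iw_cen, iw_fwhm)
            - gaussian(v, b_amp, b_cen, b_fwhm))

# velocity grid and continuum-normalised flux (synthetic placeholders)
v = np.linspace(-8000, 8000, 400)
flux = halpha_model(v, 1.0, 50, 800, 0.6, 100, 3800, 0.2, -2500, 3500)
flux += np.random.default_rng(0).normal(0, 0.01, v.size)

p0 = [1.0, 0.0, 700, 0.5, 0.0, 4000, 0.1, -2000, 3500]
popt, pcov = curve_fit(halpha_model, v, flux, p0=p0, maxfev=20000)
print("narrow FWHM = %.0f km/s" % popt[2])
print("IW FWHM     = %.0f km/s" % popt[5])
print("broad FWHM  = %.0f km/s" % popt[8])
```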
During -10.8 d to -3.2 d, the blue edge of the line profile extends up to -4500 km s^-1 with an estimated FWHM of 3532 km sec^-1. We want to remark that the fitting FWHM from -10.8 to -3.2 generates a limiting FWHM of 3500 km sec^-1 which is our prior in fitting. An increase in prior shifts the absorption center to the redder wavelength, which is unphysical for a P-Cygni profile. Thus, we want to remark that at pre-maximum times, there is a shallow component in absorption mostly due to the freely expanding SN ejecta. Diverse geometry ranging from a disk-like CSM in SN 2012ab <cit.> or a clumpy CSM like in SN 2005ip <cit.> can facilitate direct line of sight to the freely expanding ejecta. In addition to that, we have an intermediate width Lorentzian profile varying in FWHM between 3600 - 4500 km sec^-1 whose wings are mostly dominated by electron scattering and arising due to the ejecta interacting with dense CSM <cit.>. We also see a narrow emission varying in FWHM between 700 - 800 km sec^-1. This indicates that the photosphere lies in the unshocked CSM at this phase. The UV (and bolometric) light curves peak after this phase, which ionizes the unshocked CSM <cit.>. An emission component was fitted in the early spectrum of SN 2021foa at this phase as the P-Cygni was not resolved.
From 7.2 d to 40.2 d, we notice a significant increase in the Lorentzian emission FWHM of Hα, indicating an enhanced interaction between the SN ejecta and the CSM. The FWHM of the emission component typically varied between 5500 km sec^-1 - 7800 km sec^-1. The absorption component at this phase decreases with a reduction in the systematic blueshift. The 26.2 d spectrum marks the onset of the optically thin regime where we no longer see absorption in the Hα profile. Since we have prominent He i emission in the line profiles of SN 2021foa, post 17.2 d, we do not fit the right bump of the profile of Hα which arises possibly due to He i 6678 Å. The beginning of 7.2 d also marks the prominent strength of narrow P-Cygni features appearing in the spectral evolution of SN 2021foa.
After day 40.2, we see only the intermediate width Lorentzian component in the line profiles of SN 2021foa with FWHM varying between 3900 km sec^-1 - 5200 km sec^-1. However, after 40.2 d, a blueshift can be noticed in the intermediate width component of Hα, which has now centered between ∼ -27 to -377 km s^-1. The late-time blueshift can be explained by dust formation in the post-shock CSM or ejecta (similar to SNe 2005ip, 2010jl, and 2015da; ). The narrow P-Cygni has also now reduced in FWHM between 500 km sec^-1 - 600 km sec^-1.
Figure <ref> shows the deconvolved high resolution Hα region spectrum of SN 2021foa. The narrow component in the spectra of SN 2021foa can be well reproduced by two Lorentzians in absorption and emission with FWHMs of 319 ± 20 km sec^-1 and 782 ± 15 km sec^-1. Along with that, we see an additional IW Hα Lorentzian component of FWHM 900 km sec^-1. We see a very narrow component of FWHM 33 km sec^-1 in the Hα profile which is most likely from the host galaxy contribution. We, thus, see that the narrow component seen in our high resolution spectrum is in concordance with our model fittings validating the narrow emission to narrow P-cygni transition in SN 2021foa.
The detailed physical interpretation corresponding to the line geometries is explained in Section <ref>.
§.§ Radius and Temperature Evolution
As the SN ejecta is expanding, the shock breakout from the surface of the progenitor is
followed by a rapid cooling due to the rapid expansion driven by the shock <cit.>. This would lead to a rapidly increasing photospheric radius and a decrease in the temperature of the SN ejecta within a couple of hours of the explosion. However, this behaviour is modified for extended stars with larger radii. For interacting SNe, this is not always observed, as the ejecta are masked by the CSM <cit.>. Figure <ref> shows the radius and temperature evolution of SN 2021foa. SN 2021foa first shows a decrease in temperature from 14000 K to about 8000 K. We then see a slight rise in the temperature evolution of SN 2021foa from 7300 K to 8300 K between 17 d and 23 d post maximum, during the shoulder in the lightcurve, and then the temperature evolution becomes flatter. This rise in the temperature indicates an injection of energy into the cooling ejecta, perhaps due to interaction with additional CSM or interaction with regions of enhanced CSM density. This is in turn affected by opacity effects in the CSM-ejecta interaction zone.
The black body radius of SN 2021foa increases from 5500 R_⊙ to 15500 R_⊙ at ∼ 10 d past maximum light. From 7 d to ∼ 20 d past max, the radius stays at 16000 R_⊙. The radius evolution then shows a small shoulder similar to the temperature evolution, fluctuating between the values of 16000 R_⊙ to 15000 R_⊙ and then decreases to a value of 2800 R_⊙. The late time radius evolution is flat, as is the temperature evolution and the luminosity as well.
We thus see that the temperature increases at the point in time when we expect an interaction to occur, a few days after maximum light and when the Hα and He i lines start appearing at similar strength in the spectral evolution. A similar flattening behaviour is also noticed in the radius evolution of SN 2021foa at this phase. The radius and temperature evolution are well in synergy with the light-curve evolution.
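As a sanity check on the quoted values, the blackbody radius follows directly from the bolometric luminosity and temperature through the Stefan-Boltzmann law; the sketch below evaluates it for numbers representative of the epochs discussed (the exact fitted values may differ).

```python
# Photospheric (blackbody) radius from L and T via the Stefan-Boltzmann law,
# R = sqrt(L / (4*pi*sigma*T^4)); inputs are representative, not fitted values.
import numpy as np

SIGMA_SB = 5.670374e-5   # erg cm^-2 s^-1 K^-4
R_SUN = 6.957e10         # cm

def blackbody_radius(L_bol, T_eff):
    """Radius (in solar radii) of a sphere radiating L_bol [erg/s] at T_eff [K]."""
    R_cm = np.sqrt(L_bol / (4.0 * np.pi * SIGMA_SB * T_eff**4))
    return R_cm / R_SUN

# around +10 d: L ~ 4e42 erg/s at T ~ 8000 K  ->  ~1.6e4 R_sun
print("%.0f R_sun" % blackbody_radius(4e42, 8000.0))
```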
§.§ Hydrodynamical Modelling
We conducted light-curve modeling of SN 2021foa using the one-dimensional multi-frequency radiation hydrodynamics code <cit.>. Because the code treats radiation hydrodynamics in multiple frequency groups, it can construct pseudo-bolometric light curves that can be directly compared with the observed ones.
Figure <ref> presents our light-curve models and the initial density structure that can reproduce the overall light-curve properties of SN 2021foa assuming a spherically symmetric configuration. We approximate the SN ejecta by using the double power-law density structure (ρ_ejecta∝ r^-1 inside and ρ_ejecta∝ r^-7 outside, e.g., ). The SN ejecta are assumed to expand homologously. The SN ejecta start to interact with CSM at 10^14 cm. This radius is chosen arbitrarily but it is small enough not to affect the overall light-curve properties. The SN ejecta have an explosion energy of 3× 10^51 erg and a mass of 5 M_⊙. We want to remark here that there is a degeneracy in the ejecta mass and energy, and thus the particular set taken here (5 M_⊙) is an assumption.
We found that the CSM with two power-law density components can reproduce the overall light-curve properties of SN 2021foa. The wind velocity is assumed to be 1000 km s^-1. The inner CSM component has ρ_CSM∝ r^-2. The CSM with 10^-1 M_⊙ yr^-1 can account for the early-time luminosity around the light-curve peak. After around 30 days, the luminosity decline becomes faster than expected from interaction with a CSM with ρ_CSM∝ r^-2. We found that the fast luminosity decline can be reproduced when the CSM density structure follows ρ_CSM∝ r^-5 from 3× 10^15 cm. This CSM component has mass of 0.18 M_⊙.
The pseudo-bolometric light-curve of SN 2021foa flattens from around 80 days. In order to reproduce the luminosity flattening, an extended CSM component that is flatter than ρ_CSM∝ r^-5 is required. We found that if the extended CSM with ρ_CSM∝ r^-2 is attached above 1.5× 10^15 cm, we can reproduce the flat phase in the light curve when 10^-3 M_⊙ yr^-1 is assumed. Assuming a wind velocity of v_wind=1000 km s^-1, this kind of the CSM structure can be achieved if the mass-loss rate of the progenitor gradually increase from 10^-3 M_⊙ yr^-1 to 10^-1 M_⊙ yr^-1 from about 5 years to 1 year before the explosion, and the mass-loss rate is kept at 10^-1 M_⊙ yr^-1 in the final year before the explosion.
Also, we want to mention that the model is stable around the peak time; however, the latter part of the lightcurve is not well established due to numerical limitations of the code. Hence, a two-zone CSM for SN 2021foa reproduces the lightcurve well until 80 d.
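For illustration, the sketch below builds the kind of two-component CSM density profile described above from the steady-wind relation ρ = Ṁ/(4π r² v_w); the break radius, slopes and mass-loss rate follow the text, but the normalisation of the actual input model may differ.

```python
# Sketch of the adopted CSM structure: an inner steady-wind (rho ~ r^-2)
# component from rho = Mdot / (4*pi*r^2*v_w), steepening to r^-5 beyond a
# break radius of ~3e15 cm. Values are indicative only.
import numpy as np

M_SUN = 1.989e33      # g
YR = 3.156e7          # s

def csm_density(r, mdot_msun_yr=1e-1, v_wind=1.0e8, r_break=3e15, outer_slope=-5.0):
    """CSM density [g/cm^3] at radius r [cm], for a wind speed v_wind [cm/s]."""
    mdot = mdot_msun_yr * M_SUN / YR
    rho_wind = mdot / (4.0 * np.pi * r**2 * v_wind)          # r^-2 inner wind
    rho_break = mdot / (4.0 * np.pi * r_break**2 * v_wind)
    rho_outer = rho_break * (r / r_break) ** outer_slope      # r^-5 outside the break
    return np.where(r < r_break, rho_wind, rho_outer)

r = np.logspace(14, 16, 5)
for ri, rhoi in zip(r, csm_density(r)):
    print(f"r = {ri:.1e} cm  ->  rho = {rhoi:.2e} g/cm^3")
```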
§ MASS-LOSS RATES
The mass-loss rates are governed by the ejecta-CSM interaction in SNe IIn/Ibn and can be estimated from spectral profiles as well <cit.>. Assuming that the luminosity of the ejecta-CSM interaction is fed by energy at the shock front, the progenitor mass-loss rate Ṁ can be calculated using the relation of <cit.>:
Ṁ = (2L/ϵ) · (v_w/v_SN^3)
where ϵ (<1) is the efficiency of conversion of the shock's kinetic energy into optical radiation (an uncertain quantity), v_w is the velocity of the pre-explosion stellar wind, v_SN is the velocity of the post-shock shell, and L is the bolometric luminosity of the SN. The above equation is derived assuming spherical symmetry and M_ej ≫ M_csm. We see changes in the spectral line profile for SN 2021foa around maximum light, so we estimate the mass-loss rates at both -5.8 d and 7.2 d post maximum.
Since we see narrow emission lines of both Hα and He i at different stages of the evolution, we assume a typical unshocked wind velocity as observed for LBV winds of v_w∼100 km s^-1 and v_w∼1000 km s^-1 for WR stars. The shock velocity is inferred from the intermediate width component. We want to remark, however, that the first phase might be affected by electron scattering, and there may be contamination by the ejecta signatures at these epochs. We take the shock velocities to be 3612 km sec^-1 and 7843 km sec^-1 at -5.8 d and 7.2 d. Using the bolometric luminosity at day -5.8 (L = 3.77 × 10^42 erg sec^-1), wind speeds of 100 (and 1000) km sec^-1 and assuming 50% conversion efficiency (ϵ=0.5), the estimated mass-loss rate is found to be 0.05 (and 0.5) M_⊙ yr^-1. The mass-loss rate at 7.2 d, when the narrow P-Cygni line profile becomes prominent in the spectra, estimated similarly for a wind speed of 100 (and 1000) km sec^-1 and assuming L = 5.48 × 10^42 erg sec^-1, is 0.007 (and 0.07) M_⊙ yr^-1.
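The short sketch below evaluates the relation above with the quoted numbers for the -5.8 d epoch and recovers the ∼0.05 and ∼0.5 M_⊙ yr^-1 values; it is only a numerical check, not the analysis code.

```python
# Evaluate Mdot = (2 L / eps) * v_w / v_SN^3 with the numbers quoted in the text.
M_SUN = 1.989e33   # g
YR = 3.156e7       # s

def mass_loss_rate(L, v_wind_kms, v_shock_kms, eps=0.5):
    """Progenitor mass-loss rate in M_sun/yr; L in erg/s, velocities in km/s."""
    v_w = v_wind_kms * 1e5      # cm/s
    v_sn = v_shock_kms * 1e5    # cm/s
    mdot = (2.0 * L / eps) * v_w / v_sn**3   # g/s
    return mdot * YR / M_SUN

L_m5p8 = 3.77e42  # erg/s at -5.8 d
print(mass_loss_rate(L_m5p8, 100.0, 3612.0))    # ~0.05 M_sun/yr (LBV-like wind)
print(mass_loss_rate(L_m5p8, 1000.0, 3612.0))   # ~0.5  M_sun/yr (WR-like wind)
```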
The estimated values of the mass-loss rate are consistent with typical LBV winds <cit.> and with most SNe IIn, which are of the order of 0.1 M_⊙ yr^-1 as observed in some giant eruptions of LBVs <cit.>. These values are higher than those of normal-luminosity SNe IIn like SN 2005ip (2-4×10^-4 M_⊙ yr^-1; ). They are also much larger than the typical values of RSGs and yellow hypergiants (10^-4-10^-3 M_⊙ yr^-1; ), and of quiescent LBV winds (10^-5-10^-4 M_⊙ yr^-1, ). The obtained mass-loss rate in SN 2021foa indicates the probable progenitor to be an LBV star transitioning to a WR star that underwent an eruptive phase with changing mass-loss rates. The CSM may be the result of interaction with a binary companion as well, which would explain the asymmetry we see in the line profiles of SN 2021foa.
In between the forward shock and the reverse shock, there is a contact discontinuity between the shocked CSM and shocked ejecta where material cools, mixes via Rayleigh-Taylor instabilities, and piles up. This is often
called the cold, dense shell (CDS), and we define the velocity at this front as v_CDS. The narrow component of Hα did not change significantly over the evolution of the SN; however, v_CDS changed at the 7.2 d boundary in the Hα evolution. Considering the two epochs at -5.8 d and 7.2 d, the time preceding the explosion when the progenitor showed some activity can be estimated as t = t_obs × (v_CDS/v_w). Assuming v_w from the narrow component of Hα at these two phases indicates that the progenitor underwent a change in eruptive activity at two stages corresponding to 0.5 – 1 year before exploding as an SN. This is much lower than for the traditional SNe IIn and super-luminous SNe IIn like 2006gy, 2010jl and 2017hcc <cit.>, which showed eruptive activity 6-12 years before the explosion. On the contrary, this is quite similar to SN 2019uo <cit.>, resulting in a shorter-lived light curve as in SNe Ibn. This has also been observed previously for the case of SN 2020oi, a SN Ic showing radio interaction signatures, which showed progenitor activity/expelled shells 1 yr before the explosion <cit.>.
§ DISCUSSION:
We have presented the photometric and spectroscopic analysis of SN 2021foa, and hereafter, we discuss the major properties of the SN and summarise our results.
In this paper, we present the unique case of SN 2021foa, where we see line luminosity ratios intermediate between SNe IIn and SNe Ibn. We also come to the conclusion that it is the Hα line luminosity that better separates the two populations, while the two classes cannot be segregated based on He i 5876 Å line luminosities. SN 2021foa exhibits the classic evolution of line profile shapes that is common in strongly interacting SNe IIn, which transition from symmetric Lorentzian profiles at early times (before and during peak) to irregular, broader, and asymmetric shapes at late times well after peak. The phenomenon is understood as a shift from narrow CSM lines broadened by electron scattering to emission lines formed in the post-shock cold dense shell (CDS) <cit.>. In addition, for SN 2021foa, we also see asymmetric He i lines which broaden over time from 2000 km sec^-1 to 5000 km sec^-1 and show line luminosities comparable to Hα around the lightcurve peak.
The spectral evolution also shows narrow and intermediate-width H lines at pre-maximum times. Around -3.8 d, we see narrow P-Cygni appearing in Hα. The narrow P-Cygni in Hα appears later than the narrow P-Cygni in Hβ. Additionally, a recent study by <cit.> explores the relations between the line shapes and CSM structure using Monte Carlo radiative transfer codes. They find that a narrow line exhibits a P-Cygni profile only when an eruptive mass-loss event forms the CSM. The CSM structure from a steady mass loss will have a negative velocity gradient after the SN event due to radiative acceleration. Therefore, an Hα photon emitted at the deeper CSM layers, traveling outwards, will never be able to undergo another Hα transition. However, if there is an eruptive mass-loss that comes into play after a steady wind scenario, then a positive velocity gradient would give rise to narrow P-Cygni lines formed along the line of sight. We see from subsection <ref> and Section <ref> that the mass-loss rate typically varies from 10^-3 to 10^-1 M_⊙ yr^-1 during this phase. Also, the light curve modelling helps us to infer that the density-radius variation occurs from ρ ∝ r^-2 to ρ ∝ r^-5. This supports our scenario that the change in the mass-loss rate and a possible eruption would have given rise to the P-Cygni profile along the line of sight. The changing mass-loss rates and the eruptive activity seen 0.5 – 5 years before the explosion also govern the dual-peaked light curve and the appearance of the P-Cygni profile seen in the light curve and spectral evolution of SN 2021foa. <cit.> checked the precursor activity of SN 2021foa and detected it only up to -50 days pre-explosion. This implies that even though there is activity in the progenitor driving the peak luminosities at 0 and 17 d post maximum, it was not significant enough to be detected as a precursor in the light curve a year before the explosion. <cit.> have explained the origin of the double peak as due to interaction with a shell at a later point in time, which can also be a possible case for SN 2021foa. But we find that a disk-like geometry better explains our overall line profiles, and we discuss below in detail the physical scenario (see subsection <ref>) governing the CSM and the explosion geometry.
§.§ Physical scenario and asymmetry:
Figure <ref> describes a physical scenario for SN 2021foa, although not unique <cit.>, which explains all the observable of SN 2021foa through various stages of its evolution. In this scenario, the ejecta of the SN interacts with a disk-like CSM structure. The viewing angle of the observer is located at an angle from the horizontal plane of the disk.
Figure <ref>(a) describes a scenario where the photosphere lies in the pre-shock CSM surrounding the interaction region. The early profiles ≤ -4 d are characterised by a narrow line emission. These early profiles have broad wings that follow a symmetric Lorentzian shape, which is most likely due to incoherent electron scattering of narrow emission from pre-shock gas <cit.>. At these particular epochs, pre-SN mass-loss speeds are mostly determined from the width of the line profiles, while the line wings are caused by thermal broadening and do not reflect the expansion velocities. So, as shown in Panel (a) of our cartoon diagram, at ≤ -4 d the continuum photosphere is in the CSM ahead of the shock, which will hide the emission from the SN ejecta and the CDS. However, we mentioned in Section <ref> that we do see ejecta signatures as well, in the form of broad absorption at FWHM ∼ 3500 km sec^-1. This indicates the geometry of the CSM is asymmetric and the viewing angle is not inclined to a plane but rather at an angle. This is also the phase where we see Lorentzian profiles in the spectral evolution along with absorption components, which further indicates an asymmetric CSM configuration. So, up to -3.2 d, a disk-like CSM configuration with the observer placed at a viewing angle justifies our case, where we see the freely expanding ejecta, narrow emission lines from the pre-shocked material, and intermediate-width components due to the interaction of the SN ejecta with the disk-like CSM.
Figure <ref>(b) marks the onset of narrow P-Cygni features along with IW Lorentzian profiles as seen in the spectral profiles. As the photosphere recedes, the interaction zone of the photosphere and the disk comes along our line of sight, giving rise to a narrow P-Cygni profile. This phase also marks the onset of the phase where we see strong IW/BW interaction signatures with FWHM ∼ 3000 – 5000 km s^-1, which most likely arises from the combination of both post-shock gas in the CDS and freely expanding ejecta. The absorption component in Hα vanishes at ∼ 26 d. The transition occurs for a phase where the ejecta becomes optically thin and the essential properties of this transition are reproduced in the radiative transfer simulation of <cit.>. These simulations also indicate that the early time data is driven by electron scattering in the CSM and the late time data is driven by emission from post-shock gas at later times. An interesting aspect of the broad component that we see is given by the fact that even though we see it in the absorption profiles of spectral evolution, we cannot separate the contribution of CDS or ejecta in the emission profiles. Seeing strong emission lines for ejecta in SNe IIn are not common as continuum optical depths often hide emission from underlying ejecta or the IW component due to interaction with a dense CSM dominates the line profiles. Previous examples of SN 2010jl <cit.> and SN 2006tf <cit.> have only showed IW Hα emission and weak He i absorption at fast blueshifts. We also see the case of SN 2017hcc where we see freely expanding SN ejecta in emission which mostly arises due to viewing angle from polar regions <cit.>. Thus, similar explosion/CSM geometry have been proposed for SNe 2010jl, 2006tf, 2015da, and 2017hcc <cit.> but with different viewing angles. <cit.> have generated the lightcurves of SNe IIn using a grid of CSM masses, different viewing angles and assuming a disk-like geometry. We see that disk-like geometry fairly reproduces the double-peaked lightcurve (seen for SN 2021foa) with a CSM mass of 10 M_⊙ and a viewing angle between 30 – 60 degrees.
The Hα profile of SN 2021foa shows a deficit in the flux in the red side of the wing and a systematic blueshift in the line centers from -27 km s^-1 to -377 km s^-1. This deficit is visible as an overall blueshift of the line after day 40, and it increases with time.
Blueshifts can arise due to various reasons in a SN IIn/Ibn. Radiative acceleration, asymmetric CSM or lop-sided ejecta, and obscuration of the receding material by the continuum photosphere can all give rise to such blueshifts.
Blueshifted line profiles that become more prominent with time can also arise from the formation or re-growth of dust grains in either the post-shock zone of the CDS or in the unshocked SN ejecta. Both ejecta and CSM components have differing relative contributions to the total dust at different times in the evolution of SNe IIn/Ibn. The optical to IR analysis of SN 2010jl at early and late times demonstrated dust formation in the post-shock CDS and continual grain growth in the SN ejecta <cit.>.
Investigating the case of SN 2021foa, we see that the blueshift increases with time. In the case of a blueshift caused by the radiative acceleration of the CSM, the blueshift should decrease as the luminosity drops, and we expect no significant wavelength dependence for this case. Additionally, in the radiative acceleration scenario, the original narrow line photon source should be blueshifted as well, which is not true for SN 2021foa <cit.>. For obscuration by the continuum photosphere, the blueshift should be strongest at early times and decrease later on as the continuum optical depth drops. For a lopsided CSM structure as well, the blueshift should be present from early times, which is not consistent with the observations of SN 2021foa. So, we propose that the blueshift most likely arises due to newly formed dust grains in the ejecta or post-shock CSM. We do see a late-time flattening in the optical light curves of SN 2021foa. However, we should remark that other signatures of dust formation, which involve an increase in NIR flux, are not seen in the case of SN 2021foa due to the lack of observations and also because it went behind the Sun. Nonetheless, dust formation is very common in SNe IIn/Ibn such as 2006tf <cit.>, 2006jc <cit.>, 2010jl <cit.> and is most likely also the case for SN 2021foa, as shown in the recent paper by <cit.>.
<cit.> investigated the case of SN 2014C, a SN Ib, which showed narrow emission lines of Hydrogen about 127 d post-explosion. They proposed a torus-like geometry, which is also consistent with <cit.>. Our proposed disk-like scenario supports a case similar to SN 2014C as well. While some asymmetries may be produced by single stars, they proposed a binary evolution <cit.> that led to a common envelope phase responsible for the formation of the Hydrogen-rich CSM. The likely distribution of matter in a system that has undergone binary evolution with the ejection of a common envelope is that the Hydrogen-rich envelope material will be substantially confined to the equatorial plane, with a He-rich star as the progenitor. SN 2021foa could also be in such a scenario, where the progenitor would be a star that stripped both H and He into the CSM and blew a fast wind that interacted with the main-sequence secondary, which facilitated the past expulsion of the progenitor's outer H and He layers in a common envelope interaction. The secondary blows a slower Hydrogen-rich wind that would be entrained by the fast Hydrogen + Helium wind of the primary, thus forming a torus/disk-like structure in the equatorial plane.
An interesting point and open question regarding the behaviour of the SN is that, if the intermediate/narrow lines are from the CSM, what drives the SNe IIn to SNe Ibn transition is indeed hard to explain. If it simply reflected the CSM composition, it would require that the fraction of He in the mass-loss wind/ejecta decreases toward the SN, the opposite of the standard picture. At the same time, it might be difficult to explain this transition in terms of ionization, as the SN luminosity decreases. So, even though the case of a single star exploding while transitioning from an LBV to a WR phase has been made in past studies, the He envelope in the CSM could be due to a star in a binary system giving rise to the narrow emission lines. A detailed theoretical interpretation is essential to describe the plausible scenario giving rise to these kinds of SNe.
§ SUMMARY:
* SN 2021foa is a unique member in the transitional SN IIn to SN Ibn subclass with Hα to He i line ratios intermediate between those of SNe IIn and SNe Ibn around maximum light.
* Early time spectral comparison shows that SN 2021foa is similar to SNe in the Type IIn class while the mid and late-time spectral evolution indicates its similarity with SNe Ibn. At ∼ 7 – 14 d, we also see that the He i line luminosity is of comparable strength to the Hα luminosity, justifying the transitional nature of the SN.
* SN 2021foa shows a dramatic lightcurve evolution with a precursor activity (M_ v ∼ -14 mag) and reaching a secondary maximum at -17.8 mag, a shoulder at about ∼ 17 d and a late-time flattening. The SN lies in the middle of the luminosity distribution of SNe IIn and SNe Ibn. Even though the light curve shows a short-duration precursor, the colors are more similar to SNe without precursor activity.
* The Hα evolution is complex having a NW (500 – 1000 km s^-1), IW component in emission (2000 – 4000 km s^-1) and a BW component in absorption at ∼ 3500 km s^-1. We see a narrow P-Cygni profile in the Hα line arising after the line is seen in emission, which could be due to either precursor activity, viewing angle and geometrical effects of the CSM, or interaction with another shell of CSM.
* We propose that the shoulder in the lightcurve arises due to the geometry of the CSM and the late-time flattening is most-likely arising due to either formation of new dust or pre-existing dust as seen from the systematic blueshift in the Hα profile at late phases which is confirmed by the presence of thermal dust by <cit.>.
* Hydrodynamical lightcurve modelling indicates that the lightcurve until 80 d can be reproduced by a two-component CSM with ρ_CSM∝ r^-2 – ρ_CSM∝ r^-5 starting from 3× 10^15 cm, with a CSM mass of 0.18 M_⊙ and a mass-loss rate of 10^-1 M_⊙ yr^-1. If the extended CSM with ρ_CSM∝ r^-2 is attached above 1.5× 10^15 cm, then we can also reproduce the late-time flattening of the lightcurve.
* Combining spectral and lightcurve modelling, the mass-loss rates would have increased from 10^-3 M_⊙ yr^-1 to 10^-1 M_⊙ yr^-1 from 5 years to 1 year before the explosion, and also varied between 0.05 – 0.5 M_⊙ yr^-1 with mass expelled at both 0.5 year to 1 year before the explosion assuming a wind velocity of 1000 km sec^-1. This changing mass-loss rate is most probably an indicator of the precursor activity and also explains the shoulder appearing in the light curve of SN 2021foa.
* We see that a disk-like geometry, as for SN 2009ip, best reproduces our observed profiles, but the CSM most likely has a mixed composition, showing both Hα and He i.
* The composition of the CSM, the line ratios, spectral and temporal evolution, mass-loss rates all points towards a scenario where SN 2021foa most-likely arose from the explosion of an LBV star which was transitioning to its WR phase. However, we cannot completely rule out the possibility of a binary scenario as proposed for the case of SN 2014C <cit.>.
§ ACKNOWLEDGEMENTS
This work makes used of data from the Las Cumbres Observatory global telescope network. The LCO group is supported by NSF grants AST-1911151 and AST-1911225. C.P. acknowledges support from ADAP program grant No. 80NSSC24K0180 and from NSF grant AST-2206657.
§ DATA AVAILABILITY
§ PHOTOMETRY
http://arxiv.org/abs/2409.03456v1 | 20240905120902 | LM-Gaussian: Boost Sparse-view 3D Gaussian Splatting with Large Model Priors | [
"Hanyang Yu",
"Xiaoxiao Long",
"Ping Tan"
] | cs.CV | [
"cs.CV"
] |
LM-Gaussian: Boost Sparse-view 3D Gaussian Splatting with Large Model Priors
Hanyang Yu^1,
Xiaoxiao Long^1,
and Ping Tan^1
^ Xiaoxiao Long is the corresponding author ([email protected]).
^1 The Hong Kong University of Science and Technology
Submitted for review on Sep. 4th, 2024.
September 9, 2024
=============================================================================================================================================================================================================================================
§ ABSTRACT
We aim to address sparse-view reconstruction of a 3D scene by leveraging priors from large-scale vision models. While recent advancements such as 3D Gaussian Splatting (3DGS) have demonstrated remarkable successes in 3D reconstruction, these methods typically necessitate hundreds of input images that densely capture the underlying scene, making them time-consuming and impractical for real-world applications. However, sparse-view reconstruction is inherently ill-posed and under-constrained, often resulting in inferior and incomplete outcomes. This is due to issues such as failed initialization, overfitting on input images, and a lack of details.
To mitigate these challenges, we introduce LM-Gaussian, a method capable of generating high-quality reconstructions from a limited number of images. Specifically, we propose a robust initialization module that leverages stereo priors to aid in the recovery of camera poses and the reliable point clouds. Additionally, a diffusion-based refinement is iteratively applied to incorporate image diffusion priors into the Gaussian optimization process to preserve intricate scene details. Finally, we utilize video diffusion priors to further enhance the rendered images for realistic visual effects.
Overall, our approach significantly reduces the data acquisition requirements compared to previous 3DGS methods. We validate the effectiveness of our framework through experiments on various public datasets, demonstrating its potential for high-quality 360-degree scene reconstruction. Visual results are on our website https://runningneverstop.github.io/lm-gaussian.github.io/ (lm-gaussian.github.io).
sparse-view, reconstruction, gaussian splatting, large models.
§ INTRODUCTION
3D scene reconstruction and novel view synthesis from sparse-view images present significant challenges in the field of computer vision. Recent advancements in neural radiance fields (NeRF) <cit.> and 3D Gaussian splatting (3DGS) <cit.> have made notable progress in synthesizing novel views, but they typically require hundreds of images to reconstruct a scene. Capturing such a dense set of images is often impractical, limiting the practicality of these technologies. Although efforts have been made to address sparse-view settings, existing works are still limited to simple forward-facing scenarios, such as the LLFF dataset <cit.>, which involve small-angle rotations and simple orientations. For large-scale 360-degree scenes, the problems of being ill-posed and under-constrained hinder the employment of these methods. In this work, we present a new method that is capable of producing high-quality reconstruction from sparse input images, demonstrating promising results even in challenging 360-degree scenes.
There are three main obstacles that prevent 3D Gaussian splatting from achieving high-quality 3D reconstruction with sparse-view images. 1) Failed initialization: 3DGS heavily relies on pre-calculated camera poses and point clouds for initializing Gaussian spheres. However, traditional Structure-from-Motion (SfM) techniques <cit.> cannot successfully handle the sparse-view setting due to insufficient overlap among the input images, therefore yielding inaccurate camera poses and unreliable point clouds for 3DGS initialization. 2) Overfitting on input images: Lacking sufficient images to provide constraints, 3DGS tends to be overfitted on the sparse input images and therefore produces novel synthesized views with severe artifacts. 3) Lack of details: Given limited multi-view constraints and geometric cues, 3DGS always fails to recover the details of the captured 3D scene and the unobserved regions, which significantly degrades the final reconstruction quality.
To tackle these challenges, we introduce LM-Gaussian, a novel method capable of producing high-quality reconstructions from sparse input images by incorporating large model priors. The key idea is leveraging the power of various large model priors to boost the reconstruction of 3D gaussian splatting with three primary objectives: 1) Robust initialization; 2) Overfitting prevention; 3) Detail preservation.
For robust initialization, instead of relying on traditional SfM methods <cit.>, we propose a novel initialization module utilizing stereo priors from DUSt3R <cit.>. DUSt3R is a comprehensive stereo model that takes pairs of images as input and directly generates corresponding 3D point clouds. Through a global optimization process, it derives camera poses from the input images and establishes a globally registered point cloud.
However, the global point cloud often exhibits artifacts and floaters in background regions due to the inherent bias of DUSt3R towards foreground regions.
To mitigate this issue, we introduce a Background-Aware Depth-guided Initialization module. Initially, we use depth priors to refine the point clouds produced by DUSt3R, particularly in the background areas of the scene. Additionally, we employ iterative filtering operations to eliminate unreliable 3D points by conducting geometric consistency checks and confidence-based evaluations. This approach ensures the generation of a clean and reliable 3D point cloud for initializing 3D Gaussian splatting.
Once a robust initialization is obtained, a photometric loss is commonly used to optimize the 3D Gaussian spheres.
However, in the sparse-view setting, solely using the photometric loss will make 3DGS overfit on the input images. To address this issue, we introduce multiple geometric constraints to regularize the optimization of 3DGS effectively.
Firstly, a multi-scale depth regularization term is incorporated to encourage 3DGS to capture both local and global geometric structures of depth priors. Secondly, a cosine-constrained normal regularization term is introduced to ensure that the geometric variations of 3DGS to be aligned with normal priors. Lastly, a weighted virtual-view regularization term is applied to enhance the resilience of 3DGS to unseen view directions.
To preserve intricate scene details, we introduce Iterative Gaussian Refinement Module, which leverages diffusion priors to recover high-frequency details.
We leverage a diffusion-based Gaussian repair model to restore the images rendered from 3DGS, aiming to enhance image details with good visual effects. The enhanced images are used as additional pseudo ground-truth to optimize 3DGS. Such a refinement operation is iteratively employed in 3DGS optimization, which gradually inject the image diffusion priors into 3DGS for detail enhancement.
Specifically, the Gaussian repair model is built on ControlNet with injected LoRA layers, where the sparse input images are used to finetune the LoRA layers so that the repair model works well on the specific scene.
By combining the strengths of different large model priors, LM-Gaussian can synthesize new views with competitive quality and superior details compared to state-of-the-art methods in sparse-view settings, particularly in 360-degree scenes. The contributions of our method can be summarized as follows:
* We propose a new method capable of generating high-quality novel views in a sparse-view setting with large model priors. Our method surpasses recent works in sparse-view settings, especially in large-scale 360-degree scenes.
* We introduce a Background-Aware Depth-guided Initialization Module, capable of simultaneously reconstructing high-quality dense point clouds and camera poses for initialization.
* We introduce a Multi-modal Regularized Gaussian Reconstruction Module that leverages regularization techniques from various domains to avoid overfitting issues.
* We present an Iterative Gaussian Refinement Module which uses diffusion priors to recover scene details and achieve high-quality novel view synthesis results.
§ RELATED WORK
3D Representations for Novel-view synthesis
Novel view synthesis (NVS) involves rendering unseen viewpoints of a scene from a given set of images. One popular approach is Neural Radiance Fields (NeRF), which uses a Multilayer Perceptron (MLP) to represent 3D scenes and renders via volume rendering. Several works have aimed to enhance NeRF's performance by addressing aspects such as speed <cit.>, quality <cit.>, and adapting it to novel tasks <cit.>. While NeRF relies on a neural network to represent the radiance field, 3D Gaussian Splatting (3DGS)<cit.> stands out by using an ensemble of anisotropic 3D Gaussians to represent the scene and employs differentiable splatting for rendering. This approach has shown remarkable success in efficiently and accurately reconstructing complex real-world scenes with superior quality. Recent works have further extended the capabilities of 3DGS to perform various downstream tasks, including text-to-3D generation<cit.>, dynamic scene representation <cit.>, editing <cit.>, compression <cit.>, SLAM <cit.>, animating humans <cit.>, and other novel tasks <cit.>.
Sparse View Scene Reconstruction and Synthesis
Sparse view reconstruction aims to reconstruct a scene using a limited number of input views. Existing methods can be classified into regularization techniques and generalizable reconstruction priors. Several works <cit.> address this challenge by employing strategies like depth regularization and frequency regularization. Some approaches <cit.> utilize pre-trained models as a regularization mechanism to guide training based on established knowledge. Another research direction <cit.> focuses on training priors to synthesize novel views across diverse scenes. Building on the achievements of 3D Gaussian Splatting, recent methods such as SparseGS <cit.>, pixelSplat <cit.>, and MVSplat <cit.> leverage stereo view interpolation to facilitate training. GeNVS <cit.> and latentSplat <cit.> utilize rendering view-conditioned feature fields followed by 2D generative decoding to generate novel views. While these methods have demonstrated improved results in new view synthesis, they still encounter challenges in producing clear views under high uncertainty and may encounter difficulties in handling 360-degree scenes.
Unposed Scene Reconstruction
The methods mentioned above all rely on known camera poses, and Structure from Motion (SfM) algorithms often struggle to predict camera poses and point clouds with sparse inputs, mainly due to a lack of image correspondences. Therefore, removing camera parameter preprocessing is another active line of research. For instance, iNeRF <cit.> demonstrates that poses for new view images can be estimated using a reconstructed NeRF model. NeRFmm <cit.> concurrently optimizes camera intrinsics, extrinsics, and NeRF training. BARF <cit.> introduces a coarse-to-fine positional encoding strategy for joint optimization of camera poses and NeRF. GARF <cit.> illustrates that utilizing Gaussian-MLPs simplifies and enhances the accuracy of joint pose and scene optimization. Recent works like Nope-NeRF <cit.>, LocalRF <cit.>, and CF-3DGS <cit.> leverage depth information to constrain NeRF or 3DGS optimization. While demonstrating promising outcomes on forward-facing datasets such as LLFF <cit.>, these methods encounter challenges when dealing with complex camera trajectories involving significant camera motion, such as 360-degree large-scale scenes.
§ PRELIMINARY
§.§ 3D Gaussian Splatting
3D Gaussian Splatting (3D-GS) represents a 3D scene with a set of 3D Gaussians. Specifically, a Gaussian primitive can be defined by a center μ∈ℝ^3, a scaling factor s ∈ℝ^3, and a rotation quaternion q ∈ℝ^4. Each 3D Gaussian is characterized by:
G(x)=1/(2π)^3/2|Σ|^1/2e^-1/2(x - μ)^TΣ^-1(x - μ)
where the covariance matrix Σ can be derived from the scale s and rotation q.
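In typical 3DGS implementations the covariance is assembled as Σ = R S Sᵀ Rᵀ, with R the rotation matrix corresponding to the quaternion q and S the diagonal scale matrix built from s; a minimal sketch with illustrative values is given below.

```python
# Assemble a Gaussian covariance as Sigma = R S S^T R^T from a unit quaternion
# q = (w, x, y, z) and a per-axis scale s, as is standard in 3DGS implementations.
import numpy as np

def quat_to_rotmat(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(scale, quat):
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    S = np.diag(scale)
    M = R @ S
    return M @ M.T   # symmetric positive semi-definite by construction

Sigma = covariance([0.05, 0.02, 0.01], [0.92, 0.1, 0.3, 0.2])
print(Sigma)
```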
To render an image from a specified viewpoint, the color of each pixel p is computed by blending K ordered Gaussians {G_i | i=1, ⋯ ,K} that overlap with p using the following blending equation:
c(p)=∑_i=1^K c_i α_i ∏_j=1^i-1(1-α_j),
where α_i is determined by evaluating a projected 2D Gaussian from G_i at p multiplied by a learned opacity of G_i, and c_i represents the learnable color of G_i. The Gaussians covering p are sorted based on their depths under the current viewpoint. Leveraging differentiable rendering techniques, all attributes of the Gaussians can be optimized end-to-end through training for view reconstruction.
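A minimal sketch of this front-to-back compositing for a single pixel is given below; the early-termination threshold is an implementation detail and only illustrative.

```python
# Front-to-back alpha compositing of the K depth-sorted Gaussians covering a
# pixel, following the blending equation above.
import numpy as np

def composite_pixel(colors, alphas):
    """colors: (K, 3) RGB of the sorted Gaussians; alphas: (K,) opacities in [0, 1]."""
    c = np.zeros(3)
    transmittance = 1.0
    for c_i, a_i in zip(colors, alphas):
        c += transmittance * a_i * np.asarray(c_i, dtype=float)
        transmittance *= (1.0 - a_i)
        if transmittance < 1e-4:   # early termination, as in practical rasterizers
            break
    return c

print(composite_pixel([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.6, 0.5, 0.9]))
```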
Rasterizing Depth for Gaussians
Following the depth calculation approach introduced in RaDe-GS <cit.>, the center μ_i of a Gaussian G_i is initially projected into the camera coordinate system as μ_i^'. Upon obtaining the center value (x_i', y_i', z_i') for each Gaussian, the depth d at each pixel (x, y) is computed as:
d = z_i' + 𝐩^⊤ (Δ x, Δ y)^⊤,
μ_i^' = (x_i', y_i', z_i')^⊤ = 𝐖μ_i + 𝐭,
where z_i' represents the depth of the Gaussian center, Δ x = x_i' - x and Δ y = y_i' - y denote the relative pixel positions. The vector 𝐩 is determined by the Gaussian parameters [𝐖, 𝐭] ∈ℝ^3 × 4.
Rasterizing Normal for Gaussians
In accordance with RaDe-GS, the normal direction of the projected Gaussian is aligned with the plane's normal. To compute the normal map, we transform the normal vector from the 'rayspace' to the 'camera space' as follows:
𝐧 = -𝐉^⊤ ( (z_i'/z_i) 𝐩, 1 )^⊤,
where 𝐉 represents the local affine matrix, and the vector 𝐩 has been defined earlier.
§.§ Diffusion model
In recent years, diffusion models have emerged as the state-of-the-art approach for image synthesis. These models are characterized by a predefined forward noising process {𝐳_𝐭}_t=1^T that progressively corrupts the data by introducing random noise ϵ.
z_t = √(α̅_t)z_0 + √(1-α̅_t)ϵ, ϵ∼𝐍(0, 𝐈),
where t ∈ [1, T] denotes the time step and α̅_t = α_1·α_2⋯α_t is the cumulative product of the noise schedule {α_i}, forming a decreasing sequence. These models can generate samples from the underlying data distribution given pure noise by training a neural network to learn a reversed denoising process. Having learned from hundreds of millions of images from the internet, diffusion priors exhibit a remarkable capacity to recover real-world details.
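A minimal sketch of this forward noising step is given below; the linear β schedule and tensor shapes are illustrative and not tied to any particular diffusion model used in this work.

```python
# Forward (noising) step of the equation above: z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps,
# with abar_t the cumulative product of the per-step alpha_t schedule.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 2e-2, T)          # simple linear schedule (illustrative)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)              # \bar{alpha}_t

def q_sample(z0, t, rng=np.random.default_rng(0)):
    """Draw z_t ~ q(z_t | z_0) for a 0-indexed timestep t."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

z0 = np.zeros((8, 8, 3))                    # stand-in for a clean image or latent
z_t, eps = q_sample(z0, t=500)
print(z_t.std())                            # noise level grows with t
```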
§ METHOD
§.§ Overview
In this paper, we introduce a new method called LM-Gaussian, which aims to generate high-quality novel views of 360-degree scenes using a limited number of input images. Our approach integrates multiple large model priors and is composed of four key modules:
1) Background-Aware Depth-guided Initialization: This module extends DUSt3R for camera pose estimation and detailed 3D point cloud creation. By integrating depth priors and point cleaning, we achieve a high-quality point cloud for Gaussian initialization (see Section <ref>).
2) Multi-Modal Regularized Gaussian Reconstruction: In addition to the photometric loss used in 3DGS, we incorporate depth, normal, and virtual-view constraints to regularize the optimization process (see Section <ref>).
3) Iterative Gaussian Refinement: We use image diffusion priors to enhance rendered images from 3DGS. These improved images further refine 3DGS optimization iteratively, incorporating diffusion model priors to boost detail and quality in novel view synthesis (see Section <ref>).
4) Scene Enhancement: In addition to image diffusion priors, we apply video diffusion priors to further enhance the rendered images from 3DGS, enhancing the realism of visual effects (refer to Section <ref>).
§.§ Background-Aware Depth-guided Initialization
Traditionally, 3DGS relies on point clouds and camera poses calculated through Structure from Motion (SfM) methods for initialization. However, SfM methods often encounter challenges in sparse view settings. To address this issue, we propose leveraging stereo priors <cit.> as a solution. DUSt3R, an end-to-end dense stereo model, can take sparse views as input and produce dense point clouds along with camera poses. Nevertheless, the point clouds generated by DUSt3R are prone to issues such as floating objects, artifacts, and distortion, particularly in the background of the 3D scene.
To overcome these challenges, we introduce the Background-Aware Depth-guided Initialization module to generate dense and precise point clouds. This module incorporates four key techniques:
1) Camera Pose Recovery: Initially, sparse images are used to generate point clouds for each image using DUSt3R. Subsequently, the camera poses and point clouds are aligned into a globally consistent coordinate system.
2) Depth-guided Optimization: Depth-guided optimization is then employed to refine the aligned point cloud. In this step, a monocular estimation model is used as guidance for the optimization process.
3) Point Cloud Cleaning: Two strategies are implemented for point cloud cleaning: geometry-based cleaning and confidence-based cleaning. During optimization, after every ξ iterations, a geometry-based cleaning step is executed to remove unreliable floaters. Following the optimization process, confidence-based cleaning is applied to distinguish between foreground and background, utilizing specific filtering techniques to preserve the final output point cloud.
Next, we will provide detailed insights into the implementation of each component within this module.
Camera pose recovery
We first use minimum spanning tree algorithm <cit.> to align all camera poses and point clouds into a unified coordinate system. An optimization scheme is then utilized to enhance the quality of the aligned point clouds. Initially, following the approach of DUSt3R, a point cloud projection loss ℒ_pc is minimized. Consider the image pair {I_k, I_l} where P_k and P_l denote the point map in the k_th and l_th camera's coordinate system. The objective is to evaluate the consistency of 3D points in the k_th coordinate system with those in the l_th coordinate system. The projection loss is computed by projecting the point map P_l to the k_th coordinate system using a transformation matrix T_k,l that converts from the l_th coordinate system to the k_th coordinate system. The loss parameters include the transformation matrix T_k,l, a scaling factor σ_k,l, and P_k. This process is repeated for the remaining image pairs.
ℒ_pc = ∑_k∈ K∑_l∈ K∖{k}η_k·η_l ‖P_k - σ_k,lT_k,lP_l‖
The purpose of this loss function is to systematically pair each input image like I_k with all other images such as I_l. For the image pair {I_k, I_l}, the loss function measures the disparity between the point map P_k in the k_th coordinate system and the transformed point map σ_k,lT_k,lP_l. These comparisons are weighted by their respective confidence maps η_k and η_l.
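A simplified sketch of this pairwise alignment objective is given below. It is illustrative only: it assumes a single point map and confidence map per image (DUSt3R produces them per image pair), and all tensor and variable names are our own choices:
```python
import torch

def global_alignment_loss(pointmaps, confidences, T, sigma):
    """Confidence-weighted projection loss over all ordered image pairs (k, l).

    pointmaps[k]  : (N, 3) point map P_k in camera k's coordinate system
    confidences[k]: (N,)   per-pixel confidence eta_k
    T[k][l]       : (4, 4) rigid transform T_{k,l} mapping frame l into frame k
    sigma[k][l]   : scalar scale factor sigma_{k,l}
    """
    K = len(pointmaps)
    loss = pointmaps[0].new_zeros(())
    for k in range(K):
        for l in range(K):
            if l == k:
                continue
            P_l = pointmaps[l]
            ones = torch.ones_like(P_l[:, :1])
            P_l_h = torch.cat([P_l, ones], dim=1)         # homogeneous coordinates
            P_l_in_k = (T[k][l] @ P_l_h.T).T[:, :3]       # express P_l in frame k
            resid = (pointmaps[k] - sigma[k][l] * P_l_in_k).norm(dim=1)
            loss = loss + (confidences[k] * confidences[l] * resid).sum()
    return loss
```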
Depth-guided Optimization
The optimization based solely on the projection loss may not be sufficient for reconstructing large-scale scenes, as it could lead to issues like floaters and scene distortion that can impact subsequent reconstructions. To tackle scene distortion, we integrate a robust model prior to guide the optimization network. Marigold, a diffusion-based monocular depth estimation model known for its top-tier performance in this domain, is employed to provide insights into the scene's depth information.
The monocular depth estimation model significantly enhances depth perception across different scales. Its guidance is pivotal in mitigating distortion issues and improving overall scene depth perception.
Within the optimization network, we merge DUSt3R outputs with depth guidance by incorporating a point cloud projection loss, a multi-scale depth loss and a depth smoothness loss.
ℒ_opt = ℒ_pc + α_dℒ_D + α_sℒ_smooth
where ℒ_opt refers to the total optimization loss, and α_d and α_s are the weights of the multi-scale depth loss ℒ_D and the depth smoothness loss ℒ_smooth. The smoothness loss encourages depth-map smoothness by penalizing depth-gradient changes, weighted by the image gradients; details of the multi-scale depth loss term ℒ_D are discussed later (see Sec <ref>).
Point cloud Cleaning
In order to eliminate floaters and artifacts, we implement two strategies for cleaning the point cloud: geometry-based cleaning and confidence-based cleaning.
In geometry-based cleaning, we adopt an iterative approach to remove unreliable points during the depth optimization process. For a set of K input images, as illustrated in Figure <ref>, the method involves systematically pairing image I_k with all other K-1 images within a single iteration. For the image pair I_k, I_l, a pixel q in I_l corresponds to a 3D point Q in the scene, represented as (X_q, Y_q, Z_q) in the l_th coordinate system. This point can be translated into the k_th coordinate system using the transformation matrix T_k,l. The projected point intersects with I_k at pixel r, and its depth in the k_th coordinate system is denoted as Z_q'. Conversely, pixel r also corresponds to another 3D point R, denoted as (X_r, Y_r, Z_r) in the k_th coordinate system.
To tackle the floating issue, if we detect that the difference between the projected depth Z_q' and the depth Z_r of point R exceeds a threshold τ_1, and the confidence η_q of point Q exceeds the confidence η_r of point R by more than τ_2, we label point R as unreliable. As a result, we remove this point from the set of 3D points.
|Z_r - Z_q'| > τ_1 and η_q - η_r > τ_2 ⇒ Exclude point R
The cleaning operation of the point clouds is executed once every ξ iterations, with τ_1 and τ_2 serving as hyperparameters.
In addition to the geometry-based cleaning process, we also implement a confidence-based cleaning step post-optimization. Each point within the point clouds is assigned a confidence value. The original DUSt3R method applies a basic confidence threshold to filter out points with confidence below a certain level. However, due to the distance bias, this approach may inadvertently exclude many background elements. To tackle this challenge, as depicted in Figure <ref>, we differentiate between foreground and background regions by arranging the depths of all points and selecting the median depth as the separation boundary. Points in the foreground, typically observed in multiple-view images, tend to have higher confidence levels. Consequently, we establish a high-confidence threshold for foreground objects. On the contrary, the background area, often captured from a distance and present in only a few images, tends to exhibit lower confidence levels. Hence, we adopt a more lenient strategy for this region, employing a lower confidence threshold for point cleaning.
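Both cleaning rules reduce to simple element-wise tests. The following sketch illustrates them with our own (hypothetical) array names; thr_fg and thr_bg stand for the strict foreground and lenient background confidence cut-offs described above:
```python
import numpy as np

def geometry_clean_mask(z_r, z_q_proj, conf_r, conf_q, tau1, tau2):
    """Boolean mask of view-k points to drop: a higher-confidence point from a
    paired view projects to the same pixel at an inconsistent depth."""
    return (np.abs(z_r - z_q_proj) > tau1) & ((conf_q - conf_r) > tau2)

def confidence_clean_mask(depths, confs, thr_fg, thr_bg):
    """Boolean mask of points to keep: a strict confidence threshold for the
    foreground (closer than the median depth) and a looser one for the background."""
    boundary = np.median(depths)
    return np.where(depths <= boundary, confs >= thr_fg, confs >= thr_bg)
```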
§.§ Multi-modal Regularized Gaussian Reconstruction
Dense point clouds and camera poses are acquired through Background-Aware Depth-guided Initialization. These variables serve as the initialization of Gaussian kernels. Vanilla 3DGS methods utilize photo-metric loss functions such as ℒ_1 and ℒ_SSIM to optimize 3DGS kernels and enable them to capture the underlying scene geometry.
However, challenges arise in scenarios with extremely sparse input images. Due to the inherent biases of the Gaussian representation, the Gaussian kernels are prone to overfitting on the training views and cause degradation on unseen perspectives. To mitigate this issue, we enhance the Gaussian optimization process by integrating photo-metric loss, multi-scale depth loss, cosine-constrained normal loss, and norm-weighted virtual-view loss.
Photo-metric Loss In line with vanilla 3DGS, we initially compute the photo-metric loss between the input RGB images and Gaussian-rendered images. The photo-metric loss function combines ℒ_1 with an SSIM term ℒ_SSIM.
ℒ_pho = (1-λ) ℒ_1+λℒ_SSIM
where λ represents a hyperparameter, and ℒ_pho denotes the photo-metric loss.
Multi-scale Depth Regularization
To address above challenges, depth information can be incorporated into the Gaussian scene to provide regularization during training. Similar to the Initialization Module, we initially employ the monocular estimation model Marigold to predict depth images {D̂_k}_k=0^K-1 from sparse input images. Since monocular depth estimation models typically offer relative depth predictions without realistic scene scale information, we utilize the Pearson Correlation Coefficient (PCC) <cit.> as a metric to gauge the similarity between depth maps.
The Pearson Correlation Coefficient is a fundamental statistical correlation coefficient that quantifies the linear correlation between two data sets. Essentially, it assesses the resemblance between two distinct distributions X and Y.
PCC(X,Y) = (E[XY] - E[X]E[Y])/(√(E[X^2] - E[X]^2)·√(E[Y^2] - E[Y]^2))
where E represents the mathematical expectation.
Inspired by previous works <cit.> <cit.>, to enhance the capture of local structures, we go beyond assessing the correlation between depth maps at the source scales. We divide depth images into small patches and compare the correlation among these depth patches. The Pearson correlation coefficient has a strong connection with normalized cross-correlation, implying that the utilization of this loss function promotes high cross-correlation values for patches at corresponding locations in both depth maps, regardless of depth value variations.
During each iteration, we randomly select F non-overlapping patches to evaluate the depth correlation loss, defined as:
ℒ_depth = 1/F∑^F-1_f=0( 1 - PCC(Γ̅_f, Γ̂_f) )
where Γ̅_f denotes the f_th patch of Gaussian-rendered depth maps and Γ̂_f denotes the f_th patch of depth maps predicted by monocular estimation model. Intuitively, this loss works to align depth maps of the Gaussian representation with the depth map of monocular prediction, mitigating issues related to inconsistent scale and shift.
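The patch-wise loss can be sketched as follows (for brevity the patches are drawn at random rather than enforced to be non-overlapping as in the description above; all names are illustrative):
```python
import torch

def pcc(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pearson correlation coefficient between two flattened patches."""
    x, y = x.flatten(), y.flatten()
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).mean() / (xc.std(unbiased=False) * yc.std(unbiased=False) + eps)

def depth_correlation_loss(d_render, d_mono, num_patches=16, patch=64):
    """Patch-wise (1 - PCC) between rendered and monocular depth maps."""
    H, W = d_render.shape
    loss = d_render.new_zeros(())
    for _ in range(num_patches):
        i = int(torch.randint(0, H - patch + 1, (1,)))
        j = int(torch.randint(0, W - patch + 1, (1,)))
        loss = loss + (1.0 - pcc(d_render[i:i+patch, j:j+patch],
                                 d_mono[i:i+patch, j:j+patch]))
    return loss / num_patches
```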
Cosine-constrained Normal Regularization While depth provides distance information within the scene, normals are also essential for shaping surfaces and ensuring smoothness. Therefore, we introduce a normal-prior regularization to constrain the training process.
We utilize cosine similarity to quantify the variance between the predicted normal maps got from normal prior <cit.> and the normal maps rendered using Gaussian representations.
ℒ_normal = 1/K∑^K-1_k=0( 1 - COS(N̅_k, N̂_k) )
where N̅_k ∈ℝ^H× W× 3 represents the Gaussian-rendered normal maps, and N̂_k signifies the normal maps predicted by the monocular estimation model. The function COS() denotes the cosine similarity function.
Weighted Virtual-view Regularization
In cases where the training views are sparse, the Gaussian scene may deteriorate when presented with new views due to the lack of supervision from these training views. Hence, we introduce a virtual-view regularization strategy to preserve the original point cloud information throughout the optimization process.
As illustrated in Figure <ref>, we randomly sample K_v virtual views in 3D space. For each virtual camera, we project the point clouds onto the 2D plane of the view. A weighted blending algorithm is employed to render the 3D points into RGB point-rendered images. These point-rendered images serve as guidance for the Gaussian optimization process.
When creating a point-rendered RGB image from a virtual view, the color of each pixel i is determined by the U nearest projected 3D points. As shown in Figure <ref>, these points are selected based on their proximity to pixel i within a radius π, where π is defined as one-third of the pixel width. Subsequently, these selected points are arranged in order of their distances {d_u}_u=0^U-1 from the viewpoint. Weights are then allocated to these ordered points according to their distances, with closer 3D points to the virtual viewpoint receiving higher weights.
c(i) = ∑_u=0^U-1 c_u w_u with w_u = e^-d_u/∑_u'=0^U-1 e^-d_u' if valid points exist, and c(i) = c_bg otherwise,
Here, c(i) represents the color of pixel i after point rasterization. c_u denotes the color of the u_th 3D point, and w_u is its corresponding weight. In cases where a pixel has no valid point projection (i.e., U=0), we assign the pixel the predefined background color c_bg, which, in this instance, is white.
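The per-pixel blending rule thus amounts to a softmax over negative distances, as the following sketch (with illustrative names) shows:
```python
import numpy as np

def blend_pixel(colors, dists, c_bg):
    """Color of one pixel from its U nearest projected 3D points; closer points
    receive exponentially larger weights, and empty pixels fall back to the
    background color."""
    if len(colors) == 0:
        return np.asarray(c_bg, dtype=float)
    w = np.exp(-np.asarray(dists, dtype=float))
    w /= w.sum()
    return (w[:, None] * np.asarray(colors, dtype=float)).sum(axis=0)
```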
By employing the norm-weighted blending algorithm, we obtain K_v point-rendered RGB images denoted as {I_k^pr}_k=0^K_v-1. These images are subsequently utilized to regulate Gaussian kernels, thereby imposing constraints on optimization and preventing overfitting. The virtual-view loss function at this stage is presented below.
ℒ_vir =(1-λ) ℒ_1(I̅_k,I_k^pr) + λℒ_SSIM(I̅_k,I_k^pr), k ∈ K_v
where λ, ℒ_1, ℒ_SSIM are the same as original 3d Gaussian splatting and I̅_k is the Gaussian-rendered RGB image from k_th view.
Multi-modal Joint Optimization Throughout the Multi-modal Regularized Gaussian Reconstruction phase, in addition to the photo-metric loss, Multi-scale Depth Regularization, Cosine-constrained Normal Regularization, and Norm-weighted Virtual-view Regularization are incorporated to steer the training process. These methodologies are pivotal in alleviating overfitting and upholding the output quality.
ℒ_multi = ℒ_pho+β_virℒ_vir + β_depℒ_depth + β_norℒ_normal
where ℒ_multi represents the loss function utilized in the Multi-modal Regularized Gaussian Reconstruction. The weights β_vir, β_dep, β_nor serve as hyperparameters to regulate their impact, with further elaboration provided in the Experiment Section.
§.§ Iterative Gaussian Refinement
During this phase, we implement an iterative optimization approach to progressively enhance scene details. Initially, we uniformly enhance the Gaussian-rendered images from virtual viewpoints using a Gaussian repair model. This model refines blurry Gaussian-rendered images into sharp, realistic representations. Following this enhancement, these refined images act as supplementary guidance, facilitating the optimization of Gaussian kernels in conjunction with depth and normal regularization terms. After ζ optimization steps, we re-render the Gaussian images and subject them to the repair model once more, replacing the previously refined images for another iteration of supervision.
§.§.§ Iterative Gaussian Optimization
Initially, we generate K_v images using Gaussian kernels from virtual viewpoints. Subsequently, the Gaussian Repair Model is utilized to uniformly enhance these images, resulting in a set of repaired images denoted as {I_k^repair}_k=0^K_v-1.
To maintain scene coherence and reduce potential conflicts, we set the denoise strength to a low value and gradually reintroduce limited details to the Gaussian-rendered images during each repair process. These repaired virtual-view images, along with monocular depth and normal maps from training views as outlined in Section <ref>, are then employed to regulate the Gaussian optimization. The overall optimization loss in the Gaussian refinement stage, denoted as ℒ_refine, is determined by:
ℒ_refine = ℒ_pho+β_repℒ_rep + β_depℒ_depth + β_norℒ_normal
Here, ℒ_depth and ℒ_normal correspond to the Multi-modal Regularized Gaussian Reconstruction. β_rep signifies the weight of the repair loss. The repair loss ℒ_rep is defined as follows:
ℒ_rep =(1-λ) ℒ_1(I̅_k,I_k^repair) + λℒ_SSIM(I̅_k, I_k^repair), k ∈ K_v
In this formulation, we leverage the photo-metric loss between the repaired images {I_k^repair}_k=0^K_v-1and the Gaussian-rendered images {I̅_k}_k=0^K_v-1 in one loop, utilizing the repaired images as a guiding reference. The parameters λ, ℒ_1, and ℒ_SSIM remain consistent with the original 3D Gaussian splatting methodology. The above operations will be repeated.
Through this iterative optimization strategy, the newly generated images gradually enhance in sharpness without being affected by blurring caused by view disparities. The optimization process persists until it is ascertained that the diffusion process no longer produces satisfactory outcomes, as evidenced by deviations from the initial scene or inconsistencies across various viewpoints.
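The overall structure of this stage can be summarized by the following sketch, in which render, repair, refine_loss, and opt_step are placeholders for the Gaussian renderer, the repair model, the combined loss ℒ_refine, and one optimizer update of the Gaussian parameters, respectively (the default cycle counts are illustrative):
```python
def iterative_refinement(render, repair, refine_loss, opt_step,
                         virtual_cams, num_cycles=3, steps_per_cycle=4000):
    """Outer loop of the iterative Gaussian refinement stage (sketch)."""
    for _ in range(num_cycles):
        # Re-render every virtual view and enhance it with the repair model.
        repaired = [repair(render(cam)) for cam in virtual_cams]
        # Optimize the Gaussians against the freshly repaired images.
        for _ in range(steps_per_cycle):
            opt_step(refine_loss(repaired))
```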
§.§.§ Gaussian Repair Model
In this section, we present the Gaussian repair model utilized earlier. Its primary objective is to enhance blurry Gaussian-rendered images into sharp, realistic images while preserving the style and content of the original image.
Model Architecture The architecture of the Gaussian Repair Model is illustrated in Figure <ref>. It takes Gaussian-rendered images and real-world input images as inputs. In Figure <ref>(b), the Gaussian-rendered image I̅ undergoes image encoding to extract latent image features. The real-world input images are passed through a GPT API to obtain a description prompt σ, which is then encoded into text latent features. These image and text latent features act as conditions for a ControlNet <cit.> to predict noise ϵ_θ and progressively remove noise from the Gaussian-rendered image. The model is a ControlNet fine-tuned by injecting LoRA weights into its layers, and it produces the repaired Gaussian-rendered image. Figure <ref>(c) provides insight into the Lora-ControlNet, where LoRA <cit.> weights are integrated into each transformer layer of the ControlNet's UNet. We keep the original parameters of the transformer blocks fixed and train only the low-rank compositions A, B, where A ∈ℝ^d × r, B ∈ℝ^r × k, with a rank r ≪min(d, k). Concerning the text encoder, as depicted in Figure <ref>(d), LoRA weights are integrated into each self-attention layer of the encoder. The input of the Lora-Text Encoder is the scene prompt, and the output is the text embedding.
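A minimal sketch of such a low-rank adapter applied to a single linear layer is given below (our own simplified illustration; the actual integration into ControlNet's transformer blocks and the text encoder follows the same pattern):
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: y = base(x) + x A B,
    where A is d x r, B is r x k, and r << min(d, k)."""
    def __init__(self, base: nn.Linear, rank: int = 64, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # keep pretrained weights fixed
        d, k = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(d, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, k))  # zero init: no change at start
        self.scale = scale

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A @ self.B)
```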
Training process In this section, we will explore the training process of the Gaussian repair model. Initially, for data preparation, we collect image pairs from Section <ref>, where input images within each scene act as reference images. For each training perspective, we randomly select ω Gaussian-rendered images and pair them with input images to create training pairs.
Subsequently, these image pairs are utilized to train our Gaussian Repair Model. As shown in Figure <ref>(a), the input image I undergoes a forward diffusion process. Specifically, the image is input into an image encoder to extract the latent representation z_1. This latent representation then undergoes a diffusion process where noise ϵ is gradually introduced over T steps. After obtaining latent z_T, a reverse diffusion process is initiated, as illustrated in Figure <ref>(b), where a Lora-UNet and a Lora-ControlNet are employed to predict noise ϵ_θ at each step. This predicted noise, combined with the noise introduced during the diffusion process, contributes to calculating the loss function, aiding in the training process. The loss function for any input image can be defined as follows.
ℒ_Control = E_I,t,I̅,ϵ∼𝒩(0, 1)[‖ϵ_θ(I, t, I̅, σ)-ϵ‖_2^2]
Here, σ refers to the text prompt of the reconstructed scene. I represents the actual input image, and I̅ is the Gaussian-rendered image obtained from Section <ref>. E_I,t,I̅,ϵ∈𝒩(0, 1) denotes the expectation over the input image I, the time step t, the text prompt σ, the condition image I̅, and the noise ϵ drawn from a normal distribution with mean 0 and standard deviation 1. ϵ_θ indicates the predicted noise.
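One training step can be sketched as follows, where encode, q_sample, and predict_noise are placeholders for the VAE encoder, the forward noising process, and the LoRA-augmented conditional noise predictor (the function and argument names are ours):
```python
import torch
import torch.nn.functional as F

def repair_training_loss(encode, q_sample, predict_noise, I, I_render, prompt, T=1000):
    """Sketch of one optimization step for the Gaussian Repair Model."""
    z0 = encode(I)                                            # latent of the real image
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    zt, eps = q_sample(z0, t)                                 # forward-diffused latent
    eps_pred = predict_noise(zt, t, I_render, prompt)         # conditioned prediction
    return F.mse_loss(eps_pred, eps)
```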
§.§ Scene Enhancement
Given the sparse input images and the restricted training perspectives, it is expected that rendered images from adjacent new viewpoints may display discrepancies. In order to ensure high-quality and consistent rendering along a specified camera path, we propose a View Enhancement module, which utilizes video diffusion priors to improve the coherence of rendered images.
This module concentrates on enhancing the visual consistency of rendered images without delving into Gaussian kernel refinement. Initially, multiple images are rendered along a predetermined camera trajectory and grouped for processing. Subsequently, a video diffusion UNet is employed to denoise these images to generate enhanced images.
In the video diffusion model, DDIM inversion <cit.> is utilized to map Gaussian-rendered images back to the latent space; the formulation can be expressed as:
z_t+1=√(α_t+1/α_t)z_t+(√(1/α_t+1-1)-√(1/α_t-1))ϵ_θ(z_t,t,σ),
where t∈[1,T] is the time step and α_t denotes a decreasing sequence that guides the diffusion process. σ serves as an intermediate representation that encapsulates the textual condition.
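An illustrative implementation of this inversion loop is given below, where alpha_bar is a tensor holding the (cumulative) coefficients α_t and cond the encoded text condition; both names are our own:
```python
import torch

@torch.no_grad()
def ddim_invert(eps_model, z0, alpha_bar, cond):
    """Map a rendered image's latent back toward noise by running the DDIM
    update forward in time (eps_model(z, t, cond) is the noise predictor)."""
    z = z0
    for t in range(len(alpha_bar) - 1):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        eps = eps_model(z, t, cond)
        z = (a_next / a_t).sqrt() * z + (
            (1.0 / a_next - 1.0).sqrt() - (1.0 / a_t - 1.0).sqrt()) * eps
    return z
```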
The rationale behind mapping Gaussian-rendered images to a latent space is to leverage the continuous nature of the latent space, preserving relationships between different views. By denoising images collectively in the latent space, the aim is to enhance visual quality without sacrificing spatial consistency.
In the scene enhancement model, we utilized Zeroscope-XL as the video-diffusion prior and set the denoising strength to 0.1.
§ EXPERIMENTS
§.§ Experimental Setup
Dataset Our experiments were conducted using three datasets: the Tanks and Temples Dataset <cit.>, the MipNeRF360 Dataset <cit.>, and the LLFF Dataset <cit.>. The Tanks and Temples and MipNeRF360 datasets feature 360-degree real-world scenes, while the LLFF dataset comprises feed-forward scenes. From the Tanks and Temples Dataset, we uniformly selected 200 images covering scenes like Family, Horse, Ignatius, and Trunk to represent the entire 360-degree environments. In the MipNeRF360 dataset, we chose the initial 48 frames capturing various elevations across a full 360-degree rotation, including scenes such as Garden, Bicycle, Bonsai, and Stump. Additionally, scenes like flowers, orchids, and ferns from the LLFF Dataset have been incorporated as well.
Train/Test Datasets Split For 360-degree scenes, we varied the view number K from 4 to 16 to assess all the algorithms under consideration. In a training set comprising K images, the remaining images were allocated to the test set. Concerning the feed-forward dataset, we adhered to the setup outlined in previous research <cit.>, employing 3 views for training and the remainder for testing.
Metrics In the assessment of novel view synthesis, we present Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM)<cit.>, and Learned Perceptual Image Patch Similarity (LPIPS)<cit.> scores as quantitative measures to evaluate the reconstruction performance.
Baselines We compare our method against 7 baseline approaches. Our evaluation included sparse-view reconstruction methods such as DNGaussian <cit.>, FreeNeRF <cit.>, SparseNeRF <cit.>, PixelNeRF <cit.>, MVSNeRF <cit.>, DietNeRF <cit.>, and RegNerf <cit.>. Additionally, we compared our method with the vanilla 3DGS approach. Notably, all these baseline methods employed Colmap for the pre-computation of camera parameters.
§.§ Implementation Details
We implemented our entire framework in PyTorch 2.0.1 and conducted all experiments on an A6000 GPU. In the Background-Aware Depth-guided Initialization stage,
loss weights α_g, α_s and α_l were set to 0.01, 0.01 and 0.1. Geometry-based cleaning was applied every 50 iterations. For Confidence-based cleaning, τ_1 and τ_2 were set to 0.1 and 0, respectively. Moving on to the Multi-modal Regularized Gaussian Reconstruction stage, we trained the Gaussian model for 6,000 iterations with specified loss function weights: β_vir = 0.5, β_dep = 0.3, and β_nor = 0.1 across all experiments. We use ControlNet <cit.> as our foundational model.
The low-rank r was set to 64, and the model was trained for 2000 steps. In the Iterative Gaussian Refinement Module, the denoising strength was set at 0.3 for each repair iteration, and the repair process was repeated every 4,000 iterations. The value of β_rep was adjusted based on the distance between the virtual view and its nearest training view, within the range of (0, 1). This iterative cycle was set to 3 in our experiments.
§.§ Quantative and Qualitative Results
Tanks and Temples & MipNerf360 The quantitative results, presented in Table <ref> and Table <ref>, demonstrate that our method outperforms others in terms of PSNR, SSIM, and LPIPS metrics, showcasing superior performance and finer details. Visual results are also showcased in Figure <ref> and Figure <ref>. With 16 input images, LM-Gaussian produces high-quality reconstruction results, preserving most structures and details, whereas DNGaussian and SparseNerf yield cluttered outcomes. Despite DNGaussian and SparseNerf utilizing frequency and depth regularization to prevent overfitting, their performance falls short for two primary reasons. Firstly, their dependence on Colmap for initializing point clouds and camera poses proves challenging in sparse-view scenarios where traditional SfM methods struggle to provide reliable outputs. Secondly, these methods neglect to introduce significant model priors to restore intricate details. Given the sparse nature of the input data, critical details may be lost without additional information.
LLFF Besides challenging 360-degree large-scale scenes, we also conducted experiments on feed-forward scenes like the LLFF Dataset to ensure the thoroughness of our study and validate the robustness of our method. Following previous sparse-view reconstruction works, we take 3 images as input, with quantitative results presented in Table <ref> and qualitative results shown in Figure <ref>. It is observed that sparse-view methods like DNGaussian also demonstrate commendable visual outcomes and relatively high PSNR and SSIM values. This can be attributed to the nature of the LLFF Dataset, which does not encompass a 360-degree scene but rather involves movement within a confined area. This results in higher image overlap and fewer unobserved areas, making it easier to reconstruct the scene. However, despite the impressive performance of these methods, our approach still exhibits certain advantages. In addition to the numerical enhancements demonstrated in TABLE <ref>, taking the flower scene as an example, our method excels in visual results by restoring finer details such as flower textures. Moreover, our method maintains superior performance in regions with less overlap, as exemplified by the leaves in the surroundings.
§.§ Ablation Study
Colmap Initialization Initially, we investigated the conventional Colmap method within our sparse-view settings. It fails to reconstruct point clouds with 8 input images in 360-degree scenes. We progressively increased the number of input images until Colmap could eventually generate sparse point clouds. With 16 input images, as illustrated in Figure <ref>, Colmap's resulting point clouds were significantly sparse, comprising only 1342 points throughout the scene. In contrast, our Background-Aware Depth-guided Initialization method excels in generating high-quality dense point clouds.
Effect of Background-Aware Depth-guided Initialization
Through visual demonstrations, we highlight DUSt3R's limitations in reconstructing high-quality background scenes, plagued by artifacts and distortions. As depicted in Figure <ref>(a)(b), DUSt3R either lacks background details or presents poor background quality, resulting in subpar reconstructions.
In contrast, as illustrated in Figure <ref> (c), our module adeptly reconstructs background scenes with minimal distortion while preserving high-quality foreground scenes, exhibiting fewer artifacts and floaters compared to Figure (b)'s point clouds. Additionally, we present visualizations of depth maps before and after depth optimization. In Figure <ref>, the original DUSt3R's depth maps reveal background blurriness, blending elements like the sky and street lamps. With guidance from Marigold, our Background-Aware Depth-guided Initialization notably enhances the reconstruction of background scenes compared to the original DUSt3R output.
Moreover, Table <ref> provides quantitative comparisons among our method, Colmap, and DUSt3R. While Colmap yields less favorable results, DUSt3R shows significant improvement. Despite DUSt3R's performance, our initialization module achieves state-of-the-art outcomes, leading to improvements in PSNR and SSIM metrics.
Effect of Multi-modal regularized Gaussian Reconstruction
We conducted ablation studies on the multi-modal regularization, incorporating depth, normal, and virtual-view regularization. The quantitative outcomes are detailed in Table <ref>. Following the integration of these regularization techniques, the novel view synthesis demonstrates improved results, indicated by higher PSNR and SSIM values, as well as finer details with lower LPIPS scores. With the implementation of multi-modal regularization, as depicted in Figure <ref>, the Gaussian-rendered images showcase smoother surfaces and reduced artifacts within the scene. Conversely, images lacking regularization exhibit black holes and sharp angles, diminishing the overall quality.
Effect of Iterative Gaussian Refinement We further explore the usefulness of the iterative Gaussian refinement module. As illustrated in Figure <ref>, we present a comparison between the images before and after Gaussian Repair. The noticeable outcomes highlight that the repaired images exhibit enhanced details and a reduction in artifacts, emphasizing the effectiveness of the Gaussian Repair Model. Quantitative results before and after the Iterative Gaussian Refinement are also detailed in Table <ref>. While a marginal improvement in PSNR is noted, more intricate metrics such as LPIPS and SSIM demonstrate substantial enhancements. These findings align seamlessly with our primary objective of restoring intricate details within the images.
The number of input images In Figure <ref>, we assess our method using different numbers of sparse input images. We compare LM-Gaussian with the original 3DGS across view splits of growing sizes K ∈{4, 8, 16, 24, 32}. Notably, in sparse-view scenarios, our method consistently enhances the performance of 3DGS. Furthermore, as the number of views increases, we observe that our proposed techniques, which leverage extensive model priors for regularization and repair, diminish in significance. While LM-Gaussian's rate of quality enhancement slows with increasing views, 3DGS showcases a steady linear progression.
§ CONCLUSIONS
We introduce LM-Gaussian, a sparse-view 3D reconstruction method that harnesses priors from large vision models. Our method includes a robust initialization module that utilizes stereo priors to aid in recovering camera poses and reliable Gaussian spheres. Multi-modal regularizations leverage monocular estimation priors to prevent network overfitting. Additionally, we employ iterative diffusion refinement to incorporate extra image diffusion priors into Gaussian optimization, enhancing scene details. Furthermore, we utilize video diffusion priors to further improve the rendered images for realistic visual effects. Our approach significantly reduces the data acquisition requirements typically associated with traditional 3DGS methods and can achieve high-quality results even in 360-degree scenes. LM-Gaussian is currently built on standard 3DGS, which only works well on static scenes; we would like to incorporate dynamic 3DGS techniques to enable dynamic modeling in the future.
plain
|
http://arxiv.org/abs/2409.03106v1 | 20240904220921 | Spatial Diffusion for Cell Layout Generation | [
"Chen Li",
"Xiaoling Hu",
"Shahira Abousamra",
"Meilong Xu",
"Chao Chen"
] | cs.CV | [
"cs.CV"
] |
C. Li et al.
Stony Brook University, Stony Brook, NY, USA Harvard Medical School, Boston, MA, USA
Spatial Diffusion for Cell Layout Generation
Chen Li1 (Email: [email protected]), Xiaoling Hu2, Shahira Abousamra1, Meilong Xu1, Chao Chen1
September 9, 2024
====================================================================================================
§ ABSTRACT
Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performances in different tasks. We focus on generative models for cell detection instead, i.e., locating and classifying cells in given pathology images. One important information that has been largely overlooked is the spatial patterns of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, a novel diffusion model guided by spatial features and generates realistic cell layouts has been proposed. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at <https://github.com/superlc1995/Diffusion-cell>.
§ INTRODUCTION
Cell detection focuses on identifying and locating multiple types of cells in images or videos. Although deep models have achieved great success in cell detection tasks <cit.>, their deployment in the real world is largely constrained due to the demand for large amounts of labeled training data.
Collecting and annotating cell detection datasets is high-cost.
Despite the rich literature on the generative model, its application in cell detection is still limited. Although GAN or diffusion models can generate high-quality images with a single or a small number of objects, these models cannot generate images with hundreds or thousands of cells. The primary issue is the lack of an explicit modeling of cell spatial distributions. Numerous factors, including cell-cell interactions, morphogenesis, and cellular functions, make the cells follow a specific spatial pattern <cit.>.
If a generative model cannot learn these spatial distributions, it will not be able to generate realistic images for data augmentation purposes.
This issue, however, has been largely overlooked by existing generative models. Most existing methods either generate object layouts randomly <cit.> or use existing layouts directly <cit.>.
Another constraint for layout generation is the choice of backbone method. Generative adversarial networks (GANs) <cit.> cannot handle the large numbers of densely packed cells.
Recent years have witnessed the rise of the diffusion model <cit.>. Diffusion models outperform GANs in image generation <cit.>, and have shown great potential in many other tasks.
The diffusion model's ability to generate realistic images with fine details makes it ideal for layout generation.
We propose a spatial-distribution-guided diffusion model for cell layout generation.
Our diffusion model learns to generate binary masks, in which each cell is represented by a square marker.
To incorporate the spatial distribution into the diffusion model, we propose two major ideas. First, to handle the large variation of sparse/dense layouts, we propose to condition the diffusion model on the number of cells.
Second, we incorporate spatial distributions into the model. Due to the heterogeneity of spatial distribution, it is unrealistic to simply condition the generation on summary statistics of the spatial density. Instead, we design the model to jointly generate both the layout map and the spatial density maps simultaneously. This way, the model will gradually learn the density distribution through the diffusion process.
As another contribution, we explore and analyze different density distribution models for layout generation: (1) Kernel density estimation (KDE); (2) Gaussian mixture model (GMM); and (3) Gaussian Mixture Copula Model (GMCM) <cit.>. While KDE is more flexible, GMM is more constrained and is less likely to overfit. GMCM is a compromise between the two. Our experiments provide a comprehensive analysis of the strengths and weaknesses of the three density models.
Finally, we introduce metric spatial-FID, to evaluate the quality of generated object layouts. The metric maps the generated layouts into a spatial representation space and compares them with the spatial representation of real layouts. This provides an appropriate evaluation metric for cell layouts.
In experiments, we verify the effectiveness of our method through spatial-FID.
Furthermore, we show that the generated layouts can guide diffusion models to generate high quality pathology images. These synthetic images can be utilized as an augmentation to boost the performance of supervised cell detection methods.
§ RELATED WORK
Diffusion Model.
Diffusion models generate samples from random noise by learning to eliminate the noise in a multi-stage process <cit.>. Diffusion models are gaining attention because of their superior generation performance compared to GAN models <cit.>.
There are extensive applications of diffusion models: semantic segmentation <cit.>, point cloud generation <cit.>, and video generation <cit.>. Saharia et al. <cit.> propose the super-resolution method SR3 by conditioning the diffusion model on low-resolution images. Dhariwal et al. <cit.> boost the quality of conditional generation for the diffusion model by using classifier guidance.
Zhu et al. <cit.> apply the diffusion model to align the crowd distribution for various domains. Unlike previous works, we focus on the cell layout and pathology image generation for cell detection.
Layout Generation.
Layout generation is the task of synthesizing the arrangement of elements. Generative model based methods <cit.> are widely used in this field. LayoutGAN <cit.> and LayoutVAE <cit.> represent pioneering efforts in employing GAN and VAE to generate graphic and scene layouts. LayoutDM <cit.> explores the diffusion based model's capability in layout generation. The works above need access to geometric parameters (location and size) of objects in training layouts. However, only the center coordinates of cells are available for cell detection layouts. TMCCG <cit.> attempts to generate cell layouts with GAN. Considering the superiority of the diffusion model in various generation tasks, we construct a diffusion model based cell layout generation framework.
§ METHOD
It is essential to generate realistic pathology images for cell detection tasks, and one challenge is to properly model the spatial distribution of these cell layouts. This paper explores multiple ways to summarize the spatial context in cell layouts. By incorporating spatial information in the training process, diffusion models can learn the underlying distribution of cells and generate realistic layouts. We represent the cells in pathology images as a series of 3×3 square markers in layout maps. The markers are in the center of cells. Using narrow markers can prevent layout maps of highly dense cells from collapsing into a big mass. The generated layouts can guide the generation of realistic pathology images.
§.§ Layout and Image Generation
In this part, we start by explaining our approach to training spatial-aware diffusion models to generate realistic cell layouts. Next, we introduce the diffusion framework for generating synthetic pathology images under the guidance of generated layouts.
Diffusion Model.
Given data _0 ∼ q(_0), the forward diffusion process of diffusion models corrupts the original data _0 into _T by introducing Gaussian noise with a variance schedule β_1,…,β_T. This process is depicted as a Markov chain formulated as follows:
q(_1,…,_T|_0) := ∏^T_t=1q(_t|_t-1)
q(_t|_t-1) := 𝒩(_t; √(1-β_t)_t-1, β_t I )
If T is large enough, the corrupted data _T will nearly follow an isotropic normal distribution.
The diffusion models sample data from q(_0) by reverse of the diffusion process. The generative process q(_t-1|_t) is approximated by a neural network:
p_θ(_t-1 | _t) = 𝒩(_t-1; μ_θ(_t, t), σ^2_t I )
μ_θ(_t, t) = 1/√(α_t) (_t - (1-α_t)/√(1- α̅_t)ϵ_θ(_t, t) )
where α_t = 1-β_t, α̅_t = ∏_s=1^tα_s, and ϵ_θ(_t,t) is the noise inferred by a denoising neural network. Both ϵ_θ(_t,t) and σ_t are learned by optimizing a hybrid learning objective <cit.>.
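One reverse denoising step can be sketched as follows (illustrative PyTorch-style code with our own names; alpha, alpha_bar, and sigma hold the per-step coefficients, their cumulative products, and the step standard deviations):
```python
import torch

@torch.no_grad()
def p_sample_step(eps_model, xt, t, alpha, alpha_bar, sigma):
    """One learned denoising step x_t -> x_{t-1}."""
    eps = eps_model(xt, t)
    mean = (xt - (1.0 - alpha[t]) / (1.0 - alpha_bar[t]).sqrt() * eps) / alpha[t].sqrt()
    noise = torch.randn_like(xt) if t > 0 else torch.zeros_like(xt)
    return mean + sigma[t] * noise
```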
Due to the sparsity of cell layout maps, it is challenging for diffusion models to learn the underlying distribution of cells from the standard training process. To address this problem, we introduce spatial density maps. By modeling the spatial information in a dense way, the spatial density maps teach the model the spatial distribution behind cell layouts. The bond between layout maps and
density maps assures the generated cell layouts follow the
spatial distribution of the training set. To incorporate the spatial density map into the training process efficiently, we construct _0 by concatenating cell layout map _0^p and spatial density maps _0^d together: _0 = concat(_0^p, _0^d). We construct _0^p and _0^d by combining the cell layouts and spatial density maps of different cell types. We generate spatial density maps for each cell type independently. As shown in Fig. <ref>, the generated cell layouts are distributed in the corresponding spatial density generation pattern. More samples are available in the supplementary.
The spatial distribution of cells in pathology images is influenced by density. To make the generation model realize this correlation, we incorporate cell counting in the image patch as prior information for layout generation. However, due to the lack of samples for each quantity, using cell counting as a condition directly will lead to poor-quality layout generation. Here, we split the training image patches by the number of cells in each patch. We define C as the cell count distribution of the training set, and C_p is the p-th quantile of this distribution. We can divide the cell counting space into K ∈ℕ^* parts: [C_i/K, C_(i+1)/K], i∈{0… K-1}. During training, we treat the layouts belonging to the part i as a counting category e_i and condition the diffusion model on these counting categories. In the inference stage, we sample the same number of layouts from each counting category so that the generated layouts share the same counting distribution as the original data. An overview of layout generation and learning process is in Fig. <ref>.
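The quantile-based binning itself is straightforward; an illustrative implementation (with our own function name) is:
```python
import numpy as np

def counting_categories(cell_counts, K):
    """Map each training patch to one of K counting categories using the
    empirical quantiles of the per-patch cell counts."""
    edges = np.quantile(cell_counts, np.linspace(0.0, 1.0, K + 1))
    # Interior edges split the count axis into K bins; clip guards the maximum.
    return np.clip(np.digitize(cell_counts, edges[1:-1]), 0, K - 1)
```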
We generate pathology images I using a diffusion model conditioned on the generated layouts. The layout maps _g^p are fed into the denoising neural network as a condition to guide the cell distribution of the generated pathology image. Our layout conditional diffusion model p(I |_g^p) generates pathology images with the layout map _g^p through the neural network approximated denoising process p_w(I_t-1|I_t, x_g^p), where w is the parameter of neural network for pathology image generation. An illustration is shown in Fig. <ref>.
Given the generated high-quality pathology images and their cell layouts, we can augment the existing cell detection methods for better performance.
§.§ Spatial Feature Extractors
An excellent spatial feature extractor should be able to capture the cluster patterns in cell layouts, including the position, density, and area of cell clusters, which is a challenging task. Here, we present three different ways to extract spatial features: Kernel density estimation (KDE), Gaussian Mixture Model (GMM) <cit.>, and Gaussian Mixture Copula Model (GMCM) <cit.>.
Kernel density estimation is a non-parametric probability estimation framework, which is widely used for point pattern analysis. Intuitively, KDE treats training data points as density sources, and the combination of effects from training data points will create a smoothed estimate of probability distribution. For each estimating location, the closer data points have higher density contributions. The bandwidth is a crucial parameter for KDE. A lower bandwidth results in an estimation with more details but potentially introduces extra noise. On the other hand, a larger bandwidth produces a smoother result with the risk of oversmoothing. Here, to reach a good balance, we use Scott’s Rule to select bandwidth <cit.>.
Unlike KDE, the Gaussian mixture is a parametric model, modeling the distributions of object cluster patterns as Gaussian distributions. Therefore, GMM has better interpretability with limited parameters. Here, we introduce another parametric model – the Gaussian mixture copula model (GMCM). The copula function is used to capture the dependency between marginal densities. GMCM uses GMM to estimate the marginal distribution of point cloud and Gaussian mixture copula function, derived from a mixture of Gaussians, to capture the marginal dependency. Due to the non-identifiability nature of the components in GMCM and GMM, we adopt an EM algorithm <cit.> to optimize the parameters of GMCM and GMM. Bayesian Information Criterion (BIC) decides the optimal number of components for both GMM and GMCM.
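As an illustration, the KDE- and GMM-based density maps can be obtained with standard library routines; the code below is a simplified sketch using SciPy and scikit-learn with our own function names (GMCM requires a dedicated EM implementation and is omitted here):
```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

def kde_density_map(centers, shape):
    """Density map from (x, y) cell centers via KDE; gaussian_kde uses
    Scott's rule for the bandwidth by default."""
    kde = gaussian_kde(centers.T)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.vstack([xs.ravel(), ys.ravel()])
    return kde(grid).reshape(shape)

def gmm_density_map(centers, shape, max_components=10):
    """Density map via a GMM whose component count is selected by BIC."""
    fits = [GaussianMixture(n_components=k, random_state=0).fit(centers)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda m: m.bic(centers))
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    return np.exp(best.score_samples(grid)).reshape(shape)
```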
§ EXPERIMENT
We evaluate the effectiveness of cell layout and pathology image generation frameworks. We propose spatial-FID as a Fréchet Inception Distance (FID) modification to measure the quality of generated cell layouts. We also evaluate the effectiveness of our layout conditional pathology image generation framework on cell detection tasks.
Datasets. We evaluate our methods on cell detection dataset BRCA-M2C <cit.>. BRCA-M2C has 80, 10, and 30 pathology image patches for training, validating, and testing, respectively. This dataset has three cell types: tumor, lymphocyte, and stromal.
Implementation details.
We separate the training set into five counting categories. To ensure the image patches for training have abundant spatial structures to learn, we crop 464 × 464 patches from whole training images for training diffusion models on BRCA-M2C. The diffusion model of layout generation is a U-Net architecture <cit.> with multiple attention heads at resolution: 32× 32, 16 × 16, and 8× 8. For the image generation framework, we initialize the model with weights from pre-trained super-resolution diffusion model <cit.>.
For cell detection tasks, we use the state-of-the-art methods MCSpatNet <cit.> and U-Net <cit.> as the cell detection frameworks.
Evaluation metric.
To evaluate the quality of generated layout maps, we propose spatial-FID (↓)[The lower the better.]. The FID is used to measure the performance of the generative model on natural image datasets, e.g., ImageNet and CIFAR-10/100. It thus relies on the visual features from a pre-trained inception model. However, there is a significant difference between natural images and layout maps. Therefore, we train an autoencoder (Fully Convolutional Networks) on the layout maps from the training set to capture the spatial information in the layouts. We use the spatial feature from the middle of the encoder (f_s(·)) to replace the visual features.
Moreover, we introduce spatial-FID, defined as follows:
s^2(( μ_T, Σ_T), (μ_D, Σ_D)) = ‖μ_T - μ_D‖_2^2 + Tr(Σ_T + Σ_D - 2( Σ_TΣ_D)^1/2)
where μ_T (μ_D) and Σ_T (Σ_D) are the mean and covariance of the spatial features extracted by f_s(·) from training layouts (generated layouts), respectively.
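Given the encoder features f_s(·) of real and generated layouts, spatial-FID reduces to a standard Fréchet distance, e.g.:
```python
import numpy as np
from scipy.linalg import sqrtm

def spatial_fid(feat_train, feat_gen):
    """Frechet distance between the spatial-feature distributions of training
    and generated layouts (features from the layout autoencoder's encoder)."""
    mu_t, mu_g = feat_train.mean(axis=0), feat_gen.mean(axis=0)
    cov_t = np.cov(feat_train, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(cov_t @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real        # discard tiny imaginary parts from sqrtm
    diff = mu_t - mu_g
    return float(diff @ diff + np.trace(cov_t + cov_g - 2.0 * covmean))
```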
We use F-scores to evaluate the cell detection performance.
§.§ Results
In this part, we train the model on the training set, generate 200 layouts with corresponding pathology images for each counting category, and evaluate our method from layout generation and cell detection augmentation.
As shown in Tab. <ref>, the spatial density map generated by GMM achieves the best performance because the distribution of cells in BRCA-M2C conforms best to a mixture of Gaussians. The cell layout distribution of BRCA-M2C data has distinct subgroups or clusters, and GMM can effectively capture these patterns. The flexibility of KDE leads to noisy density estimates in our case, preventing our framework from producing better layout generations. GMCM models dependencies between variables using copulas, which can introduce additional complexity. The mismatch between GMCM's assumptions and the data distribution leads to inferior layout generations.
According to Tab. <ref>, the pathology image generations directed by GMM boost the cell detection performance best, reflecting that better cell layouts lead to higher-quality pathology image generations. As shown in Fig. <ref>, our pathology image generations are deceptively realistic.
This is mainly attributed to high-quality cell layout generations and the excellent performance of diffusion models in pathology image generation.
§.§ Ablation study
We conduct ablation studies to show each component's effectiveness and the effects of hyper-parameters on the generation framework. The supplementary includes more ablation studies.
Counting categories.
We conduct ablation studies by discarding this condition or setting different numbers of counting categories (10 or 20) to show the effectiveness of counting categories in cell layout generation. According to Tab. <ref>, counting categories are important for generating realistic cell layouts, and our method is robust to the change of counting categories.
§ CONCLUSION
In this paper, we propose a spatial-distribution-guided diffusion framework for generating high-quality cell layouts and pathology images.
To represent the spatial distribution of cell layouts properly, we explore three alternative tools for spatial feature extraction: KDE, GMM, and GMCM.
All of them can significantly boost the generative quality of cell layouts. Because the underlying cell layouts of BRCA-M2C conform best to a mixture of Gaussians, GMM achieves the best performance.
We treat generated cell layouts as conditions for pathology image generation.
These high-quality generated pathology images can improve the performance of SOTA cell detection methods.
§ SUPPLEMENTARY
§.§ Ablation study
§.§.§ Bandwidth.
A good bandwidth selection is critical for the quality of the spatial information represented by KDE. Here, we study the effect of the bandwidth using constant values. As shown in Tab. <ref>, Scott's rule is more effective for extracting spatial information from cell layouts.
§.§.§ Number of components
As we show in Tab. <ref>, selecting the component number of GMM by BIC is essential for generating high-quality spatial density maps by GMM.
§.§ Samples
Here, we show more generated layouts, corresponding generated pathology images, and density maps. As shown in Fig. <ref>, the density map can reflect the spatial distribution of cells well.
Acknowledgements: This research was partially supported by the National Science Foundation (NSF) grant CCF-2144901, the National Institute of General Medical Sciences (NIGMS) grant R01GM148970, and the Stony Brook Trustees Faculty Award.
Disclosure of Interests: The authors have no competing interests to declare that are relevant to the content of this article.
|
http://arxiv.org/abs/2409.02088v2 | 20240903174024 | SELCC: Coherent Caching over Compute-Limited Disaggregated Memory | [
"Ruihong Wang",
"Jianguo Wang",
"Walid G. Aref"
] | cs.DB | [
"cs.DB",
"cs.DC",
"cs.ET"
] |
Purdue University
[email protected]
Purdue University
[email protected]
Purdue University
[email protected]
§ ABSTRACT
Disaggregating memory from compute offers the opportunity to better utilize stranded memory in data centers. It is important to cache data in the compute nodes and maintain cache coherence across multiple compute nodes to save on round-trip communication cost between the disaggregated memory and the compute nodes. However, the limited computing power on the disaggregated memory servers makes it challenging to maintain cache coherence among multiple compute-side caches over disaggregated shared memory. This paper introduces SELCC; a Shared-Exclusive Latch Cache Coherence protocol that maintains cache coherence without imposing any computational burden on the remote memory side. SELCC builds on a one-sided shared-exclusive latch protocol by introducing lazy latch release and invalidation messages among the compute nodes so that it can guarantee both data access atomicity and cache coherence. SELCC minimizes communication round-trips by embedding the current cache copy holder IDs into RDMA latch words and prioritizes local concurrency control over global concurrency control. We instantiate the SELCC protocol onto compute-sided cache, forming an abstraction layer over disaggregated memory. This abstraction layer provides main-memory-like APIs to upper-level applications, and thus enabling existing data structures and algorithms to function over disaggregated memory with minimal code change. To demonstrate the usability of SELCC, we implement a B-tree and three transaction concurrency control algorithms over SELCC's APIs. Micro-benchmark results show that the SELCC protocol achieves better performance compared to RPC-based cache-coherence protocols. Additionally, YCSB and TPC-C benchmarks indicate that applications over SELCC can achieve comparable or superior performance against
competitors over disaggregated memory.
SELCC: Coherent Caching over Compute-Limited Disaggregated Memory
Walid G. Aref
Submitted December 13, 2023 / Accepted July 26, 2024
=========================================================================
printfolios=true
§ INTRODUCTION
Memory disaggregation has emerged as an important trend in cloud databases
in both
academia, e.g., <cit.> and industry, e.g., <cit.>. The primary motivation behind disaggregated memory is to utilize the large amounts of stranded memory <cit.> in the data center. Stranded memory refers to
memory that is inaccessible to the local server because all the available cores have been allocated to virtual machines <cit.>.
Memory disaggregation addresses this issue by physically decoupling the memory resources from compute servers, accessing the stranded memory via high-speed networks.
By establishing disaggregated memory over stranded memory, cloud providers can significantly enhance memory utilization, and reduce the total cost of ownership (TCO).
With this type of memory disaggregation, the memory nodes have near-zero computing power, and the CPU cycles used for data communication between compute and memory nodes should be minimized.
The key enabler
for
memory disaggregation is the network advancement <cit.>, e.g., the Remote Direct Memory Access technology (RDMA), because it can fully bypass the CPU on the remote memory when transferring the data,
and hence
achieving low latency[In this paper, we assume one-sided RDMA as the primary method for data transfer between compute and memory nodes, while allowing two-sided RDMA messages among compute nodes.].
Furthermore, disaggregated memory offers significant benefits to cloud native databases <cit.>.
The independent provisioning of compute and memory resources introduces elasticity to applications <cit.>. More importantly, memory disaggregation enables the sharing of main memory among multiple compute nodes, embracing the shared-memory architecture. This advancement facilitates the next generation of multi-primary
architectures (e.g., PolarDB MP <cit.>), resolving the conflict among the multiple writers distributively via
one-sided RDMA rather than heavy-weight consensus algorithms (e.g., Paxos) <cit.> or centralized log servers <cit.>.
However, developing database
systems
over disaggregated shared memory remains challenging, particularly due to the
data synchronization
problems
when involving
multiple writer nodes.
Based on the existing literature on disaggregated memory, we identify two technical challenges in supporting concurrent writer nodes. Additionally, we recognize a common research limitation encountered in many existing studies. This paper aims to address these two technical challenges with a unified approach and proposes a research methodology to avoid the identified limitation.
Challenge 1: Access atomicity over RDMA.
The atomicity between concurrent one-sided RDMA reads and writes is not guaranteed by the network card that potentially results in corrupted data being returned by RDMA reads.
A common approach
for
addressing this challenge is using the one-sided shared-exclusive latch along with RDMA atomic operations. Research suggests that RDMA atomic operations may suffer from low performance, and has proposed optimizations, e.g., versioning and checksum <cit.> to resolve the read-write conflicts optimistically. However, recent study <cit.> revisits these optimizations, and indicates that some of those optimizations are problematic. It concludes that the one-sided shared-exclusive latch turns out to be the most efficient and reliable solution for ensuring RDMA access atomicity. However, the proposed shared exclusive latch is still a prototype with unresolved challenges,
e.g.,
latch upgrade/downgrade and latch fairness.
Its performance
is yet to be
verified in a real disaggregated setup, e.g., in a disaggregated index or a disaggregated transaction engine.
Challenge 2: Cache coherence among multiple compute nodes.
Given that RDMA latency is approximately 10 times higher than that of main-memory access, minimizing RDMA round-trips in the critical paths of transactions becomes essential.
Existing cache coherence protocols over RDMA,
e.g.,
GAM <cit.> and
the
lock fusion module in PolarDB-MP <cit.>
use remote procedure calls (RPC), relying on the computing power of the memory nodes.
These protocols
become
bottlenecked by the limited computing power when applied to stranded memory.
In addition, some index studies simplify the cache coherence problem by only caching the metadata of a data structure (e.g., the internal nodes of a B-tree, the hash directory) <cit.>, but these caches are constrained in size, and are not adaptable to the size of available local memory. Furthermore, the implementation of metadata caching strongly depends on the specific data structure, limiting its generalizability. Therefore, there is significant need for a generative cache coherence protocol that eliminates the need for computing over remote memory.
The need for a proper abstraction layer.
In realizing databases over disaggregated memory, existing academic research
focuses only on specific data structures or algorithms.
Integrating these research outcomes into a unified DB system is challenging due to their heterogeneous, inefficient, and even unsafe designs in addressing the data synchronization problems.
In addition to optimizing individual data structures and algorithms, it is important to establish a high-performance, general-purpose disaggregated memory abstraction layer that effectively addresses the two aforementioned technical challenges underneath.
Subsequently, applications,
e.g.,
indexes and transaction engines, can be built on top of this layer.
The advantages of an abstraction layer over disaggregated memory are threefold: (1) It conceals the complexity of RDMA programming from developers. (2) It prevents developers from implementing intuitive yet problematic optimizations for data synchronization. As highlighted in <cit.>, guaranteeing the correctness of RDMA synchronization requires expertise in RDMA programming. (3) It manages various types of data, including data tables and indexes, within a unified cache framework, thereby simplifying the regulation of local memory for compute-side caching.
Existing abstraction layers over distributed shared memory (e.g., FaRM <cit.>, NAM <cit.>, GAM <cit.>) either do not leverage local caches to explore data locality or rely heavily on the computing power of the memory nodes.
Our approach.
This paper presents Shared-Exclusive Latch based Cache Coherence protocol (SELCC), an innovative solution for data synchronization problem over disaggregated shared memory.
By introducing lazy latch release and invalidation messages, the one-sided RDMA shared-exclusive latch protocol can be upgraded to address the cache coherence problem with sequential consistency.
This unified protocol effectively guarantees RDMA access atomicity and cache coherence simultaneously,
minimizing the RDMA round trips incurred when addressing data synchronization problems. Compared with the RPC-based solutions, SELCC does not involve any computing on the remote memory and potentially reduces RDMA round trips by converting two send-and-reply RDMA messages into one combined one-sided RDMA operation when fetching the data from disaggregated memory. To optimize performance, SELCC protocol embeds
cache directory entries into RDMA latch words and prioritizes local concurrency control over global concurrency control. Additionally,
SELCC enhances the fairness of the RDMA spin latch protocol by attaching priorities to invalidation messages. We implement the SELCC protocol within lightweight LRU caches on the compute nodes, establishing an efficient disaggregated memory abstraction layer. SELCC provides main-memory-like APIs to upper-level applications, facilitating the migration of data structures and algorithms and achieving performance comparable to that of competitors optimized for disaggregated shared memory (these issues are elaborated further in <ref> and <ref>).
Contributions. This paper makes several key contributions: (1) It introduces SELCC, an upgraded one-sided RDMA latch protocol that simultaneously resolves the RDMA access atomicity and cache coherence issues.
(2) It envisions an innovative approach to support multi-primary designs via disaggregated shared memory. Compared to the data synchronization approach in PolarDB-MP, SELCC frees remote memory from performing computing during data synchronization, thus making remote memory more suitable for stranded memory disaggregation.
(3) This paper instantiates the SELCC protocol into an abstraction layer that provides main-memory-like APIs. It facilitates the seamless migration of data structures and algorithms from local memory to disaggregated memory, thereby simplifying database systems research and development over disaggregated shared memory.
(4) It presents a thorough experimental study of SELCC, demonstrating its performance benefits, and identifying its favorable workload patterns.
§ BACKGROUND
RDMA Technology. Remote Direct Memory Access (RDMA) is a high-speed inter-memory communication method with low latency. It allows direct access to the memory of a remote node <cit.>. RDMA bypasses the host operating system when transferring data to avoid extra data copy. RDMA's kernel-bypassing and low-latency features make it applicable to high-performance data centers <cit.>.
ibverbs is a C++ library for RDMA programming that provides low-level implementations of RDMA primitives. There are five types of primitives in ibverbs: RDMA send, RDMA receive, RDMA write, RDMA read, and RDMA atomic <cit.>. The memory buffer involved in the RDMA primitives needs to be registered with the RDMA network card in advance.
RDMA write and RDMA read are one-sided RDMA primitives that directly access the remote server's memory without involving the remote server's CPU. Two-sided RDMA primitives (including RDMA send and RDMA receive) involve both sides of the compute and memory servers.
RDMA atomics include two primitives: compare-and-swap (CAS) and fetch-and-add (FAA). These primitives ensure atomic read-modify-write operations on data of at most 8 bytes. Additionally, CAS and FAA can be leveraged to implement a shared-exclusive latch over RDMA (SEL), guaranteeing atomicity among RDMA reads and writes <cit.>.
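As a concrete illustration, the sketch below posts a one-sided RDMA CAS on a 64-bit remote word using ibverbs. It is a minimal example rather than the paper's implementation; it assumes that the queue pair qp, the registered 8-byte local buffer old_val inside local_mr, and the remote address/rkey have already been exchanged during connection setup, and that completions are polled elsewhere.

#include <infiniband/verbs.h>
#include <cstdint>

// Post a one-sided RDMA compare-and-swap on an 8-byte remote latch word.
bool post_latch_cas(ibv_qp* qp, ibv_mr* local_mr, uint64_t* old_val,
                    uint64_t remote_addr, uint32_t rkey,
                    uint64_t expected, uint64_t desired) {
  ibv_sge sge{};
  sge.addr   = reinterpret_cast<uint64_t>(old_val);  // the old remote value lands here
  sge.length = sizeof(uint64_t);
  sge.lkey   = local_mr->lkey;

  ibv_send_wr wr{};
  wr.opcode     = IBV_WR_ATOMIC_CMP_AND_SWP;         // RDMA CAS primitive
  wr.sg_list    = &sge;
  wr.num_sge    = 1;
  wr.send_flags = IBV_SEND_SIGNALED;
  wr.wr.atomic.remote_addr = remote_addr;            // address of the 64-bit latch word
  wr.wr.atomic.rkey        = rkey;
  wr.wr.atomic.compare_add = expected;               // compare value
  wr.wr.atomic.swap        = desired;                // value written if the compare matches

  ibv_send_wr* bad = nullptr;
  return ibv_post_send(qp, &wr, &bad) == 0;          // completion is polled elsewhere
}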
Cache-Coherence Protocols.
Cache coherence is a concept in multiprocessor systems ensuring that multiple copies of data in various CPU caches remain consistent <cit.>. In traditional multiprocessor systems, consistency is ensured via hardware-level cache-coherence protocols. However, in the context of disaggregated memory systems,
these
hardware-level protocols are not present. Consequently, a software-level cache-coherence mechanism becomes necessary when local caches are deployed in compute nodes.
Although quorum-based protocols, e.g., <cit.>, can maintain strong data consistency, they are primarily designed for data replication rather than data caching. These protocols broadcast messages to all compute nodes for every read and write, which contradicts the fundamental principle of caching: minimizing network access by exploring data locality. Consequently, they are not the optimal solution for the cache coherence problem in disaggregated memory.
An effective approach to addressing cache coherence should follow the methods used in multiprocessor systems. Cache coherence protocols, e.g., MSI, MESI, and MOESI <cit.> in multiprocessor systems maintain cache consistency by tracking the state of each memory block and enforcing rules for read and write operations. These protocols fall into two primary categories based on how compute nodes are informed of operations from other processors: snoop-based protocols and directory-based protocols.
Snoop-based protocols monitor a common bus to detect whether a cache block is being read or written by another processor, while directory-based protocols utilize a directory to keep track of which caches have copies of each memory block, sending messages only to the processors with valid cache copies.
§ SYSTEM OVERVIEW
In the proposed abstraction layer, all compute nodes share the same disaggregated memory space, provided by a group of memory servers. Data within this space can be addressed using an 8-byte global pointer (NodeID, offset), where NodeID is the unique identifier of the memory server and offset specifies the memory offset inside the server. Compute nodes interact with remote memory via compute-side caching, leveraging access locality to minimize unnecessary RDMA round trips.
As in Figure <ref>, the disaggregated memory space is divided into blocks of configurable sizes, referred to as Global Cache Lines (GCLs). GCL serves as the fundamental data manipulation unit between the compute and memory nodes and comprises 3 components: A one-sided global latch word, a user-defined header, and the data region. The global latch word (8 bytes) is a crucial element, ensuring one-sided RDMA access atomicity and cache coherence.
The user-defined header is an application-specific header, similar to page headers in traditional databases. Finally, the data region stores data objects,
e.g.,
tuples for data tables and key-value pairs for indexes.
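A possible in-memory layout of a GCL is sketched below. Only the 8-byte latch word and the (NodeID, offset) global pointer come from the text above; the struct names, the header size, and the overall GCL size are illustrative assumptions.

#include <cstddef>
#include <cstdint>

constexpr std::size_t kGCLSize    = 1024;  // configurable block size (assumed value)
constexpr std::size_t kHeaderSize = 24;    // user-defined header size (assumed value)

// 8-byte global pointer (NodeID, offset) addressing the disaggregated memory space.
struct GlobalPtr {
  std::uint64_t node_id : 8;   // memory server ID
  std::uint64_t offset  : 56;  // offset inside that server
};

// Global Cache Line (GCL): the unit of transfer between compute and memory nodes.
struct alignas(8) GlobalCacheLine {
  std::uint64_t latch_word;                    // one-sided global latch + cache directory
  std::uint8_t  user_header[kHeaderSize];      // application-specific header (like a page header)
  std::uint8_t  data[kGCLSize - sizeof(std::uint64_t) - kHeaderSize];  // tuples / key-value pairs
};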
SELCC exposes a straightforward interface to upper-level applications (See Table <ref>).
Users can allocate and deallocate global cache lines through the corresponding allocation APIs. Each data access is conducted via the local cache and is protected by an SELCC latch, a hierarchical structure consisting of a local latch in the cache entry and a global latch in remote memory.
The acquisition of an SELCC latch (e.g., via SELCC_SLock or SELCC_XLock) ensures that both the local and global latches are obtained, thereby guaranteeing access atomicity and cache coherence across compute nodes. Upon acquisition, the API returns a cache handle pointing to the local copy of the target GCL. The release of the SELCC latch releases the local latch immediately while deferring the release of the global latch until another compute node accesses the same GCL. Additionally, SELCC provides APIs for global atomic operations that can be utilized to generate global timestamps or sequence numbers.
This layer of abstraction allows users to disregard the intricacies of RDMA programming. Many data structures and algorithms designed for monolithic servers can be migrated onto SELCC seamlessly (<ref>), as the RDMA access atomicity and cache coherence problems have already been resolved beneath the abstraction.
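The following snippet illustrates how an application might drive this interface. Apart from SELCC_SLock and SELCC_XLock, which are described later, the identifiers (the selcc namespace, Alloc, the unlock calls, FAA64, and the tuple helpers) are placeholders standing in for the APIs summarized in Table <ref>, not the actual function names.

// Hypothetical usage of the SELCC abstraction layer.
void example_usage(GlobalPtr timestamp_gaddr) {
  GlobalPtr gaddr = selcc::Alloc();                  // allocate a new GCL

  // Writer: acquire the exclusive SELCC latch, modify the local copy, release.
  CacheHandle* w = selcc::SELCC_XLock(gaddr);        // local + global exclusive latch
  write_tuple(w->cache_line()->data, /*slot=*/0, make_tuple(42));
  selcc::SELCC_XUnlock(w);                           // local latch freed; global latch kept lazily

  // Reader: shared access through the same interface.
  CacheHandle* r = selcc::SELCC_SLock(gaddr);
  Tuple t = read_tuple(r->cache_line()->data, /*slot=*/0);
  selcc::SELCC_SUnlock(r);

  // Global atomic operation, e.g., for timestamp generation.
  uint64_t ts = selcc::FAA64(timestamp_gaddr, /*delta=*/1);
  (void)t; (void)ts;
}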
§ THE SELCC PROTOCOL
In this section, we introduce the Shared-Exclusive Latch based Cache Coherence (SELCC) protocol, which addresses two fundamental issues over disaggregated shared memory: (1) Cache coherence across compute-side caches in the compute nodes, and (2) Atomicity for concurrent RDMA read and write operations. The main idea is to upgrade the existing shared-exclusive latch protocol (SEL) to solve the cache coherence problem.
SELCC
follows the design principle of disaggregated memory to use one-sided RDMA solely for data transfer between the compute and memory layers while allowing two-sided RDMA for communication among the compute nodes.
This principle ensures high scalability, as the protocol does not rely on the computing power of the memory nodes. Furthermore,
SELCC adheres
to design guidelines <cit.> that ensure correctness and efficiency of RDMA programming.
§.§ Main Idea
The SELCC protocol is developed based on the one-sided shared-exclusive latch protocol (SEL)
that
ensures RDMA access atomicity <cit.>. To address the cache-coherence problem, we draw inspiration from the MSI protocol <cit.>, a cache coherence protocol for multi-processor systems. MSI maintains cache-entry states, and employs a state machine to ensure the freshness of data reads and writes. In Figure <ref>, the MSI protocol comprises 3 states: Modified, Shared, and Invalid, with state transitions triggered by local processor read and write (PrRd and PrWr) operations, or by bus messages from other processors (BusRd, BusWr, or BusUpgr). Cache coherence is maintained as long as the cache state adheres to the state machine depicted in Figure <ref>c.
Interestingly, we observe that the states of the MSI protocol have semantic meanings similar to those of the SEL protocol. In the SEL protocol, the Exclusive state implies a locally modified copy, the Shared state denotes a locally shared copy, and the Latch Off state represents an invalid local copy. However, the conditions for state machine transitions differ. In SEL, the compute node eagerly releases the RDMA latch once local access is complete, so data copies are invalidated immediately, whereas MSI invalidates cache states lazily upon receiving bus signals from other processors. By representing cache states with latch states and aligning the SEL protocol's state machine with that of the MSI protocol, the cache coherence problem can be resolved.
To achieve the state machine alignment, SELCC introduces the concept of lazy latch release and the invalidation messages (PeerRd, PeerWr, PeerUpgr) in Figure <ref>b among compute servers. An invalidation message is issued when a compute node fails to acquire the global latch. Compute nodes do not immediately release the latch after completing the access; instead, they defer latch release until receiving an invalidation message from a peer compute node or until the cache entry is being evicted. Consequently, SELCC's state machine mirrors that of the MSI protocol, as in Figures <ref>b and <ref>c. When a compute node successfully acquires the latch, it stores the fetched copy in the local cache, and uses latch states to represent corresponding cache states.
§.§ Distributing Cache Directory into Latch Words
With lazy latch release, when a compute node accesses disaggregated memory that is latched by other nodes, invalidation messages must be sent to those nodes. The biggest challenge is how to efficiently determine the target servers to which invalidation messages should be sent.
Broadcasting messages for every operation would exhaust network bandwidth. Thus, a cache directory is essential on the disaggregated memory to track the status of each global cache line, including the cache state and the IDs of the cache-copy holders.
However, maintaining this directory in disaggregated memory is challenging, particularly when no extra RDMA round trips can be afforded.
Observe that a traditional one-sided RDMA latch does not fully utilize the 64 bits of the latch word. Thus, it is feasible to distribute and embed the cache directory entries into the RDMA latch words of the global cache lines (see Figure <ref>). The benefits of this approach are twofold: (1) No additional RDMA round trips are introduced to maintain the cache directory, and (2) The atomicity of directory changes is naturally ensured. When a compute node fails to find a valid cache entry in its local cache, it attempts to acquire the latch via RDMA atomic operations. If latch acquisition fails, the compute node obtains the server IDs of the current cache-copy holders from the return value of the RDMA atomic operation, enabling the determination of invalidation message recipients.
In Figure <ref>, a latch word consists of 64 bits; the maximum data length supported by an RDMA atomic operation. We divide these 64 bits into two parts: (1) An exclusive latch holder's ID (8 bits), and (2) A reader holders' ID bitmap (56 bits). With this new latch word structure, the RDMA latch can record both shared and exclusive latch holder IDs. This protocol can support up to 56 compute nodes, exceeding the 32 compute nodes typically supported by modern cloud-native databases <cit.>. With multi-cores on each compute node, a system using SELCC can support thousands of cores.
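A plausible encoding of this 64-bit latch word is sketched below. The division into an 8-bit writer ID and a 56-bit reader bitmap follows the text, but the helper names and the placement of the writer ID in the most significant byte are assumptions made for illustration; node IDs are assumed to start at 1 so that a value of 0 means "no writer".

#include <cstdint>

// 64-bit latch word: 8-bit exclusive-holder ID | 56-bit shared-holder bitmap.
constexpr std::uint64_t kReaderMask = (1ULL << 56) - 1;

inline std::uint64_t make_latch_word(std::uint8_t writer_id, std::uint64_t readers) {
  return (static_cast<std::uint64_t>(writer_id) << 56) | (readers & kReaderMask);
}
inline std::uint8_t  exclusive_holder(std::uint64_t w) { return static_cast<std::uint8_t>(w >> 56); }
inline std::uint64_t reader_bitmap(std::uint64_t w)    { return w & kReaderMask; }
inline bool          is_unlatched(std::uint64_t w)     { return w == 0; }

// Values used by the latch procedures in the next subsection:
inline std::uint64_t exclusive_value(std::uint8_t node_id) { return make_latch_word(node_id, 0); } // (NodeID, 0b00...0)
inline std::uint64_t shared_bit(std::uint8_t node_id)      { return 1ULL << node_id; }             // (0, 1 << NodeID)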
§.§ Revisiting RDMA Latch Procedures
As the latch data structure has been changed in SELCC, it is necessary to revisit the procedures for acquiring and releasing RDMA shared-exclusive latches.
This subsection illustrates the steps for acquiring and releasing shared-exclusive latches using the new proposed latch structure.
Initially, the global latch is off, represented by (0, 0b00...0). Each reader or writer in the compute nodes must acquire the global latch, read the cache line in a single combined RDMA round trip, and then access the data locally.
(a) Exclusive latch acquisition.
When acquiring the exclusive latch, the writer atomically compares the entire latch word against the value (0, 0b00...0) and swaps it with (NodeID, 0b00...0).
If the CAS fails, the latch word before the operation is returned to the compute node, from which the compute node parses the shared/exclusive latch holders' IDs and sends the invalidation messages. This procedure is repeated until the CAS succeeds.
(b) Shared latch acquisition. When acquiring the shared latch, a reader atomically fetches the value of the latch word and sets its bit in the bitmap with the value (1 << NodeID). The reader checks the returned value to see whether a writer holds the latch. If so, the latch acquisition fails. The reader then sends an invalidation message according to the returned exclusive latch holder ID, and resets its bit in the bitmap with another atomic operation.
These procedures are repeated until the returned value implies that no exclusive latch holder exists.
(c) Shared/exclusive latch release.
In SELCC, global latches are not released until an invalidation occurs or the cache entry is evicted from the cache. RDMA latch release is handled by background threads dedicated to processing invalidation messages. When releasing the exclusive latch, the compute node atomically fetches and subtracts the latch word by (NodeID, 0b00...0). We do not adopt the method in <cit.> that releases the exclusive latch via CAS, as write releases could spuriously fail due to concurrent read lock operations, resulting in livelock. When releasing the reader latch, the compute node atomically resets its bit in the bitmap.
(d) Latch up/downgrading.
According to the state machine in Figure <ref>, the exclusive latch may need to be downgraded to a shared latch based on an invalidation message from a peer reader. To achieve this, the compute node atomically compares and swaps the latch word from (NodeID, 0b00...0) to (0, 1 << NodeID). Conversely, the compute node may need to upgrade its latch from shared to exclusive. To achieve this, the compute node first attempts to atomically compare and swap the latch word from (0, 1 << NodeID) to (NodeID, 0b00...0). However, a deadlock could occur if two nodes try to upgrade the same global latch simultaneously. To resolve this, after several failed attempts at an atomic upgrade, the operation falls back to a two-step process consisting of shared latch release and exclusive latch acquisition.
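Building on the latch-word helpers sketched earlier, the exclusive-latch acquisition loop of procedure (a) could look roughly as follows. The wrappers rdma_cas, send_invalidation, and backoff, as well as the MsgType tag, are assumed helper names rather than SELCC's actual interfaces.

// Exclusive latch acquisition (procedure (a)).
bool acquire_exclusive(GlobalPtr gaddr, std::uint8_t my_id) {
  const std::uint64_t desired = exclusive_value(my_id);        // (NodeID, 0b00...0)
  while (true) {
    std::uint64_t observed = rdma_cas(gaddr, /*expected=*/0, desired);
    if (observed == 0) return true;                            // latch acquired
    // CAS failed: the returned word tells us who currently holds the latch.
    if (std::uint8_t writer = exclusive_holder(observed))
      send_invalidation(writer, gaddr, MsgType::PeerWr);
    std::uint64_t readers = reader_bitmap(observed);
    for (std::uint8_t node = 0; node < 56; ++node)
      if (readers & (1ULL << node))
        send_invalidation(node, gaddr, MsgType::PeerWr);
    backoff();                                                 // retry interval (see Sec. 5.1)
  }
}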
§ OPTIMIZATIONS
We instantiate the SELCC protocol into the compute-side cache over the disaggregated memory,
creating a high-performance abstraction layer. The local cache is a lightweight hash table with LRU replacement policy and is sharded to support highly concurrent local access. Even though the SELCC protocol can theoretically address the cache coherence problem,
several implementation challenges remain for its instantiation: (1) Efficiently implementing invalidation messages across compute nodes, (2) Coordinating local concurrency control and global concurrency control efficiently, given that each compute node can support many local threads concurrently accessing the local cache, (3) Avoiding latch starvation to maintain fairness among the compute nodes.
§.§ Efficient Invalidation Messages
Invalidation messages play a crucial role in the SELCC protocol. A poor design can result in slow or blocked reads/writes over the shared data. Invalidation messages are realized by RPC through RDMA and contain information such as the global address of the target Global Cache Line (GCL) and the type of invalidation message (PeerWr, PeerRd, or PeerUpgr). Note that RPC only exists between compute servers, while the communication between compute and memory servers is purely one-sided.
Each pair of compute nodes is interconnected via a limited number of RDMA queue pairs.
Whenever a thread fails to acquire the global latch for a cache line, it issues invalidation messages to prompt the current latch holders to release the latch (shared/exclusive) and write back any dirty data if applicable.
These messages are handled by background threads on the receiver side, termed RPC handlers, which release the global latch on behalf of the sender. Before releasing the global latch, the background threads acquire the local latch to synchronize with local accessors. Invalidation messages may be dropped by the RPC handlers if the cached entry has already been invalidated by other compute nodes or if the target cache line has already been evicted. Therefore, a resending mechanism for invalidation messages is necessary to address message dropping.
Efficiently processing invalidation messages involves three main challenges: (1) Preventing RPC handling threads from being blocked that can lead to blocked reads and writes at the sender if the local read/write thread holds the local latch for too long; (2) Avoiding excessive resending messages that could saturate network bandwidth; and (3) Ensuring that each message is processed only once, as duplicate processing of invalidation messages can reduce cache hit rates and overload the invalidation message handler.
To address the first challenge, the invalidation message handler uses a non-blocking try-latch to acquire the local latch before processing a message, ensuring that the RPC handler is never blocked. If the try-latch fails, the handler drops the message and processes the next one.
To prevent network saturation from excessive resending of messages, there is a time interval T between each resend, inversely related to the total number of global latch retries. During the interval between resending messages, the message sender retries the global latch to obtain the latest information on the valid cache copies, and adjusts the targets for the next invalidation messages accordingly.
To guarantee the messages being processed at most once, each invalidation message is assigned a unique ID (cache line ID + cache line version), allowing the handler to verify if the message has already been processed.
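An illustrative layout for such a message, together with the at-most-once check, is sketched below. The field and helper names are assumptions; only the message types, the priority field, and the (cache line ID + version) message ID come from the text. GlobalPtr is the global pointer sketched earlier.

#include <cstdint>
#include <unordered_set>

enum class MsgType : std::uint8_t { PeerRd, PeerWr, PeerUpgr };

// Illustrative layout of an invalidation message.
struct InvalidationMsg {
  GlobalPtr     gcl_addr;   // target global cache line
  MsgType       type;       // PeerRd / PeerWr / PeerUpgr
  std::uint8_t  sender_id;  // node that failed to acquire the global latch
  std::uint32_t priority;   // latch-acquisition priority (Sec. 5.3)
  std::uint64_t msg_id;     // cache line ID + cache line version, for at-most-once handling
};

// At-most-once processing on the handler side.
bool already_processed(const InvalidationMsg& m, std::unordered_set<std::uint64_t>& seen) {
  return !seen.insert(m.msg_id).second;  // true if this message ID was handled before
}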
§.§ Coordinating 2-Level Concurrency Control
The next challenge is efficiently coordinating concurrency control between local and global accesses.
This complexity arises from the two-level hierarchy, as illustrated in Figure <ref>. The first level, local concurrency control, resolves access conflicts
within a single compute node using local latches. The second level, global concurrency control, addresses RDMA conflicts on global cache lines among compute nodes using global latches.
The hierarchical concurrency control design aims to minimize RDMA network traffic by effectively leveraging local concurrency control. RDMA network traffic primarily stems from two sources: RDMA lock acquisition and invalidation messages.
As in <cit.>, RDMA atomic operations encounter bottlenecks under highly contended workloads. To mitigate the network traffic associated with RDMA lock acquisition, we utilize the local cache to store not only the cache line but also the global cache states (Modified, Shared, or Invalid) that correspond to the global latch state. This enables local threads to verify both data freshness and the global latch state through the local cache entry. Moreover, a local shared-exclusive mutex is installed on each cache entry to ensure the atomicity of local read and write operations. This approach, in Figure <ref>, reduces RDMA round trips on the global latch by resolving the conflicts locally.
To prevent network bandwidth saturation from excessive invalidation messages, we optimize the protocol for its worst-case scenario.
A cache coherence protocol could perform poorly under highly skewed workloads due to the large volume of invalidation messages exchanged among compute nodes. To address this issue, we prioritize local concurrency control over global concurrency control, ensuring that conflicts within the same compute node are resolved by local latches first.
To achieve this, the invalidation message handling threads use a try-latch to acquire the latch on the cache entry.
Since a try-latch fails immediately when there is a conflict, rather than waiting as a normal latch acquisition would, the invalidation message handler has a lower priority in acquiring the local latch compared to the front-end accessors.
As in Figure <ref>, this approach can significantly reduce the number of invalidation messages for accessing the same data.
However, prioritizing local access over global access can prevent invalidation messages from taking effect under highly skewed workloads, potentially leading to global starvation on other compute nodes. The solution to this starvation problem is presented in <ref>.
§.§ Fairness of One-Sided RDMA Latches
Fairness is a significant challenge for the SELCC protocol, as it is based on a shared-exclusive spinlock. The read and write latency on a particular server can become extremely long if that server experiences starvation during latch acquisition. In monolithic servers, the latch fairness problem is often addressed by maintaining a FIFO queue for each latch. However, in the context of disaggregated memory, maintaining such a queue without incurring extra RDMA round trips is extremely difficult. A new, efficient mechanism is required to enhance latch fairness over disaggregated memory. This section starts from the root causes of latch starvation in SELCC and proposes the corresponding solutions. Due to the two-level hierarchy of the system, two root causes of latch starvation can be identified, each requiring distinct resolution techniques.
Root Cause 1: Asymmetric Local Latch Acquisition. As stated in <ref>,
to minimize the volume of invalidation messages traffic, local front-end accessors have higher priority than invalidation message handlers when acquiring the local latch.
A compute node can experience global latch starvation for a particular data object if a peer compute node with a valid copy continuously receives local access requests from multiple threads for that data object. In this scenario, the local accessors continuously hold the local latch, causing the invalidation message handler's requests to fail continuously, leading to global latch starvation. This type of starvation, caused by the asymmetric chances of acquiring the local latch between local front-end accessors and background invalidation message handlers, can be resolved through a lease mechanism on local latch within a single compute node.
Root Cause 2: Asymmetric Global Latch Acquisition. It is not necessary to have symmetric hardware configurations across all the compute nodes. Consequently, some compute nodes with weak CPU or network resources may experience latch starvation due to the low frequency of RDMA latch retries. Additionally, if there are continuous global read requests for a particular data object, a write request for that data object may struggle to acquire the exclusive latch because peer compute nodes continuously hold the shared latch, preventing the writer from obtaining the exclusive latch. These types of starvation, stemming from asymmetric chances of acquiring the global latch among multiple compute nodes, require an effective global coordination mechanism among multiple compute nodes to resolve.
§.§.§ Addressing Local Latch Starvation
To address local latch starvation, we implement a lease mechanism that forces the compute node to release the global latch when a data object has been continuously accessed by local front-end threads for an extended period.
To interrupt these continuous local accesses at an appropriate time, two counters, the read access counter (R_c) and the write access counter (W_c), are maintained in each cache entry. These counters are activated only when an invalidation message is dropped due to the ongoing local access and is deactivated when a thread acquires the latch without spinning, indicating that the data is no longer heavily accessed.
The counters are incremented by 1 when a local access waits for the latch. Synthetic access times for the cache entry are calculated as
H_times = R_c/P + W_c, where P represents the number of front-end threads on the compute node. When the synthetic access times exceed a predefined threshold θ, the local thread proactively releases the global latch and resets the counters.
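In code, the lease check of this subsection might look like the sketch below; the struct, field, and parameter names are chosen only for illustration.

#include <atomic>
#include <cstdint>

// Per-cache-entry counters for the lease mechanism.
struct CacheEntryStats {
  std::atomic<std::uint64_t> read_waits{0};   // R_c: reads that waited for the local latch
  std::atomic<std::uint64_t> write_waits{0};  // W_c: writes that waited for the local latch
  bool counting = false;                      // activated once an invalidation message is dropped
};

bool must_release_global_latch(const CacheEntryStats& s, unsigned P, double theta) {
  if (!s.counting) return false;
  // H_times = R_c / P + W_c, compared against the predefined threshold theta.
  const double h = static_cast<double>(s.read_waits.load()) / P
                 + static_cast<double>(s.write_waits.load());
  return h > theta;
}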
§.§.§ Addressing Global Latch Starvation
To address global latch starvation, we adopt a priority aging mechanism, originally devised to solve the starvation problem in CPU scheduling <cit.>. In SELCC, each latch is assigned a priority that is positively correlated to the number of retries a compute node has conducted for a particular RDMA latch.
The main idea is to ensure that global latch ownership is always handed over to the compute node with the highest latch priority. To achieve this, we implement 3 mechanisms. First, as stated in <ref>, there is a manually injected time interval between each latch retry for a particular latch. This interval decreases as the priority of latch acquisition increases. Thus, compute servers having prolonged wait times are more likely to successfully acquire the latch through more frequent latch retries.
Second, we develop a deterministic global latch handover mechanism based on latch priority. The exclusive latch holder, receiving invalidation messages from all conflicted servers, acts as a centralized decision-maker for global latch ownership transfer. The latch acquisition priority is attached to the invalidation message, providing the exclusive latch holder with information on which compute node is experiencing latch starvation. During the continuous local access (Sec. <ref>), the invalidation message handler keeps receiving invalidation messages from other compute nodes, and stores the message with the highest priority value in the cache entry. When releasing the global latch,
the invalidation handler thread checks the stored invalidation messages in the cache entry. If there is a stored invalidation message from Server B with a priority greater than that of the others, it deliberately hands over the latch from Server A to Server B by performing an RDMA CAS on the latch word (Compare: (A, 0b00...0), Swap: (B, 0b00...0)).
After the deterministic latch ownership transfer, Server A clears the stored invalidation message, and resets all relevant states in the cache entry.
Finally, to prevent write starvation caused by continuous global reads, all global readers create a time window during which no concurrent reader holds the target shared latch, allowing a concurrent writer to acquire the exclusive latch.
To realize this, when latch starvation is detected, we inject a spinning wait between the forced shared latch release and the next shared latch acquisition on the same data object. The spin duration is designed as T_spin = P_inv × T_r, where P_inv is the priority level of the received invalidation message and T_r is the round-trip time of an RDMA operation.
§ READ AND WRITE SUPPORT OVER SELCC
As outlined in <ref>, data access over SELCC is protected by the SELCC latch. Before conducting write or read operations on the GCL in the local cache, accessors have to acquire the SELCC latch via the APIs to obtain access permission. After completing the access, the thread must release the corresponding SELCC latch via the APIs to allow other accessors, local or global, to access the GCL. SELCC latches refer not only to the RDMA latch in disaggregated memory, but also to the local latch in the compute-side cache. The interactions among the local caches and the remote RDMA latches are complex and need further illustration.
In this section, we detail the procedures behind the SELCC APIs to explain the read and write operations.
§.§ Reads with SELCC_SLock
Algorithm <ref> illustrates the procedure for acquiring the SELCC shared Latch.
First, the algorithm searches the local cache for the entry corresponding to gaddr, and retrieves the handle h (Line 2). If h is not null, the local shared latch on the cache entry is acquired (Line 4). Then, we check the global latch state of the cache entry (Line 5). If the state is either Shared or Exclusive, indicating a cache hit, the handle is returned (Line 6).
If the cache entry is not found, a new cache entry is created and inserted into the local cache (Line 8), and the local shared latch is acquired (Line 9).
If the cache entry is invalid or not found, we try to acquire the shared latch on the global cache line via RDMA in a loop (Lines 10-13). This involves issuing a combined RDMA request with CAS and read operations, and checking the returned value to verify whether the GCL has been exclusively latched by another compute node. If so, the algorithm sends invalidation messages to the current exclusive latch holder. This process repeats until the shared latch is successfully acquired, after which the handle to the cache entry is returned.
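A compact rendering of this procedure is given below. It follows the textual description rather than the exact pseudocode of Algorithm <ref>; the cache object, the State enum, and the rdma_* wrappers are assumed names.

// Sketch of SELCC_SLock following the description above.
CacheHandle* SELCC_SLock(GlobalPtr gaddr, std::uint8_t my_id) {
  CacheHandle* h = cache.lookup(gaddr);
  if (h == nullptr) h = cache.insert_new(gaddr);       // create a new (Invalid) entry
  h->local_latch.lock_shared();                        // local shared latch

  if (h->state == State::Shared || h->state == State::Modified)
    return h;                                          // cache hit: global latch already held

  while (true) {
    // One combined RDMA round trip: set our reader bit and read the GCL.
    std::uint64_t observed = rdma_acquire_shared_and_read(gaddr, my_id, h->cache_line());
    if (exclusive_holder(observed) == 0) {             // no remote writer: success
      h->state = State::Shared;
      return h;
    }
    send_invalidation(exclusive_holder(observed), gaddr, MsgType::PeerRd);
    rdma_reset_shared_bit(gaddr, my_id);               // undo our bit, back off, and retry
  }
}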
§.§ Writes with SELCC_XLock
Algorithm <ref> outlines the procedure for acquiring the SELCC exclusive latch (SELCC_XLock). Similar to SELCC_SLock, the global address is searched in the local cache (Line 2). If a valid entry is found in the global exclusive state, the handle is returned (Line 6). Otherwise, the exclusive global latch must be acquired, or upgraded from the shared global latch. To upgrade the shared latch, the accessor uses RDMA CAS to attempt an atomic upgrade of the latch from shared to exclusive. This attempt may fail if other compute nodes hold shared copies. Moreover, a deadlock may occur if another concurrent accessor is simultaneously trying to upgrade its SELCC shared latch to exclusive.
To handle potential deadlocks, the procedure (Lines 8-13) is repeated up to N times (N ≥ 2). If a deadlock is detected, we abandon the atomicity of the latch upgrade and fall back to a two-step process: releasing the shared latch (Line 14) and then acquiring the exclusive latch (Lines 18-21). If the cache entry is found invalid, the valid state is re-acquired by obtaining the global exclusive latch (Lines 18-21). If the cache entry is missed, a new entry is created, and both the local and global latches are acquired before returning the handle (Lines 16-21).
§.§ Unlocking SELCC Read and Write Locks
After completing the access, the SELCC latch must be released. The unlatching procedure involves two steps. First, the cache entry handle is released, indicating that the cache entry is ready for eviction. Second, the local latch in the cache entry is released, allowing other threads or the invalidation message handler to operate on this cache line. Notably, the global latch remains unchanged due to the lazy release mechanism.
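A minimal sketch of this two-step unlatching is shown below; the handle and cache interfaces are illustrative.

// Two-step unlatching; the global latch is intentionally left set (lazy release).
void SELCC_Unlock(CacheHandle* h, bool exclusive) {
  auto& latch = h->entry()->local_latch;
  cache.release_handle(h);                  // step 1: the entry becomes eligible for eviction
  if (exclusive) latch.unlock();            // step 2: let other local threads and the
  else           latch.unlock_shared();     //         invalidation handler proceed
}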
§ SELCC'S CONSISTENCY MODEL
SELCC guarantees the strongest achievable consistency level, sequential consistency.
Every compute node should observe operations from different compute nodes in the same sequential order <cit.>.
Sequential consistency is essential for a generative cache framework because many applications,
e.g.,
banking and financial services, rely on strong consistency to provide reliable and accurate services to users.
The primary reason SELCC achieves sequential consistency is its latch-based design with eager invalidation. The latch, acquired before reads or writes, serves as a barrier between operations, preventing read-write reordering within a thread. Eager invalidation ensures that before a compute node modifies data in the disaggregated memory, it must invalidate all cache copies. This forces all subsequent reads to fetch the latest data from the disaggregated memory. This mechanism guarantees that all compute nodes observe a write simultaneously when the writer releases the SELCC exclusive latch.
Consequently, there is a total order of writes observed by all compute nodes, determined by the moment the writer releases the SELCC exclusive latch.
As in Figure <ref>, four updates to the disaggregated shared memory occur from left to right in chronological order. The total order of the operations is (X = 1 → Y = 1 → Y = 2 → X = 2), determined by the moment the disaggregated memory completes the RDMA latch release. This differs from the real order of the write operations (Y = 1 → X = 1 → X = 2 → Y = 2) on the timeline.
No reader thread can observe values of X and Y that violate the sequential order determined by the moment of RDMA latch release.
By default, SELCC guarantees strong consistency. It is feasible to relax the consistency of SELCC for improved performance. For instance, we can relax the read-write ordering by enabling asynchronous writes. Instead of completing the exclusive latch acquisition and sending invalidation messages for each write operation, the writer can push the modified value and the target global cache line ID into a work request queue, and let dedicated background threads perform the write operations in FIFO order.
This approach results in a protocol with FIFO consistency <cit.>, enhancing performance by allowing asynchronous execution of writes.
§ APPLICATIONS OVER SELCC
As in <ref>, SELCC provides a main-memory-like programming model/API for users to develop data structures and algorithms using pessimistic concurrency control. SELCC APIs do not support optimistic concurrency control, as it is less reliable in terms of correctness (See <cit.> Consideration #3 Pessimistic synchronization is more “future proof”). Below, we demonstrate how to re-implement two of the most crucial database components: indexes and transaction engines,
using SELCC APIs.
§.§ Index Support over SELCC
Migrating an index from a monolithic server onto SELCC involves two main steps. The first step is organizing the basic data structure into Global Cache Lines (GCLs). For data structures that already organize multiple data objects into blocks, e.g., B-trees, R-trees, ART trees, and hash tables, this process is simplified by aligning the node structures onto GCLs. In cases where the original data structure is in-memory and does not organize multiple data objects into blocks (e.g., skip lists), there are two options: reorganizing multiple data objects into blocks based on their locality, or adjusting the global cache line size to match a single data object. The former approach minimizes space overhead, while the latter requires no code modifications. The second step involves replacing the local shared-exclusive latches in the algorithm with SELCC_SLock/SELCC_XLock calls. SELCC ensures both read-write atomicity and cache coherence globally. A concurrent B-link tree is reimplemented in this way. In <ref>, we compare its performance with optimized B-trees over disaggregated shared memory: Sherman <cit.> and DEX <cit.>.
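For instance, one traversal step of the migrated B-link tree could be wrapped as in the sketch below; the node layout (BTreeNode, is_leaf, child_for) is illustrative and not taken from the actual implementation.

// One traversal step of the migrated B-link tree over SELCC.
GlobalPtr btree_child_for(GlobalPtr node_gaddr, std::uint64_t key, std::uint8_t my_id) {
  CacheHandle* h = SELCC_SLock(node_gaddr, my_id);       // shared access to the node's GCL
  const auto* node = reinterpret_cast<const BTreeNode*>(h->cache_line()->data);
  GlobalPtr next = node->is_leaf ? node_gaddr            // stop at the leaf level
                                 : node->child_for(key); // search inside the node
  SELCC_Unlock(h, /*exclusive=*/false);
  return next;
}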
§.§ Transaction Support over SELCC
Migrating concurrency control algorithms from a monolithic server to SELCC involves three primary steps. First, tuples should be properly organized into GCLs. Second, local shared-exclusive latches are replaced with SELCC_XLock / SELCC_SLock locks. Finally, algorithms that require centralized timestamps use the global atomic operation APIs provided by SELCC to perform RDMA Fetch-and-Add (FAA) operations on a global timestamp generator, obtaining monotonically increasing timestamps.
Three types of algorithms have been implemented over SELCC: two-phase locking with no wait strategy (2PL), timestamp ordering (TO), and optimistic concurrency control (OCC). Tuples are organized in a heap style, meaning they are placed in GCLs by the chronological order of insertion. To ensure atomicity of tuple accesses, these accesses must be protected by locks.
For two-phase locking, the SELCC latches on the GCLs are reused for locking purposes, minimizing the RDMA round trips required by transaction concurrency control. To support durability, durable storage media are leveraged for write-ahead logging. Since each transaction is executed entirely within a single compute node via RDMA, transaction support over SELCC does not require a two-phase commit protocol.
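A simplified shape of the no-wait 2PL write path over SELCC is sketched below. The transaction bookkeeping, the logging call, and the try-variant of the exclusive latch are illustrative assumptions rather than the actual engine code.

// No-wait 2PL write over SELCC.
bool txn_update(Txn& txn, GlobalPtr tuple_gcl, const Tuple& new_val, std::uint8_t my_id) {
  CacheHandle* h = SELCC_TryXLock(tuple_gcl, my_id);   // GCL latch doubles as the tuple lock
  if (h == nullptr) { txn.abort(); return false; }     // no-wait: abort instead of blocking
  txn.locks.push_back(h);                              // held until commit/abort (2PL)
  wal_append(txn.id, tuple_gcl, new_val);              // write-ahead log on durable storage
  write_tuple(h->cache_line()->data, new_val);         // update the cached GCL in place
  return true;                                         // all latches released at commit/abort
}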
§ EVALUATION
Overview.
First, we run micro-benchmarks to show the scalability and performance benefits of SELCC as a cache coherence protocol (<ref>). Second, we use the YCSB benchmark to explore how the index over SELCC performs compared to state-of-the-art B-trees over disaggregated memory (<ref>). Third, we use the TPC-C benchmark to study how the transaction engines over SELCC perform under OLTP workloads (<ref>).
Testbed. Experiments are conducted on a cluster of 16 nodes in CloudLab <cit.>. The chosen instance type is c6220, which features two Xeon E5-2650v2 processors (8 cores each, 2.6GHz) and 64GB (8GB × 8) of memory per node. The cluster is interconnected using 56 Gbps Mellanox ConnectX-3 FDR network devices. Each server runs Ubuntu 18.04.1, and the NICs are driven by Mellanox OFED-4.9-0.1.7.
The 16 servers are split into 8 compute servers and 8 memory servers. The compute servers can utilize all the CPU cores but have a limited local cache (8GB by default). The memory agents on the memory servers have access to all the memory but are restricted to utilizing a limited number of CPU cores (1 core by default).
§.§ Micro-benchmark
First, we evaluate the scalability of SELCC as an abstraction layer supporting multiple primary servers, and then demonstrate the advantages of SELCC as a cache coherence protocol.
Baselines.
To show the efficiency of SELCC, we compare SELCC against two abstraction layers over disaggregated memory.
The first baseline is GAM, an RPC-based cache coherence protocol designed for distributed shared memory.
We test GAM with different consistency models: total store order consistency and sequential consistency, corresponding to GAM (TSO) and GAM (SEQ) in the figures.
A notable RPC-based cache coherence protocol is the lock-fusion model in PolarDB MP. GAM can roughly represent its performance with limited remote compute power.
The second baseline is SEL, a one-sided access framework that operates without compute-side caching. While it employs the SELCC protocol to ensure RDMA access atomicity, it circumvents the cache coherence problem by disabling caching. SEL shares the same APIs as SELCC, allowing applications developed for SELCC to run seamlessly on SEL.
Benchmarks. We test the competitors using a micro-benchmark tool that allows adjusting sharing ratios, read/write ratios, data skewness, and access locality.
In this micro-benchmark, each compute server issues 16 million accesses over 24 million allocated Global Cache Lines (48GB in total).
The overall throughput with different read ratios, 100% (Read only), 95% (Read intensive), 50% (Write intensive), 0% (Write only) are tested.
§.§.§ Evaluating the Scalability of SELCC
To evaluate the scalability of the SELCC protocol, we conduct a benchmark test under a uniformly distributed workload while varying the number of compute nodes.
To avoid
bottlenecks due to memory nodes' RDMA bandwidth,
we scale the number of memory nodes in proportion to the number of compute nodes. Each compute node executes 16 local threads.
We compare SELCC under
various
sharing ratios (sr), following the methodology in <cit.>. The sharing ratio (sr) indicates the percentage of allocated data accessible by all compute nodes, while the remainder is accessed privately. When the sharing ratio is zero, the system essentially operates as a sharding-based system over disaggregated memory.
Experimental results are given in Figure <ref>. The point values represent the overall throughput while the bar values indicate the proportion of operations requiring invalidation messages in the 100% shared case.
For the read-only and read-intensive workloads, SELCC demonstrates strong scalability regardless of the sharing ratio,
as there is very little cache coherence overhead introduced in the system. SELCC with 100% sharing ratio exhibits slightly super-linear scalability under read-only workloads, because the increased number of memory node results in less NIC Translation Buffer (TLB) misses. For write-intensive and write-only workloads, SELCC scalability deteriorates with increased shared data ratio and larger local cache sizes (Figure <ref> c and d). The reason is that a higher shared data ratio and larger cache size increase the likelihood of two compute nodes caching the same data, resulting in a higher volume of invalidation messages. Compared to the fully partitioned SELCC (0% shared ratio), the fully shared SELCC (100% shared) shows a 16.0%/14.3% (8GB cache) and 38.3%/36.0% (16GB cache) performance degradation at 8 nodes in write-intensive and write-only workloads, respectively.
This performance degradation mainly results from the invalidation messages for cache coherence.
Despite the overhead of maintaining cache coherence, SELCC still shows good scalability under write-intensive workloads. Compared to the single compute node deployment, the eight-node SELCC increases throughput by 6.67×/6.85× (8GB cache) and 4.96×/4.77× (16GB cache) for the write-intensive and write-only workloads, respectively.
§.§.§ Workloads with Access Locality
To illustrate performance benefits of SELCC under workloads with access locality, we conduct a uniformly distributed micro-benchmark with 50% locality, where each operation has a 50% probability of accessing the same GCL as the previous one. The benchmark is executed with 8 compute nodes fully sharing data, using varying numbers of threads per node. Scalability becomes sub-linear between 64 and 128 threads due to the saturation of the ConnectX-3 NIC's network bandwidth.
Compared with SEL, due to the local cache, SELCC shows significant performance gains in read-intensive and read-only workloads (Figures <ref>a and <ref>b), with improvements of 1.68× and 2.18× at 128 threads, respectively.
However, the performance advantage of SELCC diminishes in write-intensive and write-only workloads (Figure <ref>c), where about 35%/33% of operations require invalidation messages and remote dirty page reads, reducing the local cache's effectiveness.
Compared with GAM (TSO) and GAM (SEQ), SELCC demonstrates superior performance across all four read ratios, achieving 3.60×/3.48×, 2.85×/3.41×, 5.12×/5.61×, and 3.63×/4.08× the throughput, respectively.
GAM exhibits limited thread scalability, especially in write-only and write-intensive workloads with a high number of compute-side threads (Figures <ref>c and <ref>d). This bottleneck primarily results from overloading the computing resource in the memory nodes.
§.§.§ Workloads with Access Skewness
To illustrate the performance benefits of SELCC under a workload with access skewness, we run the micro-benchmark with a Zipfian distribution. The skewness parameter, θ, is set to 0.99, with no access locality applied. Other parameters are configured in the same way as in the previous subsection.
For read-intensive and read-only workloads, SELCC exhibits significant performance gains, achieving throughput 5.89×/5.40× over that of SEL at 128 threads. These gains result from the high cache hit ratios (60.6% and 84.4%, respectively) of skewed workloads.
However, for write-intensive and write-only workloads, SEL initially performs better than SELCC when the thread count is low, as SELCC suffers from a large number of invalidation messages triggered by the data hotspot.
As the thread count increases, SEL experiences significant performance degradation (over 7× in write-intensive workloads),
primarily due to the high contention in RDMA atomic operations over the data hotspot. In contrast, SELCC demonstrates better thread scalability, as the conflicts are resolved in the local cache first.
Finally, SELCC outperforms GAM(TSO) and GAM(SEQ) by 2.13×/8.23×, 1.96×/5.00×, 4.11×/2.57×, and 13.15×/4.04× for workloads with 128 threads, highlighting the superiority of SELCC as a cache coherence protocol over disaggregated shared memory.
§.§ Evaluating Index Performance over SELCC
Although the micro-benchmark demonstrates the performance benefits of SELCC under certain workload patterns, it does not accurately represent the real-world workload that an application using SELCC would encounter. We construct an index following the methodology outlined in <ref>, and evaluate its performance using
YCSB <cit.>.
Baselines. Three B-tree baselines are evaluated in this experiment. The first baseline is the B-tree over SEL, following the same methodology as the B-tree over SELCC. The second baseline is Sherman, an optimized index over disaggregated shared memory. We address the correctness issues in Sherman's optimistic synchronization following <cit.>. The final baseline is DEX, a sharding-based B-tree over disaggregated memory. Unlike the other shared-memory baselines, DEX employs a sharding mechanism to bypass the cache coherence problem.
Benchmarks & Configurations. We benchmark the indexes using YCSB, following methodologies established in the existing literature <cit.>. Each index is loaded with 50 million key-value records and tested under varying read ratios and data skewness (θ = 0.99). The experiments are conducted over 8 compute nodes, with 8 threads per node. The local cache of SELCC is set to 128MB.
§.§.§ Results
Uniform Workload. The b-tree over SELCC outperforms that over SEL by factors of 3.75×/4.09×/5.92×/6.28×, respectively (See Figure <ref>a). Unlike the results in the micro-benchmark, the high performance advantage of SELCC over SEL persists even in the write-intensive workload because most of the internal nodes are cached immutable. Compared to Sherman, the b-tree over SELCC outperforms Sherman in read-intensive and read-only workloads by 1.79× and 1.89×, respectively, because Sherman's optimistic synchronization requires one more RDMA round trip than the pessimistic synchronization used in SELCC. Finally, the b-tree over SELCC slightly loses to DEX under uniform workloads. This result is expected, as the sharding mechanism in DEX fully bypasses the cache coherence problem and includes many index-specific optimizations, whereas
the
b-tree over SELCC serves as a demonstration without data structure-dependent optimizations.
Skewed Workload.
The
B-tree over SELCC outperforms both SEL and Sherman by factors of 11.0×/9.89×/3.22×/11.86× and 3.07×/1.70×/1.16×/4.44×, respectively, because the local cache in SELCC can hold most of the hot data. The B-tree over SEL has very limited performance under skewed workloads due to the excessive RDMA round trips required for traversing the tree. Sherman exhibits weaker performance than the B-tree over SELCC, because its leaf nodes cannot be cached locally, resulting in high RDMA atomic traffic contention over the hot spots. In contrast, SELCC can mitigate this traffic by pre-resolving conflicts in the local cache. DEX demonstrates extremely fast performance as it completely avoids concurrency control and caches data locally. B-tree over SELCC outperforms DEX by 1.18× when handling read-only workloads, as the data hot spots can be effectively managed by all eight compute nodes.
While DEX shows superior performance as a key-value store, it has limitations when serving as an index component in a full-fledged multi-primary database due to the overhead of cross-shard transactions (See <ref>).
§.§ Evaluating Transaction Support over SELCC
We construct a transactional engine using various representative concurrency control algorithms: two-phase locking (2PL) with no-wait deadlock-avoidance strategy, timestamp ordering (TO), and optimistic concurrency control (OCC), following the methodology described in <ref>. Additionally, we build a 2 Phase Commit (2PC) engine over partitioned SELCC. We evaluate the performance of transaction engines using the TPC-C benchmark.
Baselines.
First, we build transaction engines over the SEL abstraction layer to explore the benefits of SELCC over SEL under OLTP workloads.
Second, we construct a distributed transaction engine with 2 Phase Commit over partitioned SELCC. By comparing the performance of fully-shared SELCC against partitioned SELCC, we aim to demonstrate the advantages of fully-shared SELCC for bypassing the two-phase commit (2PC) protocol.
Benchmark & Configuration.
A database is loaded with 256 warehouses, occupying approximately 64GB of disaggregated memory. The benchmark suite includes five queries: three update queries (Q1, Q2, and Q4) and two read queries (Q3 and Q5) [The order of Q1-Q5 corresponds to the sequence of queries introduced in the TPC-C specification <cit.>.]. The experiment is conducted in two parts. First, we evaluate SELCC against SEL using the three concurrency control algorithms, with all data fully-shared. The five queries are first evaluated individually, and then are evaluated in an evenly mixed manner. Write-ahead logging is disabled to clearly highlight performance discrepancies. In the second part, we compare fully-shared SELCC against partitioned SELCC using the same database setup. The transaction concurrency control algorithm is set to 2PL, and write-ahead logging is enabled to fully demonstrate the overhead of the 2-Phase Commit protocol.
§.§.§ Results
SELCC vs. SEL. As in Figure <ref>, concurrency control algorithms over SELCC offer significant performance benefits compared to those over SEL when handling workloads generated by TPC-C. SELCC achieves up to 28.2× throughput with read queries, 6.12× with update queries, and 3.39× in mixed scenarios.
SELCC maintains a considerable advantage over SEL even for update queries, because there are still numerous reads on immutable data (e.g., index traversal and reading immutable tables).
Additionally, the performance of the concurrency control algorithms varies across the queries. The TO algorithm over SELCC exhibits poor performance in read-only queries (Q3 and Q5) because read operations require updating the read timestamp, resulting in cache invalidations. However, TO outperforms the 2PL algorithm in update queries due to its lower abort rate.
OCC generally shows slower performance than 2PL because it requires acquiring the SELCC latch on the GCL twice per tuple, once during the read phase and again during the validation phase, which results in a higher volume of cache invalidation messages.
Fully-shared vs. Partitioned. For partitioned SELCC, we partition the data according to warehouse IDs. Q1 and Q2 are evaluated with varying distribution ratios, representing the percentage of cross-shard transactions. As in Figure <ref>, partitioned SELCC outperforms fully-shared SELCC when the distribution ratio is 0. The gap between fully-shared and partitioned SELCC is not apparent primarily due to slow log flush onto hard disk,
shifting the bottleneck from RDMA access to disk writes. This gap would be more significant with high-speed durable devices, e.g., persistent memory. However, as the number of cross-shard transactions increases, the performance of partitioned SELCC decreases significantly. This decline is primarily due to communication overhead and, more importantly, the excessive disk synchronization required for both the prepare and commit stages, which heavily consumes disk bandwidth. In contrast, the fully-shared SELCC, which bypasses the two-phase commit (2PC) protocol, remains unaffected by the distribution ratio.
§ RELATED WORK
Abstraction layers over distributed shared memory.
Abstraction layers and unified memory models over distributed shared memory have long been a focus of research <cit.>. Traditionally, network latency has been a significant issue, leading many systems to install local caches with relaxed consistency models to mitigate network costs <cit.>. Recently, advancements in networking technologies, e.g., RDMA <cit.> and programmable switches <cit.>, have revitalized interest in distributed shared memory, enabling stronger consistency models for local caching. However, the interaction between local servers and remote memory relies on RPC-based communication that can saturate the limited computing resources on disaggregated memory, particularly for systems built on stranded memory.
In addition, many abstraction layers (e.g., FaRM <cit.>, NAM <cit.>) leverage one-sided RDMA as the primary method to transfer data between the local servers and the remote memory. Due to the complexity of maintaining cache coherence without RPC, these systems do not apply compute-side caching to exploit data locality. The SELCC protocol addresses this gap by providing a cache coherence protocol with zero computing involvement on the memory node.
Database systems techniques over disaggregated memory.
Approaches to database research over disaggregated memory differ significantly between academia and industry. Academic research focuses on redesigning specific database components, e.g., indexes <cit.> and transaction concurrency control algorithms <cit.> over the disaggregated memory. SELCC converges the individual DB component research tracks by providing a layer of abstraction.
In contrast, industry,
e.g.,
PolarDB of Alibaba, conducts research in full-fledged system support over disaggregated memory <cit.>. PolarDB migrates the buffer pool onto disaggregated memory, achieving higher cache hit ratio <cit.>, instant failure recovery <cit.>, elasticity resource provisioning <cit.>, and multiple primary nodes <cit.>.
CXL-based disaggregated memory.
CXL is an emerging technology addressing resource disaggregation from a hardware perspective <cit.>. In the context of CXL 3.0, cache coherence among compute servers will be guaranteed at the hardware level <cit.>. However, the CPU cache is limited in size and is manipulated at a small granularity (64 bytes). Frequent updates over a large memory region could trigger too many invalidation signals and remote memory accesses over the CXL network. Therefore, it is still beneficial to implement a software-level cache in local memory with a larger cache-line granularity to reduce overhead over CXL networks. In these scenarios, the latch protocol over CXL-based disaggregated memory can still be upgraded to guarantee cache coherence between the CXL-enabled disaggregated memory and the software-level cache in local memory.
§ CONCLUDING REMARKS
This paper addresses a longstanding challenge for database systems over disaggregated memory: maintaining cache coherence without involving remote computing power. SELCC provides a disaggregated memory abstraction that facilitates further research in various areas, including data structures, transaction concurrency control, and multi-primary buffer management.
Additionally, the SELCC protocol can be leveraged by cloud-native DBs to achieve multi-primary designs.
§ ACKNOWLEDGEMENTS
Walid Aref acknowledges the support of the National Science Foundation under Grant Number IIS-1910216.
Jianguo Wang acknowledges the support of the National Science Foundation under Grant Number IIS-2337806.
|
http://arxiv.org/abs/2409.03593v1 | 20240905145122 | Ensuring resilience to extreme weather events increases the ambition of mitigation scenarios on solar power and storage uptake: a study on the Italian power system | [
"Alice Di Bella",
"Francesco Pietro Colelli"
] | physics.soc-ph | [
"physics.soc-ph",
"econ.GN",
"q-fin.EC"
] |
Ensuring resilience to extreme weather events increases the ambition of mitigation scenarios on solar power and storage uptake: a study on the Italian power system
Alice Di Bella, Francesco Pietro Colelli
September 9, 2024
========================================================================================================================================================
§ ABSTRACT
This study explores the compounding impacts of climate change on the power system's load and generation, emphasising the need to integrate adaptation and mitigation strategies into investment planning. We combine existing and novel empirical evidence to model impacts on: i) air-conditioning demand; ii) thermal power outages; iii) hydro-power generation shortages. Using a power dispatch and capacity expansion model, we analyse the Italian power system's response to these climate impacts in 2030, integrating mitigation targets and optimising for cost-efficiency at an hourly resolution. We outline different meteorological scenarios to explore the impacts of both average climatic changes and the intensification of extreme weather events. We find that addressing extreme weather in power system planning requires an extra 5-8 GW of photovoltaic (PV) capacity, on top of the 50 GW of additional solar PV capacity required by the mitigation target alone. Despite the higher initial investments, we find that the adoption of renewable technologies, especially PV, alleviates the power system's vulnerability to climate change and extreme weather events. Furthermore, enhancing short-term storage with lithium-ion batteries is crucial to counterbalance the reduced availability of dispatchable hydro generation.
Keywords
Climate change adaptation; Italian power system; power system resilience; mitigation strategies; photovoltaic power production
§ INTRODUCTION
The complex and urgent challenge posed by climate change, including more frequent and severe windstorms, heavy precipitation, droughts, and wildfires, can significantly increase risks to power infrastructure and energy systems <cit.>.
At the same time, nations are increasingly committing to fundamentally reshape energy infrastructures, transitioning them towards low-carbon solutions to mitigate the adverse effects of climate change <cit.>. Nevertheless, the impacts of climate change are already evident in daily life, with projections indicating that these effects will intensify <cit.>. As a result,
it is imperative to design and plan future energy and power systems with a dual focus: not only on mitigating climate change but also on ensuring adaptation to the impacts that will arise.
The need for planning with a focus on both climate change mitigation and adaptation is particularly crucial when considering the electricity sector.
Firstly, future power infrastructure will increasingly depend on variable renewable energy sources (VREs), which are far more weather-dependent than traditional fossil-based thermal plants <cit.>. Secondly, a widely recommended strategy for decarbonising other sectors of the economy is the electrification of final demand <cit.>. This shift will require significant increases in electricity production, which may be more susceptible to weather-related risks due to the higher penetration of VREs. Consequently, careful planning is essential to ensure that future power systems are both sustainable and resilient in the face of these challenges.
In the case of the European Union (EU), power systems are already undergoing significant transformations, with nearly one-third of the region's electricity now generated from renewable energy sources (RES) such as hydropower, solar, and onshore wind <cit.>. As electrification progresses and the integration of RES deepens, European energy systems are becoming increasingly dependent on weather conditions.
Climate change impacts every stage of the electricity generation process: supply side, transmission and distribution networks, and load patterns. On the demand side, elevated temperatures can lead to increased electricity consumption for air conditioning during the summer, while warmer winters may reduce demand for heating: these changes affect not only the overall electricity demand but also result in higher load peaks and alterations in the power load profile <cit.>. As populations adapt to climate change, analyses of the influence of demand-side shocks on power systems are crucial for planning resilient systems. Some studies consider aggregated timespans, seasonal parameters or daily peaks for the electricity load <cit.>, while others reach an hourly resolution for temperatures and power demand <cit.>. Having estimates at a fine level of detail is critical to enhancing the resilience of the power sector, which could suffer during hours of high demand and low availability of supply <cit.>. Most of the literature has provided evidence of short-term, intensive-margin adjustments, examining how immediate changes in temperature influence electricity demand on a day-to-day or hour-by-hour basis <cit.>, while disregarding long-run, extensive-margin adjustments. Over the past two decades, AC usage has increased rapidly in both developed and developing regions <cit.>, highlighting the critical need for novel analyses to better understand the implications of AC adoption on power system planning <cit.>.
As previously introduced, climate change also has significant impacts on the generation side of the power infrastructure. Higher air and water temperatures can reduce the cooling efficiency of thermal power plants, leading to reduced power output. Coal and nuclear plants, which operate using steam-turbine processes, can experience significant operational challenges during droughts. Variations in stream flow levels and elevated temperatures can substantially impact the availability of cooling water required for these plants to function at full capacity <cit.>. Gas-fired power plants, operating through combustion-turbine processes that require little or no water for cooling, can be affected by a reduction in the efficiency of turbines due to extreme ambient temperatures, ultimately leading to power output reductions <cit.>. Recent work has underscored that climate change has already increased curtailment of thermal power plants <cit.>.
Climate change can increase the frequency of prolonged droughts, which can reduce water availability for hydropower. A large body of literature has focused on how water scarcity due to climate change can undermine power generation of hydroelectric dams <cit.>. van Vliet et al. observe that climate change is projected to lead to significant regional variations in hydropower potential: some areas are expected to experience substantial increases in potential, while others may face considerable decreases, such as the United States and southern Europe <cit.>. Turner et al. corroborate these findings, highlighting the pronounced regional variability in hydropower impacts and emphasising the considerable investment required to adapt to anticipated changes in water availability <cit.>. Climate change also significantly impacts renewable power generation. Heatwaves and high temperatures can reduce the efficiency of solar panels, while altered wind patterns can affect wind turbine output <cit.>. Evaluating these effects is complex, and there is limited consensus on the magnitude and direction of climate-induced impacts on variable RES, particularly when addressing issues at the country level <cit.>.
Finally, weather-related impacts on transmission and distribution systems are expected to be amplified by climate change. Increased frequency and severity of extreme weather events, such as storms and high winds, can cause physical damage to infrastructure, leading to outages and disruptions <cit.>. Additionally, higher temperatures can affect the efficiency and capacity of transmission lines, increasing the risk of overheating and reducing performances <cit.>. Transmission capacity, crucial for the integration of the future electricity grid, might be significantly reduced during summertime due to increased peak temperatures <cit.>.
To provide policymakers with strategies for enhancing the resilience of power systems, integrating climate change impacts on both supply and demand into a comprehensive modelling framework is essential. Weather and climate effects can disrupt electricity operations, leading to increased costs, load shedding, or outages if demand exceeds forecasts or power capacities are compromised. Only a few studies incorporate high-frequency supply and demand forecasts under climate change, enabling an evaluation of effective responses.
Tobin et al. <cit.> assess climate impacts on European electricity production and suggest increasing resilient RES like wind and solar.
Bloomfield et al. <cit.> highlight substantial climate-induced variability in Europe’s energy balance by 2050, stressing the need for better integration of climate uncertainty in planning.
Optimisation models are traditionally employed to develop strategies for power capacity planning, especially when incorporating future mitigation goals <cit.>, while the integration of empirically-estimated climate change impacts in energy models is more limited (country-level studies include
<cit.>
and <cit.>).
IAM-based projections provide valuable insights; however, their findings tend to be aggregated in both space and time due to the broad nature of aggregated energy demand <cit.>. In contrast, high-resolution bottom-up power system models offer more detailed analysis, making them better suited for understanding the complexities of the interplay between mitigation and adaptation within the power system. Bottom-up models might advance our vision on the hourly and local consequences of future climate, since they generally have a larger spatial and temporal resolution compared to IAMs <cit.>. In a few cases, regional versions of leading IAMs have been coupled with power-system models with the aim of assessing climate change impacts on peak load and generation (<cit.>, <cit.>,<cit.>).
To the best of our knowledge, Handayani et al. is one of the few research articles that thoroughly explores the intricate relationship between optimising mitigation and adaptation strategies in power sector planning <cit.>.
In this context, this paper makes two significant contributions to both academic literature and power system investment planning. First, it stands out as one of the few studies that simultaneously addresses decarbonization goals and the impacts of climate change on various power generation technologies and electricity demand while modelling power system expansions. Unlike existing research that typically focuses on long-term pathways (e.g.<cit.>), this paper examines the operation and optimal capacity requirements for a specific year. This approach provides high temporal (hourly) and spatial (market zones) resolution, which is crucial given that climate change effects can be highly localised and may vary significantly at different times of the day. As a second contribution, by delineating various scenarios, this paper addresses the inherent uncertainties associated with climate change. Our goal is not only to assess the optimal investment strategy under average future weather conditions but also to evaluate the requirements for extreme weather scenarios. This approach enables a comprehensive understanding of how investment strategies might need to adapt to both typical and atypical climatic conditions. In this paper, we explore the implications of both demand and supply side shocks on energy system planning using a bottom-up capacity expansion model, focusing on the power system planning for Italy in 2030.
In this study, we build upon previous work by employing an integrated modelling approach to provide a more complete and rigorous analysis. We optimise the least-cost electricity system across various weather scenarios to account for potential future climatic conditions. By defining various weather scenarios for 2030, we can evaluate the impacts of both average near-future climate shifts and more extreme weather conditions. We combine existing and new empirical evidence to expand the understanding of power demand shocks and supply impairments under future climate change, with a focus on the implication of increased temperatures on air-conditioning demand, thermal power outages and hydropower generation.
The remaining sections of this paper are organised as follows. Section <ref> outlines the methodology used, including the empirical evaluation of climate change impacts on power supply and demand, as well as the development of the bottom-up model for the Italian power system. Section <ref> presents the findings of the study, while Section <ref> provides a discussion of these results. Finally, Section <ref> outlines the possible implications for policy-making, and addresses the study's limitations along with recommendations for future research.
§ MATERIALS AND METHODS
§.§ Italian power system
In order to evaluate the effects of climate change on the Italian power sector in 2030, we adopt a power system modelling framework built on the open-source model oemof, which jointly represents electricity dispatch and investment optimisation (the model is described in detail in Appendix <ref>). A version of the power system model is available on GitHub at <cit.> and has already been employed in peer-reviewed studies <cit.>. The Italian power system is described at an hourly resolution, divided into the seven market zones defined by the transmission system operator Terna <cit.>. The model includes the existing power generation capacities up until the year 2021; the optimiser can then install new power plants to minimise the total system costs within the limits of the model constraints (outlined in Appendix <ref>). In the majority of the outcomes, the optimisation is performed assuming the implementation of the mitigation policies for Italy, which are legally binding according to the European Climate Law <cit.>. The European decarbonization goal for 2030 is to decrease the carbon dioxide released into the atmosphere by 55%, in line with the Fit-for-55 policy package <cit.>. A larger effort is expected from the sectors in the EU Emission Trading Scheme (electricity and heat generation, energy-intensive industry, aviation and maritime transport), which must reach a 62% decrease in CO_2 emissions <cit.>. In particular, the power sector can count on more mature technologies than other energy sectors, and it is a fundamental enabler of the transition of other energy sectors through electrification <cit.>. Therefore, in this model we impose on the Italian power system a 65% reduction of electricity CO_2 emissions by 2030 with respect to their 1990 value (124.6 Mton of CO_2 <cit.>). Additionally, in Section <ref>, we present scenarios in which no mitigation policies are applied (denoted as Non Mitigated cases) to evaluate the implications of adapting the power system for climate resilience compared to the Mitigated cases.
The various elements necessary to shape the Italian electricity system are delineated in Appendix A. Key features include: i) multiple market zones linked in the model through high-voltage transmission lines; ii) power production resources including natural gas, rooftop and utility-scale photovoltaic, onshore and offshore wind, run-off-river, reservoir hydro, biomass and geothermal generation and imported electricity; iii) the possibility to add storage capacity in the form of lithium-ion batteries and a hydrogen storage technology composed of electrolysers, fuel cells and hydrogen tanks. The model outcome has been validated against the data from the Transmission System Operator for the baseline year of 2019 <cit.>.
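To make the modelling setup more tangible, the snippet below sketches how a single bidding zone could be represented with oemof's solph layer: an electricity bus, an existing gas-fired plant with variable costs, a utility-scale PV source whose capacity is chosen endogenously through an Investment object, and an hourly demand sink. This is a minimal illustration only: the labels, cost figures, profiles and capacity bounds are placeholders rather than the values used in the paper, the exact class names differ across oemof.solph versions, and the full model additionally represents the seven market zones, transmission links, hydro, storage and the CO_2 cap.

```python
import pandas as pd
from oemof import solph

# One representative day at hourly resolution (the full model covers a whole year).
idx = pd.date_range("2030-07-01", periods=24, freq="H")
es = solph.EnergySystem(timeindex=idx)

bus_el = solph.Bus(label="electricity_zone")

# Existing dispatchable gas capacity with variable (fuel) costs; placeholder values.
gas = solph.Source(
    label="gas_ccgt",
    outputs={bus_el: solph.Flow(nominal_value=5000, variable_costs=80.0)},  # MW, EUR/MWh
)

# Utility-scale PV: hourly capacity factors, installed capacity chosen by the optimiser.
pv_profile = [0, 0, 0, 0, 0, 0.05, 0.2, 0.4, 0.6, 0.75, 0.85, 0.9,
              0.9, 0.85, 0.75, 0.6, 0.4, 0.2, 0.05, 0, 0, 0, 0, 0]
pv = solph.Source(
    label="utility_pv",
    outputs={bus_el: solph.Flow(
        fix=pv_profile,
        investment=solph.Investment(ep_costs=45.0, maximum=40000),  # annualised EUR/MW, MW
    )},
)

# Hourly load profile (placeholder) scaled to a zonal peak demand.
load_profile = [0.6, 0.55, 0.5, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0, 1.0,
                0.95, 0.95, 0.9, 0.9, 0.85, 0.85, 0.9, 0.95, 0.9, 0.8, 0.7, 0.65]
demand = solph.Sink(
    label="load",
    inputs={bus_el: solph.Flow(fix=load_profile, nominal_value=8000)},  # MW peak
)

es.add(bus_el, gas, pv, demand)
model = solph.Model(es)
model.solve(solver="cbc")  # any LP solver supported by pyomo
```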
§.§ Climate variables
§.§.§ Data
Historical weather patterns are derived from weather reanalysis data for the forty years from 1981 to 2020 (following a method from <cit.>), available from ERA5 and ERA5-Land <cit.>. We chose a period of several decades centred around 2000 to strike a balance between, on the one hand, evaluating weather conditions close to recent years and, on the other hand, including enough decades of weather data to derive reliable distributions of weather patterns. Nevertheless, we show that the impacts we estimate are largely unchanged if we restrict the time span of our historical weather scenario to the two decades of 2001-2020 (see Supplementary Information and Supplementary Figure <ref>).
The future climate projections derive from one CMIP5 EUROCORDEX Global and Regional Climate Model (GCM and RCM) (the GCM ICHEC-EC-EARTH and the RCM KNMI-RACMO22E). These data were preferred over CMIP6 climate projections because they have hourly resolution, allowing us to conduct an assessment of climate change impacts while retaining the high temporal frequency of the historical ERA5 reanalysis data. We consider RCP 4.5 as the main future emission scenario (we find that no substantial differences arise with respect to hotter climate scenarios, e.g. RCP 8.5, over the time horizon of our analysis, centred around 2030; see Supplementary Figure <ref>). We match the historical period to the ERA5 reanalysis baseline period (1981-2020) and consider as the future period the two decades around 2030 (2021-2040). In this way, our climate change impact projections should be considered as medium-term shifts in the climate occurring over three decades. While mean climate changes over only three decades might lead to negligible shifts in the mean climate conditions, the focus of this work is on both the mean and the tails of the weather distribution, allowing us to test whether power system planning will be impacted by climate change even in the next few decades, due to an amplification in extreme weather conditions. Both the historical ERA5 reanalysis and the future CORDEX projections are taken from the population-weighted dataset of <cit.>, available for Italy at the NUTS3 level.
§.§.§ Definition of scenarios
The impacts of weather patterns on the supply- and demand-side of the electricity system are included in the paper by considering four alternative scenarios
developed separately for each impact category: demand for power, hydropower supply, thermal power supply and VREs power supply.
We start by considering a set of weather variables, indexed by j (W^j) - such as daily maximum temperatures - at the NUTS3 level (n) for each hour (h), calendar day (d) and year (y). From each weather variable W^j_n,h,d,y we derive the power system impacts based on different methodological approaches depending on the impact type (described in detail in Section <ref> and Appendix <ref>). Regardless of the specific approach, we can generalise the method adopted for the computation of impacts as follows. Consider the location- and time-specific impacts Υ_n,h,d,y computed from the weather variable W^j through a damage function f():
Υ_n,h,d,y = f(W^j_n,h,d,y)
First, we compute the impact of mean historical weather on the power generation variables (scenario Historical Mean, HM) by taking the average over all years y of f(Ŵ^j_n,h,d,y) for a given location, hour and calendar day, where Ŵ^j_n,h,d,y is a weather variable of the ERA5 weather reanalysis spanning from 1981 to 2020:
Υ^HM_n,h,d = ∑_yf(Ŵ^j_n,h,d,y) /y
Second, we use the delta-change method, adopted extensively in the climate impact literature (see for instance <cit.>), to calculate the impact due to the shift from the mean historical weather to the mean weather around 2030 (scenario Future Mean, FM). More in detail, we compute the climate change amplification (Δ) as the difference between the future and historical EUROCORDEX projections of each weather variable Z^j_n,h,d,y for each n, h and d, where means are computed over the 20 years around 2010, indexed by k (2001-2020)[We test two different historical baseline periods, alternatively 1981-2020 and 2001-2020. We find negligible differences in the amplification effect of climate change, as reported in Figure <ref>.], and the 20 years around 2030, indexed by m (2021-2040):
Δ^j_n,h,d = ∑_m Z^j_n,h,d,m/m - ∑_k Z^j_n,h,d,k/k
To identify the impact of the scenario Future Mean, we add the amplification Δ to Ŵ^j_n,h,d,y and, similarly to equation <ref>, we take the average impact over the years y (note that Δ is common across all years):
Υ^FM_n,h,d = ∑_yf(Ŵ^j_n,h,d,y+ Δ^j_n,h,d) /y
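For illustration, the delta-change step can be written as follows: the amplification Δ is computed from the CORDEX climatologies for each location, calendar day and hour, and is then added to every reanalysis year before re-applying the damage function f(). The sketch below assumes tidy pandas DataFrames with the illustrative column names nuts3, year, doy, hour and value, and a vectorised damage function; it is not the actual implementation.

```python
import pandas as pd

def delta_change(era5: pd.DataFrame, cordex_hist: pd.DataFrame,
                 cordex_fut: pd.DataFrame, damage) -> pd.DataFrame:
    """Return Historical Mean (HM) and Future Mean (FM) impacts per (nuts3, doy, hour).

    era5:        reanalysis weather, columns [nuts3, year, doy, hour, value]
    cordex_hist: CORDEX 2001-2020 simulation, same columns
    cordex_fut:  CORDEX 2021-2040 simulation, same columns
    damage:      vectorised callable f() mapping weather values to impacts
    """
    keys = ["nuts3", "doy", "hour"]

    # Climate-change amplification: future minus historical CORDEX climatology.
    delta = (cordex_fut.groupby(keys)["value"].mean()
             - cordex_hist.groupby(keys)["value"].mean()).rename("delta")

    era5 = era5.merge(delta.reset_index(), on=keys, how="left")

    # Historical Mean: average of f(W) over the reanalysis years.
    hm = (era5.assign(impact=damage(era5["value"]))
              .groupby(keys)["impact"].mean().rename("HM"))

    # Future Mean: average of f(W + delta) over the same years.
    fm = (era5.assign(impact=damage(era5["value"] + era5["delta"]))
              .groupby(keys)["impact"].mean().rename("FM"))

    return pd.concat([hm, fm], axis=1).reset_index()
```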
The implementation of Υ^HM_n,h,d and Υ^FM_n,h,d in a power system model allows us to compare how slowly-moving climatic changes in mean weather conditions can affect the optimisation of power system capacity planning. Then, we evaluate the implications of weather-related impacts that occur in the tails of the historical and future simulated weather distributions, respectively. The two alternative extreme weather scenarios (HE and FE) are constructed by considering the tail of the distribution of possible weather impacts resulting from f(Ŵ^j_n,h,d,y), based on a quantile function Q^χ() and a quantile threshold χ. We consider as alternative values for χ the 75th and 95th percentile of impacts. While the impacts in the model are identified at the regional and hourly level, the computation of the percentile is done by ranking each possible weather simulation at the annual and country level. In other words, we rank each of the possible weather realisations based on its aggregated country-level impact. We then select the 10 and 2 years with the highest simulated impact for the 75th and 95th quantile thresholds, respectively. Finally, we average the hour- and calendar-day-specific impacts over the subset of selected years, with the aim of reconstructing the simulated extreme weather conditions while maintaining a plausible inter-annual variability deriving directly from the observed reanalysis data characterising the selected years.
Note that when we evaluate each impact category in isolation we simulate shocks occurring once every four years (75th percentile) or once every twenty years (95th percentile). On the other hand, the model run that combines extreme weather shocks of all impact categories (demand, hydro-power and thermal generation) has a different implied probability given that some years in our distribution exhibit a pattern of co-occurrence of extreme weather impacts across multiple categories (as reported in Table <ref>).
In the case of the Future Extreme, the amplification of climate change is accounted for by adding the Δ amplification as in Eq. <ref>.
Υ^HE,χ_n,h,d = ∑_p(y)^χf(Ŵ^j_n,h,d,p(y)^χ) /p(y)^χ
Υ^FE,χ_n,h,d = ∑_z(y)^χf(Ŵ^j_n,h,d,z(y)^χ+ Δ^j_n,h,d) /z(y)^χ
where p(y)^χ and z(y)^χ are the subsets of years selected among the historical and future simulations, respectively, for the given percentile threshold χ.
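In practice, the extreme scenarios can be assembled by ranking each weather year according to its aggregated country-level annual impact, retaining the years beyond the chosen percentile (the 10 worst years for χ = 75th, the 2 worst for χ = 95th), and averaging their hour- and calendar-day-specific impacts. A minimal sketch, using the same illustrative data layout as above, is:

```python
import pandas as pd

def extreme_scenario(impacts: pd.DataFrame, chi: float = 0.75) -> pd.DataFrame:
    """Average hourly impacts over the years above the chi-th percentile.

    impacts: columns [nuts3, year, doy, hour, impact], one row per hour
             (for the Future Extreme case the impacts already include the climate delta).
    """
    # Rank years by their aggregated country-level annual impact.
    annual = impacts.groupby("year")["impact"].sum()
    threshold = annual.quantile(chi)
    worst_years = annual[annual >= threshold].index

    # Keep only the selected years and average their hour/day-specific impacts,
    # preserving the observed within-year variability of those years.
    subset = impacts[impacts["year"].isin(worst_years)]
    return (subset.groupby(["nuts3", "doy", "hour"])["impact"]
                  .mean().rename("extreme").reset_index())
```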
Table <ref> presents a summary of the method used to compute the different weather impact scenarios:
§.§ Impacts on electricity demand
The response of the hourly electric load to temperature is computed taking into account two aspects: the short-run co-variation between load and weather as well as the long-run amplification in the short-run load-weather relationship due to the growth in the adoption of air-conditioning appliances in the residential sector. First, in order to identify the increase in electricity demand related to the growth in AC uptake across Italian households we use an empirically estimated response function that provides a generalised functional relationship between daily maximum temperatures, AC market saturation and peak load demand, provided by <cit.>. We project the prevalence of residential AC ownership rates in Italy at the regional (NUTS 2) level by associating for each location and year the probability of AC ownership in households based on the adoption function of <cit.> (the detailed equation used by this study are shown in the Supplementary Information).
Historical AC ownership rates at the regional level are taken from the Italian Budget Survey published by ISTAT for the year 2019 <cit.>. The projected regional AC share is shown in the Supplementary Figure <ref>. In general, the average AC ownership rate in Italy goes from 33% in 2019 to 47%-55% in 2030 under RCP 4.5-8.5. The regions with the highest current as well as future AC share are the ones characterised by higher annual CDDs (Sardegna, Sicilia, Puglia) or by higher income per capita levels (Veneto, Emilia-Romagna, Lombardia).
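As a stylised illustration of the adoption step, the snippet below models the regional AC ownership share as a saturating (logistic) function of annual cooling degree days and per-capita income, consistent with the qualitative drivers mentioned above (warmer and richer regions adopt more). The functional form and the coefficients are placeholders and do not reproduce the estimates of the cited adoption function.

```python
import numpy as np

def ac_share(cdd: np.ndarray, income: np.ndarray,
             b0: float = -4.0, b_cdd: float = 0.004, b_inc: float = 0.05) -> np.ndarray:
    """Stylised logistic AC-adoption curve (placeholder coefficients).

    cdd:    annual cooling degree days per region
    income: per-capita income per region (kEUR)
    """
    z = b0 + b_cdd * cdd + b_inc * income
    return 1.0 / (1.0 + np.exp(-z))

# Example: a warm, high-income region vs. a cooler, lower-income one.
print(ac_share(np.array([900.0, 350.0]), np.array([32.0, 22.0])))
```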
Second, we couple the future NUTS3-level hourly temperature projections provided by <cit.> and the NUTS2 AC prevalence levels to estimate the amplification of hourly electricity demand due to climate change, based on the temperature-load coefficients (β) entering the non-linear function h() estimated in <cit.> (taking the form shown in the Supplementary Methods) and the projected hourly temperatures provided by <cit.>. As in <cit.>, the non-linear temperature-load function h() is defined over the population-weighted temperatures binned into k intervals of 3^∘C width, B_k=[T_k, T_{k+1}); here we construct a k-vector of indicators that track whether each hour's mean temperature falls within a given interval:
T^k = 1 ·{ T ∈ B_k } + 0 ·{ otherwise }.
The observed historical binned hourly temperatures for each location (n), hour (h), calendar day (d) and year (y), T̂^k_n,h,d,y, and the difference in the EUROCORDEX climate series between the historical and future epochs, Δ(T^k_n,h,d), are used to project the impact on the hourly load in each scenario, based on the historical and future AC prevalence levels respectively, following the equations presented in <ref>.
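Operationally, the binned temperature-load response can be applied as sketched below: hourly population-weighted temperatures are assigned to 3°C bins and mapped to a load amplification through bin-specific coefficients, scaled here by the regional AC share. The bin edges and β values are placeholders standing in for the estimated coefficients.

```python
import numpy as np
import pandas as pd

# 3 degree-C bins spanning the relevant temperature range (placeholder edges).
BIN_EDGES = np.arange(-9, 43, 3)

def load_amplification(temps_c: pd.Series, beta: np.ndarray, ac_share: float) -> pd.Series:
    """Fractional change in hourly load given hourly temperatures.

    temps_c:  hourly population-weighted temperatures for one region
    beta:     one coefficient per temperature bin (placeholder values)
    ac_share: regional AC ownership rate in [0, 1], scaling the response
    """
    bins = pd.cut(temps_c, BIN_EDGES, labels=False).astype(int)   # k-vector of indicators
    return pd.Series(beta[bins.to_numpy()], index=temps_c.index) * ac_share

# Example: stronger response in the hottest bins.
beta = np.linspace(0.0, 0.4, len(BIN_EDGES) - 1)
temps = pd.Series([18.0, 27.0, 33.0, 38.0])
print(load_amplification(temps, beta, ac_share=0.5))
```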
Figure <ref> Panel a shows how the mean temperatures vary in each season depending on the scenario. A large increase in mean temperatures is observable during summer, together with a higher frequency of warm temperatures in winter. Panel b shows the resulting amplification of the hourly load in the summer around 2030, with respect to the Historical Mean scenario, averaged over all bidding zones. Maximum and mean amplifications of the hourly load at the national level reach over 30% and 20% in the central hours of summer months, respectively.
An important note has to be made on the impact of weather variables on electricity demand for heating, which is visible in Figure <ref> panel a for the winter months. The relationships between temperature and power load for heating are estimated using the whole of Europe as a sample; thus, they include various levels of heating electrification. In Italy, renewable energy sources cover around 20% of heat consumption, with a 2030 target of 33.1% <cit.>. Currently, the final consumption for space heating met with electricity is 1 TWh <cit.>; projections assume that the number and capacity of heat pumps in Italy will double by 2030 <cit.>. Assuming consumption doubles as well, the resulting 2 TWh would be around 2% of winter electricity consumption.
§.§ Impacts on power generation
In this work we take into account the potential impact of long-run climate change and short-run weather variability on power generation in several dimensions.
First, we develop a set of projections on the changes in hydro-power generation by exploiting the daily time series of hydro-power potential at the NUTS3 level developed in <cit.>, distinguishing between run-off-river and reservoir. The potential hydro-power generation is combined with the installed production capacity of hydro-power plants. We emphasise that this analysis does not incorporate changes in the management of water demand for human needs, as they fall outside the study's scope. However, it is worth noting that such changes could potentially mitigate the impact of decreased water availability on hydro-power production <cit.>. We compute both daily and weekly power generation levels in each scenario (Historical Mean, Future Mean, Historical Extreme, Future Extreme) for each Italian macro-zone (see Figure <ref> Panel a for the projected value of the country-level total hydropower generation).
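A compact way to construct the hydro inputs is sketched below: the NUTS3 daily capacity-factor series for run-off-river and reservoir plants are multiplied by the corresponding installed capacities and aggregated to the bidding zone at daily or weekly resolution, separately for each weather scenario. Column names and the data layout are illustrative.

```python
import pandas as pd

def zonal_hydro_generation(potential: pd.DataFrame, capacity: pd.DataFrame,
                           freq: str = "W") -> pd.DataFrame:
    """Aggregate NUTS3 hydro potential to bidding-zone generation.

    potential: columns [date, nuts3, tech, capacity_factor]   (tech: 'ror' / 'reservoir')
    capacity:  columns [nuts3, tech, zone, installed_mw]
    freq:      'D' for daily or 'W' for weekly totals
    """
    df = potential.merge(capacity, on=["nuts3", "tech"], how="inner")
    df["generation_mwh"] = df["capacity_factor"] * df["installed_mw"] * 24.0  # daily energy

    return (df.set_index("date")
              .groupby(["zone", "tech"])
              .resample(freq)["generation_mwh"].sum()
              .reset_index())
```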
Second, we develop a statistical analysis to investigate the impacts of daily maximum temperatures and water runoff anomalies on the availability of thermal power generation. More in detail, we estimate a regression model based on unexpected-outage data collected from 2018 to 2022 in Italy. The dataset <cit.> includes information on over 4,000 outages, of which 2,352 concern gas-fired and 1,887 coal-fired generation units. The method adopted partially follows a previous analysis (<cit.>) but expands on the literature by providing country-specific and fuel-specific responses. In the most detailed specification, we identify fuel-specific impacts by including a set of interaction terms between temperature and runoff anomalies and a categorical variable representing the power plant type. Comprehensive information on the methods employed is provided in Appendix <ref>. We find that the likelihood of occurrence of an outage for coal-fired generation increases considerably when daily maximum temperatures surpass 35°C, reaching 10-50% at 40°C, depending on the water runoff anomaly. Gas-fired generation is less sensitive to high temperatures and low water runoff levels, but the likelihood of an outage is non-negligible, between 5-25% at 40°C.
Since in our power model optimisations coal generation is phased out from the mix, we focus on the projections of impacts for gas-fired generation. We use the estimated outage occurrence function in conjunction with daily maximum temperature anomalies at the location of the gas-power plants to simulate daily thermal outage occurrence. Finally, we aggregate the resulting projections of power plant outages at the bidding-zone level to obtain an indicator of power generation availability changes around 2030 in the four scenarios identified in section <ref> (see Figure <ref> Panel b).
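The outage projection can be sketched as a two-step Monte Carlo exercise: the estimated response function yields a daily outage probability for each gas unit as a function of its local maximum temperature and runoff anomaly, and Bernoulli draws per plant and day are aggregated into a zonal availability (derating) factor. The logistic coefficients below are placeholders, not the estimated ones; they only mimic the qualitative shape reported above (roughly 5-25% outage likelihood at 40°C, depending on the runoff anomaly).

```python
import numpy as np

rng = np.random.default_rng(42)

def outage_probability(tmax_c: np.ndarray, runoff_anomaly: np.ndarray,
                       a: float = -11.0, b_t: float = 0.20, b_r: float = -1.2) -> np.ndarray:
    """Daily outage probability for gas units (placeholder logistic coefficients)."""
    z = a + b_t * tmax_c + b_r * runoff_anomaly
    return 1.0 / (1.0 + np.exp(-z))

def zonal_availability(tmax_c: np.ndarray, runoff_anomaly: np.ndarray,
                       plant_mw: np.ndarray, n_draws: int = 1000) -> float:
    """Expected share of zonal gas capacity available on one day."""
    p = outage_probability(tmax_c, runoff_anomaly)          # one probability per plant
    outages = rng.random((n_draws, plant_mw.size)) < p      # Bernoulli draws
    available = (~outages) * plant_mw                       # MW available in each draw
    return available.sum(axis=1).mean() / plant_mw.sum()

# Example: three plants in one zone on a 40 degree-C day during a dry spell.
print(zonal_availability(np.array([40.0, 40.0, 39.0]),
                         np.array([-1.5, -1.0, -0.5]),
                         np.array([400.0, 800.0, 250.0])))
```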
Our model follows a deterministic approach in the quantification of the available supply of power generation sources. As noted by <cit.>, the main limitations of this method relate to its single economic objective function and the absence of uncertainties in its formulation. Other works (e.g. <cit.>) have proposed stochastic models where long-term uncertainty in VRE resources is introduced by using multiple scenarios consisting of weekly time series of hourly wind power output data. We leave the adoption of a stochastic approach to future studies, and we partially address this limitation by observing the outcomes of two alternative sets of model runs: one with no impact of climate change on VREs and one where the generation profiles of VREs in the four different scenarios are included, although through the deterministic approach. We observe negligible differences when incorporating the effects of meteorological variables on VREs (see Appendix <ref>). However, as this approach is not ideal for capturing VREs' uncertainty in capacity expansion analyses, we have chosen to present our main findings without these considerations.
§ RESULTS
§.§ Installed capacity
We identify considerable effects of mitigation policy goals and climate impacts on the optimal generation capacity. The installed capacities for the 2030 Italian power system, determined through investment optimisation, meet hourly load requirements while enforcing a 65% reduction in power sector emissions. We consider as "added capacity" the capacity installed by the optimiser on top of the existing capacity of the 2019 reference case, detailed in Table <ref>. The effects of mitigation (grey bars) and of climate change and extreme weather (coloured bars) on generation and storage capacity are displayed in Figure <ref>, for the different climate scenarios and generation technologies.
In Figure <ref>, the panel on the left shows the changes in installed capacity resulting from the Historical Mean scenario: in this case, the added capacity is driven only by mitigation policies. Around 50 GW of photovoltaic panels, mostly in large-scale fields, are required to reach the 2030 decarbonization targets for power generation in Italy in a cost-optimal system. The alternative climate scenarios take into account the effects of long-term climatic changes and extreme weather variability on electricity demand, thermal power plants and hydro-power. We performed separate optimisations accounting for these effects individually to assess which has the largest consequences on the installed capacities. Despite increasing the likelihood of unplanned outages, the impact of climate change on gas-fired plants does not affect optimal installed capacity, thanks to the ample unused natural gas power capacity available in both the 2019 reference and mitigation scenarios [Note that this result depends on the modelling framework, which assumes perfect foresight of the reduction in available capacity from unplanned power outages. A more detailed assessment of the impacts on short-term market operations when unplanned outages occur falls outside the scope of this analysis.]. As a side note, changes in the availability of solar and wind energy from the Historical Mean to the Future Mean climate scenarios are negligible when aggregating across time and space, and point to a modest increase in rooftop solar generation and curtailment to account for an increase in seasonal weather variability of wind generation (results are presented in the Appendix).
In the Future Mean scenario, considering only the effects of climate change from a shift in the mean climatic conditions, the additional installed capacity is minimal with respect to the Historical scenario. This indicates that the photovoltaic power installed for mitigation purposes would be sufficient to ensure the system's resilience to an average year of weather under climate change conditions occurring by 2030. In fact, solar electricity is significantly correlated with power demand driven by AC appliances, since they are both relevant in the middle of the day (the contribution of solar PV on hourly power generation during summer months is shown more in detail in the next section, particularly in Figure <ref>). When considering historical extreme weather (Historical Extreme panel), changes in the availability of hydropower have a relevant role in increasing the need for solar capacity (plus ≈ 5GW). Solar electricity is produced during the day, substituting the reduced run-off-river generation and shifting the use of reservoir hydro during night-time, instead of employing natural gas-fired plants. This mechanism allows the system also to stay compliant with the emission reduction target by reducing the use of fossil-based electricity.
Furthermore, climate-induced changes in hydro-power availability have remarkable effects in driving the uptake of short term storage. Indeed, when the availability of hydro power generation is reduced, the need for flexibility in the system is satisfied with li-ion batteries. In this Historical Extreme case the storage capacity is relatively small (≈ 1GWh), but for higher degrees of decarbonization a larger volume will be needed. To provide a comparison, Terna, as published in the 2022 Future Energy Scenarios document, foresees approximately 71 GWh of new utility-scale storage capacity will need to be developed by 2030 to meet the requirements of the Fit-for-55 scenario [The optimisation does not install extra storage technologies in the Future Mean case since it works in perfect foresight and thus acts to minimize the costs, but it is not able to foresee market dynamics and future decarbonization of the system.]. Finally, the Future Extreme scenario presents a stronger impact of the increase in demand for AC utilization. Utility Scale PV installation reach the maximum potential, taken from Trondle et al. <cit.>, which considers social and technical constraints to fields installations. The optimisation algorithm tends to prioritise Utility Scale panels over rooftop PV, since they are more costs efficient due to their lower price for economies of scale, but land availability, protected areas and social constraints limit strongly their potential. A sensitivity analysis on the value for its maximum potential might enlarge the use of this technological option, given its price competitiveness. In the Future Extreme scenario, the effect of reduced hydro generation on installed capacity is lower than in the Historical Extreme scenario because of the balancing between summer and winter water inflows, observed in the mean climate change weather analysis (Figure <ref>). As the impact on hydro electricity production goes down, also the necessity for substitute li-ion batteries decreases (half of the need in the Historical Extreme case driven by hydro power).
In Table <ref> we display the values of annualised investments related to the extra installed capacities represented in Figure <ref>. These values reflect the compound effect of all the weather variables in the optimisation (AC demand, hydro and thermal power).
The investment values reflect the results already observed in the figure: in the Future Mean scenario, there is a negligible increase in costs compared to the Historical Mean. Given that all scenarios assume a 65% reduction in CO_2 emissions from power generation, this indicates that decarbonising the energy system may yield significant synergies with adaptation strategies to climate change. This is largely attributed to the integration of climate-resilient technologies, particularly rooftop PV systems and utility-scale PV installations. In the extreme weather cases (Historical and Future), the primary increase in system expenses is attributable to rooftop PV installations. These technologies are more costly compared to Utility-Scale solar panel fields; however, rooftop PV installations are essential due to the larger availability of potential spaces for their installations and their ability to reduce centralised load demands. Indeed, rooftop PV is a form of distributed energy production and plays a crucial role in mitigating stress on the high-voltage grid by generating electricity locally. This becomes particularly advantageous under increased electricity demand scenarios due to climate change or extreme weather conditions, as it diminishes the dependency on centralised power sources, as demonstrated in Figure <ref> in Appendix <ref>.
§.§ Power system costs
We compute the power system costs related to the capacity expansion of the additional generation and storage resulting from mitigation policy and climate change impacts (Figure <ref>). In this graph, expenditures refer to the capital investments and operational costs for the extra optimised capacity displayed in Figure <ref>. In the reference scenario with no impacts from climate change (Historical Mean), we find that a power system compliant with EU mitigation policy would require additional investments of almost 4 billion euro (b€). We find that weather resilience for electricity generation in Italy in the Future Extreme case requires extra costs of 2 b€, representing an increase of around 50% compared to the Historical Mean value. On one hand, costs related to the reduction in hydro-power occur mostly due to the implementation of extreme weather events in the model, given that most of the increase from this impact category is projected already in the Historical Extreme scenario (second graph from the left). On the other hand, cost increases caused by the amplification of the hourly load appear strongly only in the most extreme scenario, accounting for the combined influence of climate change and extreme weather events (Future Extreme scenario). Overall, the reduction in the availability of hydro-power generation due to climate change is the key driver of increased installation and operation costs, as it alone accounts for 1.3 b€ in additional expenditures.
The relative contribution of rooftop PV installation and operational costs becomes higher than that of the added utility-scale PV in the scenario with the largest weather stress on the system. The total extra capacity from large-scale solar fields is still larger than that on rooftops (see Figure <ref>). However, the expenses per unit of power associated with deploying rooftop installations are higher than those for large-scale solar fields, due to the absence of economies of scale.
§.§ Electricity generation
As a consequence of the changes in the power capacity mix induced by mitigation policies[As a reminder, all the expansion capacity optimisations are performed with the underlying assumption of coal phase-out <cit.> and an abatement of CO_2 emissions in line with European policies <cit.>.], the generation mix is substantially different in 2030 with respect to the reference generation of 2019 [The model outcome for the 2019 reference year has been validated with the data from the Transmission System Operator <cit.>.] (panel a of Figure <ref>).
Despite the cap on carbon dioxide emissions, natural gas still plays a significant role in the 2030 power mix: a reduction of 54 TWh still leaves 121 TWh of gas-generated electricity, one third of the total annual value. Solar electricity production increases markedly in all scenarios. In the Historical Mean case this is due to the CO_2 abatement target, which forces the model to install and use more solar panels. Instead, the further increase in solar power generation in the other weather scenarios is driven by the effects of climate change and more frequent severe weather events. Another crucial point is curtailment: in the case of extreme weather in both historical and future climate, excess electricity would be 5 TWh. Although curtailed power constitutes only about 1.5% of the total demand, it requires attention as it presents both challenges and opportunities. This surplus of cheap energy can be used within hard-to-abate sectors, reducing the overall costs of mitigation solutions. On the other hand, excess power could be absorbed through load shifts, moving the demand for electricity from certain services or appliances to hours of peak generation.
The total annual electricity demand (shown with a red dot in Figure <ref>) grows from 320 TWh in 2019 to 331-334 TWh depending on the climate scenario. In the Historical Mean case, the additional electrical load is based on exogenous projections for the year 2030 <cit.>, which consider the electrification of end-uses, particularly for passenger transport, industry and heating. The projections of <cit.> also take into account the simultaneous demand reduction driven by investments in energy efficiency. The change in demand with respect to the Historical Mean case is driven by the change in hourly demand for heating and cooling devices projected based on the method presented in Section <ref>. Changes in annual aggregate demand mask considerably higher impacts occurring at the hourly level in the summer.
Figure <ref> shows an example of hourly generation and demand in the four weather scenarios for a representative summer week (going from Monday to Sunday) in Sardinia. We chose this power market zone since, in the two extreme weather scenarios, it is where the majority of li-ion battery capacity is installed to take advantage of the excess renewable generation (the impact of solar PV on the net load of all the Italian regions during the summer is shown in Supplementary Figure <ref>). Inspecting the hourly behaviour of demand and production in each weather scenario confirms the capability of PV panels to produce a large amount of electricity during the central hours of the day, precisely when the demand for cooling is projected to peak. Additionally, it is worth noting the interactions with neighbouring regions (To other regions and From other regions in the legend, representing Centre-North, Centre-South and Sicily) and the utilisation of storage mechanisms, specifically the charging and discharging processes of PHS and batteries. This strategy involves charging the storage system with excess solar generation during midday, followed by discharging to supply power in the evening, reducing the use of natural gas.
§.§ Mitigation vs adaptation: trade-offs and synergies
While the main objective of this work is to investigate climate change impacts in a power system which is compliant with the legally binding European mitigation laws by 2030, in this section we compare those results with an alternative case in which climate change affects the Italian power sector when no stringent mitigation target is enforced. We undertake this analysis to determine whether achieving mitigation goals entails a trade-off with adaptation goals, that is, planning a system resilient to climate change and extreme weather conditions. To this aim, we run each climate impact scenario (Historical Mean, Future Mean, Historical Extreme and Future Extreme) assuming that the Italian power system faces no mitigation target in 2030 (Non Mitigated scenario). The results in terms of costs are outlined in Figure <ref>, showing the total annual system cost in 2030 for a Non Mitigated power system in the column on the left and for a Mitigated system in the column on the right. The waterfall bars represent the cost differences due to the achievement of the mitigation targets, always including the adaptation to the impacts of the specific weather scenario.
As expected, the change in the cost to purchase natural gas is always negative; instead, installation costs for solar technologies and batteries increase, even if the increase of the latter is negligible. The crucial difference is that the expense for the acquisition of fuels is an operational cost, while the others are initial investment costs. This means that installing renewable technologies has a higher initial expenditure but significantly enhances the energy independence of the country and its resilience to geopolitical tensions and price volatility. In both the Non Mitigated and the Mitigated scenarios, total annual system costs increase when going to the Extreme weather cases (upper vs lower graphs). When moving to the scenario resilient to the effects of climate change (upper left vs upper right graphs), expenditures decrease in the Mean weather cases. The key message is that adapting to changing average weather patterns, particularly with regard to air conditioning demand and the unavailability of thermal and hydro power generation, reduces mitigation costs. Solar panels play a crucial role in this trade-off, as they not only represent an environmentally sustainable technology but also enhance the system's resilience to increasing temperatures. When considering extreme weather conditions, the effects of climate change result in a rise in expenditures (lower left vs lower right graphs). A pivotal aspect is that extreme weather events, which will become more frequent even under the RCP 4.5 scenario around 2030, pose a relevant threat and a possible increase in expenses for the power system. Importantly, the differences between total annual system costs in the Non Mitigated and Mitigated cases are quite negligible, highlighting the advantages of planning for a system concurrently decarbonized and adapted to a changed climate.
To evaluate the trade-off between mitigation and adaptation, we can derive a cost for the abatement of the remaining CO_2 needed to achieve the mitigation targets. This is obtained from the difference between the annual system costs in the case that pursues both adaptation resilience and decarbonization goals and in the case with resilience only. This cost difference is then divided by the emissions avoided with respect to the system that only ensures resilience to climate change.
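For illustration, the implicit abatement cost is simply the ratio between the extra annual system cost of the mitigated, climate-resilient system and the emissions it avoids relative to the resilience-only counterfactual. The cost figures below are placeholders used only to show the arithmetic; only the emission levels are taken from the text.

```python
def abatement_cost(cost_mitigated: float, cost_resilient_only: float,
                   emis_resilient_only: float, emis_mitigated: float) -> float:
    """Implicit abatement cost in EUR per ton of CO2 avoided."""
    extra_cost = cost_mitigated - cost_resilient_only   # EUR per year
    avoided = emis_resilient_only - emis_mitigated      # tCO2 per year
    return extra_cost / avoided

# Placeholder cost figures (EUR/yr); emission levels (tCO2/yr) from the text.
print(abatement_cost(6.0e9, 5.0e9, 51.44e6, 44.24e6))   # ~139 EUR/tCO2
```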
Without a specific abatement target, the choice of the most cost-effective technologies alone cannot deliver the required decarbonization of the power sector in 2030 (i.e., reaching the objective of 44.24 Mton of CO_2). This holds true not only in an average weather year but, most importantly, in the most severe cases. Panel c in Figure <ref> shows the percentage variation of the counterfactual cases with no mitigation with respect to the ones with mitigation. While 30% more expensive, the ambitious mitigation case avoids the emission of 10 Mton of CO_2 (35% less CO_2), as we find that by 2030 the emissions of a power system with no ambition in its decarbonization goals would be 51.44 Mton of CO_2. The emission reduction with respect to the 2019 value (81.2 MtCO_2 <cit.>) is entirely driven by cost-optimal installations of utility-scale solar panels. Water scarcity and increased AC demand can erase the beneficial decarbonization effects obtained through technology cost reductions.
§ DISCUSSION
In this study, we employ a capacity expansion model of the Italian power system projected for 2030 to assess quantitatively the necessary investments in power generation capacity. This analysis is conducted with the dual objective of meeting mitigation targets and addressing the impacts of climate change on both power demand and generation while maintaining a high temporal resolution. The European decarbonization goal for 2030 is outlined in the Fit-for-55 policy package and mandates a 55% reduction in CO_2 emissions <cit.>. The power sector, leveraging its more mature technologies, is expected to achieve larger emission reductions. Thus, we assumed a 65% decrease in Italian electricity CO_2 emissions compared to 1990 levels, capping them at 44 Mton of CO_2 in the year 2030. We develop four alternative meteorological scenarios to thoroughly decompose the effects of climate change, distinguishing between shifts in the mean and the extremes of the weather distribution while assuming the implementation of decarbonization policies.
Overall, we find that transitioning towards a low-carbon power system capable of meeting demand under extreme weather patterns - both as occurred in the past and as amplified by climate change - necessitates a substantial increase in installed capacity, specifically in solar generation and short-term storage. Around 5-8 GW of additional PV capacity and 0.8-1.1 GWh of li-ion batteries are required as weather stress on the 2030 Italian power system increases. These expansion requirements serve to withstand, at the same time, higher future AC-induced load peaks and lower hydropower generation. Given the pivotal role that electricity is expected to play in driving the decarbonization efforts of hard-to-abate energy sectors, it becomes imperative to prioritise the ability of the power sector to address such concerns. While demand-side measures related to electricity and water use in agriculture and other end-use sectors could help alleviate stress on the electrical grid, they are outside the focus of this research.
A valuable feature of this work is that it examines the adequacy of the power sector while at the same time ensuring that decarbonization policies are implemented, therefore allowing the identification of potential trade-offs or co-benefits between adaptation and mitigation. The outcomes of the optimisations show that the largest installation of new generation technologies is utility-scale PV (40-42 GW), since this is the most cost-efficient option in the Italian peninsula (see Figure <ref>). Rooftop PV requires an addition of 12 GW to meet decarbonization goals, and an extra 5 GW is needed in the most extreme weather cases. This decentralised technology is crucial in reducing the stress on the electric grid and satisfying the power load locally. Furthermore, we find robust evidence that accounting for extreme weather events (both in the Historical and Future Extreme scenarios) affects the optimal capacity mix: a power system resilient to extreme events requires storage to help balance water scarcity and thus hydropower unavailability. Climate change and extreme weather might hinder the availability of programmable reservoir hydropower plants, and non-dispatchable RES might not be sufficient to meet the hourly load. Investments in storage capacity could be fundamental to alleviate the burden on the system and to mitigate the volatility and unpredictability in power generation. In this study, two storage options are available in the power system model: a short-term storage with li-ion batteries and a long-term one with hydrogen tanks, electrolysers and fuel cells. Given the short-term nature of meteorological impacts, lithium-ion batteries are preferred over hydrogen storage for energy management. Chemical batteries, such as lithium-ion, are particularly well-suited for daily and short-term energy storage due to their minimal energy losses during charge and discharge cycles <cit.>, making them an effective alternative to weather-dependent hydro power. Moreover, installing electrolysers, pressurised tanks and fuel cells has a high cost, which becomes viable only in more severe abatement scenarios.
The generation mix is markedly different from the reference generation of 2019, with solar electricity production showing substantial increases across all scenarios. This rise is attributed to the stringent CO2 abatement targets and the effects of climate change, which include more frequent severe weather events. Despite the cap on carbon emissions, natural gas remains a crucial component in the transition of the power mix, providing 121 TWh of electricity, one-third of the total annual power supply. Our analysis underscores the importance of addressing curtailment, particularly in extreme weather scenarios where excess electricity generation reaches 5 TWh. Although this constitutes a small fraction of total demand, it presents opportunities for integration into hard-to-abate sectors. It highlights the potential for load shifts to manage electricity demand more efficiently. The projected increase in annual electricity demand, from 320 TWh in 2019 to 331-334 TWh in 2030, reflects the anticipated electrification of end-uses, balanced by investments in energy efficiency. However, this aggregate increase masks significant hourly variations, especially during summer peaks.
Finally, this study underscores the vital interplay between mitigation and adaptation in the power sector. The analysis reveals critical insights into the costs and benefits associated with achieving mitigation goals in the context of climate change impacts on the Italian power system. We compare scenarios with and without stringent mitigation targets (denoted Mitigated and Non Mitigated cases, respectively), considering the set of different meteorological scenarios deployed. In the Mitigated cases, we demonstrate that, while initial investments in renewable technologies are higher, they significantly reduce operational costs associated with fuel purchases. This shift from operational to capital expenditures enhances energy independence and resilience to geopolitical frictions and price volatility. The increase in capital investments and the decrease in operational and fuel purchase expenditures require a careful evaluation of the financial aspects linked to this shift, which falls outside the scope of this work. We leave for further analysis the assessment of how the change in these financial flows affects the actors of the supply chain, as well as the possible regulatory incentives for the new capital investments; both aspects are crucial for achieving an effective energy transition.
Solar panels emerge as a pivotal technology in the trade-off between adaptation and mitigation of the power system, contributing to environmental sustainability and bolstering system resilience against rising temperatures. Notably, the study highlights that the total annual system costs remain relatively stable between the Non Mitigated and Mitigated scenarios, underscoring the feasibility and advantages of concurrently pursuing decarbonization and climate adaptation strategies. These findings underscore the critical importance of integrating renewable energy investments into long-term planning to develop a robust, sustainable, and climate-resilient power system.
§ CONCLUSIONS
Our findings indicate that transitioning to a low-carbon power system in Italy by 2030, capable of meeting demand under increasingly extreme weather conditions, requires substantial investments in PV generation and storage capacity. Utility-scale PV is identified as the most cost-efficient option, with rooftop PV playing a crucial role in alleviating grid stress. Furthermore, resilience to extreme weather events necessitates a robust storage strategy, with lithium-ion batteries preferred for short-term energy management due to their round-trip efficiency. Moreover, our results display the projected generation mix for 2030, showing solar electricity production increasing substantially due to stringent CO_2 abatement targets and adaptation to impacts of climate change. Despite the emphasis on renewable energy, natural gas still supplies one-third of total annual electricity, while managing curtailment is crucial for redirecting excess generation to hard-to-abate sectors. Finally, this study highlights the crucial interaction between mitigation and adaptation strategies in the power sector, demonstrating that while initial investments in renewable technologies are higher under stringent mitigation scenarios, they lead to significant reductions in operational costs and enhance the robustness of the system to climate change impacts. Importantly, the findings indicate that total annual system costs remain stable between scenarios with and without stringent mitigation targets, affirming the feasibility of simultaneously pursuing decarbonization and climate adaptation. These results underscore the necessity of integrating renewable energy investments, particularly on solar panels, within long-term planning to develop a sustainable and climate-resilient power system, given their low dependence on meteorological conditions.
This analysis is not without caveats. The impacts of mean climate shifts and extreme events on the transmission lines (e.g. due to overheating of the cables) are not included. In addition, we do not perform any sensitivity analyses on economic variables, which could shift technical choices toward different options. This can be a crucial consideration for the resilience of transition scenarios; however, it does not constitute the primary focus of this study.
Future research can expand the evidence provided in this work in several directions. A more detailed power dispatch model could investigate options to strengthen the resilience of power systems not only through changes in the dispatch mix, but also by exploiting balancing services, cross-border trade and demand-side management. Furthermore, given that conventional generation technologies play a dominant role in setting wholesale prices as they meet the net-load, i.e. residual demand not satisfied by renewable sources, extreme weather events may result in wholesale price fluctuations. Understanding the characteristics of power markets’ operations during extreme weather may bring to the surface possible limitations of the current power systems, leading not only to volatility in power prices but possibly also to higher costs for managing the grid. Models with a detailed representation of the use of electric appliances and of the behavioural aspects of consumption can be adopted to investigate the demand-side potentials for reducing the peak-load during extreme events.
§ ACKNOWLEDGEMENTS
A.D.B has received support from the GRINS (PNRR) project; F.C. has received support from the DIGITA (PRIN) project.
We acknowledge financial support from the Italy's National Recovery and Resilience Plan (PNRR), grant agreement No PE0000018 - GRINS – Growing Resilient, INclusive and Sustainable.
CRediT authorship contribution statement
A. Di Bella and F. P. Colelli: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing.
Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work the authors used ChatGPT in order to improve the language of this paper. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
§ SUPPLEMENTARY RESULTS
§.§ Effects of rooftop PV generation on centralised power demand
Rooftop PV generation is a form of distributed power generation, thus it has the advantage of relieving stress on the high-voltage grid by producing locally and decreasing the centralised electricity load. In this sense, even if the overall electricity demand increases due to climate change or severe weather conditions, the remaining power request from centralised technologies is reduced, as illustrated in Figure <ref>. In the two Extreme scenarios the power load is almost halved by the large presence and generation provided by rooftop PV panels. This technology represents a pivotal resource within mitigation strategies, not only to address increased household electricity consumption, particularly for air conditioning, stemming from climate change-induced warmer climates, but also to alleviate strain on centralised power generation and transmission.
§.§ Mitigation and adaptation trade-off
Figure <ref> represents the additional (if positive) or reduced (if negative) costs that the system incurs under increased weather stress in the mitigated scenarios compared to the scenario without any imposed climate goals. The black dots represent the difference in total annual system costs between the Mitigated and Non Mitigated scenarios. Their presence on the positive side of the graph highlights that achieving both adaptation resilience and decarbonization goals leads to a greater increase in annual power system costs compared to a system addressing only adaptation constraints. The delta in annual system expenditures decreases when looking at the corresponding Future cases (Future Mean with respect to Historical Mean, Future Extreme with respect to Historical Extreme). This indicates that adapting to changing weather, in terms of AC demand and thermal and hydro power generation unavailability, reduces the mitigation costs. Solar panels play a key role in this trade-off, as they not only represent an environmentally friendly technology but also enhance system resilience to rising temperatures.
Another critical observation from Figure <ref> is that, as anticipated, all the mitigated scenarios exhibit higher expenditures for PV panels and batteries, while reducing costs associated with natural gas acquisition. This has relevant implications in terms of energy security: in fact, the Italian power system in all mitigated cases would be more independent of energy imports.
Another graph that we deem important to show is Figure <ref>. It offers a comprehensive overview of the cost items embedded in the different total annual system costs, giving a clear picture of how the expenditures for the Italian power sector in 2030 would be allocated. A notable point is the absence of CAPEX expenses for rooftop PV in the Non Mitigated cases. This distributed technology has a larger cost with respect to Utility Scale solar generation since it lacks economies of scale. It becomes critical, though, when achieving mitigation goals, to enhance the production of renewable electricity and ensure the operation of the power system when higher temperatures have an impact on hourly AC demand.
§ SUPPLEMENTARY METHODS FOR CLIMATE IMPACTS
§.§ Air Conditioning load estimation
Coefficients are estimated by <cit.> through a pooled cross-section–time-series regression with a logit link function, Λ <cit.>. The variables used to identify the level of AC adoption are the 10-year moving average CDD24s (𝒞), the logarithm of the 10-year moving average annual per capita income (y) and the logarithm of the 10-year moving average annual urbanization rate (u):
Λ (s_i,t) = log( s_i,t/(1-s_i,t)) = Zα = α_i^0 + α^Y y_i,t + α^C 𝒞_i,t + α^YC (y_i,t·𝒞_i,t) + α^U u_i,t
with location fixed effects α^0, and estimated parameters α^Y and α^C that capture the direct effects of income and heat exposure, and α^YC that captures their interaction. The functional form yields nonlinear effects of the linear predictors, governed by the logistic transformation.
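For concreteness, the minimal sketch below evaluates this logit specification for a single location-year. The coefficient values, like the input values, are illustrative placeholders and not the estimates obtained by the cited study.

```python
import numpy as np

def ac_adoption_share(cdd24, income_pc, urban_rate, alpha0, aY, aC, aYC, aU):
    """Predicted AC adoption share s from the logit specification above.

    cdd24      : 10-year moving-average cooling degree days (base 24 C)
    income_pc  : 10-year moving-average annual per-capita income
    urban_rate : 10-year moving-average annual urbanization rate (0-1)
    The alpha coefficients are placeholders for the estimated parameters.
    """
    y = np.log(income_pc)
    u = np.log(urban_rate)
    z = alpha0 + aY * y + aC * cdd24 + aYC * y * cdd24 + aU * u
    return 1.0 / (1.0 + np.exp(-z))     # inverse logit transformation

# Illustrative (made-up) coefficients and inputs
s = ac_adoption_share(cdd24=350.0, income_pc=32000.0, urban_rate=0.71,
                      alpha0=-9.0, aY=0.6, aC=0.004, aYC=1e-4, aU=0.5)
print(f"predicted AC adoption share: {s:.2f}")
```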
The non-linear temperature-load function is estimated with a fixed-effects model of per capita daily electric load, q, at the European member state level on each day from 2015 to 2019. Population-weighted temperatures are binned into k intervals of 3^∘C width, B_k=[T_k, T_{k+1}), to construct a k-vector of indicators that track whether each day's maximum temperature falls within a given interval:
𝒯_k = 1 ·{ T ∈ B_k } + 0 ·{ otherwise }.
Suppressing location and time subscripts, the empirical specification estimated by <cit.> is:
𝔼 [ln q] = ∑_k β_k,v^T 𝒯_k + ∑_k β_k,v^TAC( 𝒯_k · s ) + β_v^Y y + controls
where controls include state or country fixed effects that absorb variation associated with unobserved temporally-invariant confounders, and day-of-week, season and year fixed effects that control for idiosyncratic time-varying influences that are unrelated to temperature. The elements of β^T account for consumers' adjustments of stocks of energy-using durables, explicitly captured by the vector of interaction coefficients, β^TAC. The fitted coefficient vectors β^T and β^TAC provide flexible piece-wise linear spline representations of macro-regions' distinct nonlinear temperature response functions (see Supplementary Information).
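The short sketch below illustrates, under assumed bin edges and placeholder coefficient vectors, how the binned-temperature indicators and the fitted spline coefficients would be combined to predict the expected log load; it is not the estimation code used in the study.

```python
import numpy as np

# 3C-wide maximum-temperature bins; the edges are illustrative assumptions
BIN_EDGES = np.arange(-6.0, 48.0, 3.0)

def temperature_indicators(tmax):
    """Return the k-vector of bin indicators T_k for a day's maximum temperature."""
    ind = np.zeros(len(BIN_EDGES) - 1)
    k = np.searchsorted(BIN_EDGES, tmax, side="right") - 1
    if 0 <= k < len(ind):
        ind[k] = 1.0
    return ind

def expected_log_load(tmax, ac_share, log_income, beta_T, beta_TAC, beta_Y, fe=0.0):
    """E[ln q]: temperature response plus its interaction with the AC share."""
    T = temperature_indicators(tmax)
    return T @ beta_T + (T * ac_share) @ beta_TAC + beta_Y * log_income + fe

# Placeholder coefficient vectors standing in for the fitted splines
nbins = len(BIN_EDGES) - 1
rng = np.random.default_rng(0)
beta_T, beta_TAC = rng.normal(0, 0.02, nbins), rng.normal(0, 0.01, nbins)
print(expected_log_load(tmax=38.0, ac_share=0.6, log_income=10.3,
                        beta_T=beta_T, beta_TAC=beta_TAC, beta_Y=0.4))
```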
§.§ Impacts on thermal generation outages
We study the potential unavailability of thermal power generation units due to extreme temperatures by developing a regression model based on outage information collected from 2018 to 2022 in Italy. The dataset <cit.> includes information on over 4000 outages, of which 2352 concern gas- and 1887 coal-fired generation units. The method adopted partially follows previous analyses <cit.>, but expands on the literature by providing country-specific and fuel-specific responses. We consider as the key dependent variable the occurrence of an outage in each power plant, represented by a dichotomous variable c taking the value 0 or 1, and identify the influence of daily maximum temperatures (t) and water runoff anomalies (r), controlling for month (m) and location (p) fixed effects. Since temperature and runoff anomalies can have non-linear impacts on the operations of thermal generators, non-linear effects are captured by adding a quadratic or, alternatively, a cubic term to each. Furthermore, in the most detailed specification, we identify fuel-specific impacts by including a set of interaction terms between the temperature and runoff anomalies and a categorical variable representing the power plant fuel k. We estimate the following equation through a logistic regression, so that the log odds of the outcome are modelled as a combination of the predictor variables:
Λ (c_i,t) = log( c_i,t/(1-c_i,t)) = f (t_i,t) · k + z (r_i,t) · k + ψ k + μ p + ν m + ε
We compare the different polynomial specifications based on standard performance metrics, and select the model with a fuel-specific cubic response to temperatures (Figure <ref>).
We find that the likelihood of occurrence of an outage for coal-fired generation increases considerably when daily maximum temperatures surpass 35°C, reaching 10-50% at 40°C and 50-90% at 45°C, depending on the water runoff anomaly. Gas-fired generation is less sensitive to high temperature and low water runoff levels, but the likelihood of an outage is non-negligible and between 5-25% at 40°C. Regression results are shown in Table <ref>. We use the estimated available capacity function in conjunction with future daily maximum temperature anomalies to simulate daily thermal power generation availability around 2030 and apply the results to the oemof Italian power system model optimisations.
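As an illustration of this estimation strategy, the sketch below fits a logistic regression with a fuel-specific cubic temperature response on synthetic data. The fixed effects are omitted and all numbers are made up, so it only mirrors the structure of the specification above rather than reproducing the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000
tmax = rng.uniform(0, 45, n)                 # daily maximum temperature [C]
runoff = rng.normal(0, 1, n)                 # water-runoff anomaly
is_coal = rng.integers(0, 2, n)              # fuel indicator (coal vs gas)

# Synthetic outage probabilities loosely mimicking the estimated response shape
logit = -9 + 1e-4 * tmax**3 * (1.0 + 0.4 * is_coal) - 0.5 * runoff
outage = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fuel-specific cubic response in temperature, linear runoff term
# (month and plant fixed effects omitted here for brevity)
X = np.column_stack([
    tmax, tmax**2, tmax**3,
    is_coal * tmax, is_coal * tmax**2, is_coal * tmax**3,
    runoff, is_coal * runoff, is_coal,
])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, outage)

# Predicted outage probability for a coal unit at 42 C and average runoff
x_new = np.array([[42, 42**2, 42**3, 42, 42**2, 42**3, 0.0, 0.0, 1]])
print(f"P(outage): {model.predict_proba(x_new)[0, 1]:.2f}")
```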
§.§ Impact on variable renewable sources
We describe here the methodology for assessing the impacts of climate change on variable renewable generation production. We only evaluate these impacts in the two scenarios Historical Mean and Future Mean, since the optimization of power system capacity expansion is not based on extreme low- or high-wind and solar generation occurrences, which are typically dealt with by transmission system operators through ancillary services and balancing energy markets. In other words, we only consider how climate change may alter the solar and wind production patterns by taking into account the hour-, calendar day- and region-specific mean production value across the historical and future year period. For both wind and solar power potential generation, we exploit the projections developed by <cit.>. For wind generation, we retrieve the time-series of normalised wind potential at the NUTS3 level directly from the database, then we aggregate at the bidding-zone level. As for solar, we consider the time-series of hourly population-weighted direct normal irradiation (DNI), the amount of solar radiation per unit area [W/m^2] received by a surface perpendicular to the sun rays. Temperature is also included in the analysis, since the power output that a solar panel p is able to produce at each hour h in its location n depends on ambient temperature. The following formula, taken from <cit.>, shows the relationship, where T_n,h is expressed in degrees Celsius and DNI_n,h is the DNI value, in location n and hour h:
P_n,h = η_p S_p DNI_n,h (1-0.005(T_n,h-25))
The parameters η_p and S_p represent the conversion coefficient [%] and the surface area [m^2] specific to the photovoltaic unit p. These values are not relevant for the calculation since P_n,h is then normalised to a time-series, accounting for the maximum values in each scenario and region, to obtain the solar potential. η_p itself is dependent on temperature, as explained by Evans <cit.>, following the equation:
η_p,T = η_p,Tref· (1 - β_ref· (T - Tref))
η_p,Tref refers to the panel efficiency at Tref, the reference temperature (which is commonly 25°C) and solar radiation of 1000 W/m^2. β_ref is the temperature coefficient and it is generally assumed to be 0.0004 K^-1 <cit.>. Thanks to these equations we can coherently consider the impact of the change of DNI and temperature in the future climate scenarios on PV panels power output.
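A minimal sketch of the temperature-corrected PV potential computation is given below. The panel efficiency, temperature coefficient and toy hourly profiles are assumptions for illustration, and the output is normalised so that the panel-specific constants drop out, as described above.

```python
import numpy as np

def panel_efficiency(temp, eta_ref=0.20, beta_ref=0.0004, t_ref=25.0):
    """Temperature-dependent panel efficiency (Evans relation)."""
    return eta_ref * (1.0 - beta_ref * (temp - t_ref))

def pv_potential(dni, temp, area=1.0):
    """Hourly PV output, normalised to its maximum so that eta_p and S_p cancel."""
    p = panel_efficiency(temp) * area * dni * (1.0 - 0.005 * (temp - 25.0))
    return p / p.max()

hours = np.arange(24)
dni = np.clip(900 * np.sin(np.pi * (hours - 6) / 12), 0, None)   # toy clear-sky profile [W/m2]
temp = 22 + 12 * np.sin(np.pi * (hours - 8) / 14)                # toy ambient temperature [C]
print(np.round(pv_potential(dni, temp), 2))
```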
In Figure <ref> we evaluate the variation in electricity generation and additional installed capacities for the Italian case study in 2030 in the Future Mean weather scenario with respect to the Historical Mean one. Again, we do not focus on the Historical Extreme and Future Extreme scenarios because the high sensitivity of renewable sources to weather is informative for dispatch optimisation problems, while it is not a key driver for power system investment planning. The input shocks included in the model, namely the mean level of the normalised wind power potential and the DNI, used to project future impacts for wind and solar, respectively, are presented in Figure <ref>, Panels c and d. As shown in Figure <ref>, the changes
§.§ Additional figures
Figure <ref> shows that no substantial difference can be found in the value of the input projections of hydro production and electricity demand change by changing historical base year period of CMIP6 model output from 1981-2020 to 2001-2020.
Figure <ref> shows that no substantial difference can be found in the value of the input projections of hydro production and electricity demand change from RCP 4.5 and RCP 8.5 in 2030.
§ SUPPLEMENTARY METHODS FOR POWER SYSTEM MODELLING
We employ a power system model of the Italian electricity system <cit.>, developed with Oemof (Open Energy MOdelling Framework), an open-source energy modelling tool in Python <cit.>. This framework enables the creation of an energy network and then uses a solver to determine the energy balances (in this case Gurobi was utilized, but Oemof can also work with open-source solvers). The same power system model for Italy has been used in previous works <cit.>. To run simulations and investment optimizations, the Oemof expansion capacity script requires the following inputs, provided through an Excel file. The spatial resolution of the model is based on the Italian electricity market zones, which, since 2021, consist of 7 different areas <cit.>, organized as shown in Table <ref>. Expanding on the subsection of the paper that briefly discusses the main elements of the model, Table <ref> shows the breakdown into regions of each model node, while Table <ref> outlines the powerline capacities.
§.§ Key features
Transmission lines. Market zones are linked in the model through high-voltage transmission lines. Capacity data are taken from the Terna Development Plan of 2019 <cit.> and updated according to the 2021 Plan <cit.> and the National Energy and Climate Plan (PNIEC, Piano Nazionale Integrato per l'Energia e il Clima <cit.>), assuming the transmission capacity targets for 2026 are fully achieved by 2030. The spatial subdivision and the values of exchangeable GW between zones are visualised in Figure <ref>.
Demand. For each region, a time series of the hourly power load is considered, obtained from Terna Download Center web page, taking data for the year 2021 <cit.>. Terna projects an annual national electricity demand of 331 TWh by 2030, taking into account the ongoing electrification of the transportation, heating, and industrial sectors <cit.>. The distribution of this demand into market zones in 2030 is assumed to mirror that of 2021. Consequently, for 2030, we maintain the same hourly demand profiles, adjusting them upward to align with the anticipated national load of 331 TWh for the year.
Commodity sources. For power production, the resources considered in this case study are natural gas, water and imported electricity. Nowadays Italy still has some coal-fired power plants running <cit.>, but according to the Italian climate and energy strategy <cit.>, the phase-out of this polluting source will take place in 2025. Therefore, this work assumes that in 2030 there will be no coal power plants in operation. Import refers to the net electricity imported from other countries and it has been designed as a source of generation. Italy exchanges electricity with France, Switzerland, Austria, Slovenia, Greece, Malta and Montenegro, but the main source of imports is France <cit.>. For this reason we chose to model import as a source of generation only for the North market zone and with a negligible emission factor <cit.>, thanks to the large fraction of French electricity produced with nuclear power. For commodity sources, variable costs are specified; for import and hydro, costs are set to zero. Natural gas prices for 2030 are derived from the World Energy Outlook 2022 <cit.>, assuming the Stated Policies case, thus 29 €/MWh of thermal energy. Specific emission factors are adopted from technical specifications in <cit.>.
Power plants transformers. For natural gas-fired plants, total installed capacities are imposed at 2021 values <cit.>, with no possibility to expand. Import power derives from the assumption of constant import throughout the whole year, so a total amount of 42.8 TWh <cit.> reduced by 4.75 TWh due to transmission and distribution losses, results in 4.34 GW in each hour. For hydro power plants, efficiencies for the conversion to electricity are offered by <cit.>, import has no efficiency and for gas-fired plants efficiency is evaluated as an average of the whole Italian gas power plants park (55.1%) <cit.>. For gas plants cost of operation is also specified, set at 4 €/MWh <cit.>.
Renewables. The Renewable Energy Sources RES embedded in this study are rooftop and utility scale photovoltaic, wind onshore and offshore, run-off-river, reservoir hydro, biomass and geothermal generation (actually present only in Tuscany, thus with 771.8 MW in Centre-North). Existing offshore capacity is set to zero and the model can expand generation capacity of the two types of solar and the two categories of wind power. In each node the existing installed power in April 2022 is the starting point, aggregated in market zones. Data are taken from Terna <cit.>, assuming that Utility Scale photovoltaic (PV) has a size larger than 1 MW, and from the European Commission's Joint Research Centre (JRC) <cit.> for run-off-river and reservoir hydro. PV panels in large fields or on rooftops are distinguished in the model by different existing capacities, investments and operational costs and maximum potential for available land. The existing capacity for each power generation technology in each region is represented in Table <ref>.
Storages. In 2021, in the Italian power system only Pumped Hydro Storage (PHS) power plants can store electricity, since there are no batteries or hydrogen storage connected to the grid. Values for PHS pumping and generation power capacities and the nominal retainable energy are provided by JRC database <cit.>, with a national storage value of 560 GWh. Other parameters supplied are a capacity loss of 8.33 · 10^-6 <cit.> (referred to the stored energy), inflow and outflow efficiencies, choosing respectively of 85% and 90% to get a Round Trip Efficiency (RTE) of 76.5%, in line with numbers from the U.S. Department of Energy <cit.>. The optimisation can expand the storage capacity for lithium-ion batteries and hydrogen storage technology, composed of electrolysers, fuel cells and hydrogen tanks. Further details can be found in Appendix <ref>.
§.§ Dispatch optimization
In this work we deploy both dispatch and expansion capacity optimizations, based on the chosen scenario. The main goal of a dispatch optimization is to find the generation mix which meets the demand at the lowest operational cost. The solver will start using the cheapest sources per MWh until the energy needed is provided. The objective function developed for this work can be expressed by Eq. <ref>, describing the minimum of the cost to operate the system C^ operation.
Min C^ operation = Min( ∑_t=1^T∑_n=1^N∑_s=1^SE_t,n,s·vc_t,n,s)
where:
t = analysed time step, from 1 to T
n = node considered, from 1 to N
s = generation source, from 1 to S
E_t,n,s = energy generated by the source s, located in the node n at the time step t.
vc_t,n,s = variable costs of generation for the energy source E_t,n,s
This main objective function must respect a series of constraints. They will be explained and analysed further, following Prina et al. work <cit.>. The major constraint is to meet the energy demand, considering also the charge and discharge energy of the storages and the transmission losses. This has to be valid for every time step and it is possible to have an excess of generation.
∑_s=1^S (E_t,n,s + E_t,n,st^ charge - E_t,n,st^ discharge - E_t,p^ transmission loss) = D_t,n + E_t,n^ excess
E_t,n,st^ charge = energy employed for charging storage technology st at time t in node n
E_t,n,st^ discharge = energy employed for discharging storage technology st at time t in node n
E_t,p^ transmission loss = energy lost due to transport losses at time t in powerline p for node n
D_t,n = load demand at time t for node n
E_t,n^ excess = surplus of generated energy at time t for node n
Another restriction is the respect of the maximum power given as input for each generator unit. Non-dispatchable renewable units s in node n can supply up to their nominal capacity P_n,s^ non-dispatchable according to the source profile throughout the year. Fossil fuel and dispatchable renewable plants (such as reservoir hydroelectric), instead, can always provide power up until their stated capacity, P_n,s^ dispatchable for node n and source s.
0 ≤P_t,n,s≤P_n,s^ dispatchable
0 ≤P_t,n,s≤P_n,s^ non-dispatchable·a_t,n,s
P_t,n,s = power provided by the source s (can be dispatchable or not) at time t in node n
a_t,n,s = availability of the renewable source s at time t in node n. It can be a number between 0 and 1 (0 being no obtainable energy and 1 being maximum power available).
In addition, for dispatchable generators, an efficiency is presented to calculate the amount of resource exploited, according to the following equation. P_t,n,s^ dispatchable in this case is referred only to dispatchable generation units.
E_t,n,s^ resource = P_t,n,s/η_n,s
E_t,n,s^ resource = quantity of commodity source employed by resource s at time t in node n.
η_n,s = efficiency of the generation source s in node n.
Energy can be exchanged between two nodes, passing through transmission lines that have a nominal capacity value. P_p^ powerline represents the maximum exchangeable power in the transmission line p, which goes from one node to another. Each powerline p has losses related to the exchange of power, taken into account by the efficiency η_p^ transmission losses. The energy that can pass through powerline p at timestep t is E_t,p^ exchanged.
E_t,p^ exchanged≤P_p^ powerline·η_p^ transmission losses·Δ t
Δ t = timestep (in this case one hour)
st = storage technology, from 1 to ST;
Finally, storage technologies need to respect constraints and balances for the daily dispatch, bearing in mind also self-discharge. The first crucial restriction is that the storage content SC_t,st of the storage technology st at the time step t cannot exceed the maximum storage content SC_st^ nominal for each technology st. In this work, Oemof has been set to maintain the same storage content at the beginning and the end of the time frame considered, thus stored energy is a means to achieve flexibility in the system.
SC_t,st≤SC_st^ nominal
Storage units have to observe another limitation, regarding the storage balance: the equation takes into account the power to charge the storage st at time t in node n, P_t,n,st^ charge, and the power to discharge it, P_t,n,st^ discharge, with their respective efficiencies, η_st^ charge and η_st^ discharge. The storage unit also undergoes a self-discharge process, accounted for by the efficiency η_st^ self applied to the stored energy. Eventually, the exchanged power modifies the storage content of the unit from one time step to another (a minimal numerical sketch of this balance follows the definitions below).
(P_t,n,st^ charge·η_st^ charge - P_t,n,st^ discharge/η_st^ discharge) ·Δ t - (SC_t,st - SC_t-1,st ) ·η_st^self = SC_t,st - SC_t-1,st
SC_t,st - SC_t-1,st = stored energy at time t in storage technology st, given by the difference of the storage content from one time step to another.
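For illustration, the following sketch advances the storage content through a few time steps using the balance written above. The efficiencies and power flows are placeholder values, and this is not the oemof implementation itself.

```python
def storage_content_update(sc_prev, p_charge, p_discharge,
                           eta_charge=0.95, eta_discharge=0.95,
                           eta_self=8.33e-6, dt=1.0):
    """Advance the storage content by one time step following the balance above.
    Powers in MW, contents in MWh, dt in hours; all values are illustrative."""
    flow = (p_charge * eta_charge - p_discharge / eta_discharge) * dt
    # self-discharge applied to the step change, exactly as in the written balance
    return sc_prev + flow / (1.0 + eta_self)

sc = 100.0                                       # initial content [MWh]
for p_c, p_d in [(50, 0), (0, 30), (20, 0), (0, 60)]:
    sc = storage_content_update(sc, p_c, p_d)
    print(round(sc, 2))
```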
§.§ Expansion capacity investment
The Oemof framework is suitable for investment optimisation analyses: the solver can decide how to invest in order to expand the technology capacities if this helps to satisfy the energy demand at an overall lower cost. In general, all Oemof components can be expanded by introducing the investment option in the code. In this paper the generation technologies that can be expanded are rooftop photovoltaic, Utility Scale photovoltaic, on-shore wind and off-shore wind. The chosen RES generation technologies are in line with National Development Plans for Italy <cit.>, while others are excluded for different reasons: for example, nuclear plants cannot be added to the electricity mix due to political choices in the country. The expandable storage options are lithium-ion batteries, hydrogen tanks, electrolysers and fuel cells. Energy storage is already provided in the current system by Pumped Hydro Storage (PHS), but in the model it cannot be increased since the exploitation limit has already been reached and there is no further space to build this type of power plant, which requires a large amount of construction and affects the environment <cit.>. Li-ion batteries are the biggest market player in the electro-chemical storage landscape <cit.>, while many studies suggest that hydrogen can be an interesting option for power system adequacy, since its cost can be reduced by exploiting it as an energy carrier in other hard-to-abate sectors <cit.>. The model thus can provide short-term storage with batteries and long-term storage with hydrogen, offering a comprehensive solution for storing the electricity produced.
Oemof offers an annuity function that provides the Energy Periodical Cost (Ep cost) for each technology, which is the essential variable for deciding whether an investment is convenient. The Ep cost is calculated according to Eq. <ref>. Some inputs are needed for each technology in order to compute the Ep cost: capital expenditure (capex) in euros per unit of installed capacity, lifetime in years and a Weighted Average Cost of Capital (WACC) to account for the cost of financing the investment.
Ep cost = oemof.tools.economics.annuity(capex, lifetime, wacc) = capex·wacc· (1+wacc)^lifetime/((1+wacc)^lifetime-1)
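A minimal stand-alone version of this annuity computation is sketched below; the capex, lifetime and WACC figures are illustrative and are not the cost assumptions used in the model.

```python
def ep_cost(capex, lifetime, wacc):
    """Equivalent periodical (annualised) cost of one unit of installed capacity,
    following the annuity formula above: capex in EUR/MW, lifetime in years,
    wacc as a fraction."""
    q = (1.0 + wacc) ** lifetime
    return capex * wacc * q / (q - 1.0)

# Illustrative figures (not the study's cost assumptions): utility-scale PV
print(round(ep_cost(capex=600_000.0, lifetime=25, wacc=0.05), 1))  # EUR/MW/yr
```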
Besides the economic parameters, the solver requires as inputs the existing technology capacity of the technology (which can be None) and its maximum potential foreseen for the future year to which the study is referring to. In the expansion capacity optimisation the objective function has been modified, adding also the investment made on the various technologies and minimising the total cost C^ tot.
Min C^ tot = Min ( C^op + ∑_n=1^N (∑_s=1^SC_s·P_n,s^added + ∑_st=1^STC_st·E_n,st^added + ∑_p=1^PC_p ·P_p^added ) )
C_s = capital cost for the expansion of the resource s
C_st = capital cost for the expansion of storage technology st
C_p = capital cost for the expansion of power lines
P_n,s^ added = power capacity added for source s in node n
E_n,st^ added = storage capacity for storage st in node n
P_p^ added = power capacity added for power line p
The output of this function is the total annual cost for operation and optimization of the system. Insights are also given about the amount of power installed for each technology in each node; this is very helpful information for policy makers to understand where incentives should be targeted.
The Oemof framework allows a constraint on system emissions to be introduced by modifying the main expansion capacity script. To implement it correctly, it is necessary to provide emission factors for the polluting commodity sources, expressed as the quantity of contaminants per unit of energy supplied. The total amount of emissions, CO_2^total, is calculated as follows.
CO_2^total = ∑_t=1^T∑_n=1^N∑_s=1^SP_t,n,s^ fossil source·co_2_s^factor
P_t,n,s^ fossil source = thermal power at time t in node n from the fossil commodity source s
co_2_s^factor = emission factor specific to the commodity source s in [ton of CO_2/MWh_th]
|
http://arxiv.org/abs/2409.02878v1 | 20240904170402 | Measuring Electron Energy in Muon-to-Electron Conversion using Holographic Synchrotron Radiation Emission Spectroscopy | [
"Nicholas Cutsail",
"Johan Vonk",
"Vivek Singh",
"Yury G Kolomensky"
] | physics.ins-det | [
"physics.ins-det",
"hep-ex"
] |
Department of Physics, University of California, Berkeley - 94720
Department of Physics, University of California, Berkeley - 94720
[email protected]
Department of Physics, University of California, Berkeley - 94720
Department of Physics, University of California, Berkeley - 94720
Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley - 94720
§ ABSTRACT
The coherent conversion of a muon to an electron in a nuclear field has been one of the most powerful methods to search for Charged Lepton Flavor Violation (CLFV). Recent advancements have significantly enhanced the sensitivity of μ→ e searches, primarily driven by advancements in muon beamline design and low-mass tracking detectors, which afford exceptional momentum resolution. Nevertheless, the performance of these detectors is inherently limited by electron scattering and energy loss within detector materials. To overcome these inevitable limitations, we propose a novel holographic track reconstruction leveraging synchrotron radiation emitted by electrons. Similar to cyclotron radiation emission spectroscopy (CRES) which has demonstrated outstanding energy resolutions for low-energy electrons, our technique relies on a precision measurement of cyclotron frequency, but in a regime where photons are emitted stochastically and are projected onto a 2-dimensional inner surface of a solenoidal magnet. We outline the concept of such a massless holographic tracker and feasibility of employing this innovative detection strategy for μ→ e conversion. We also address pertinent limitations and challenges inherent to the method.
Measuring Electron Energy in Muon-to-Electron Conversion using Holographic Synchrotron Radiation Emission Spectroscopy
Yury G Kolomensky
September 9, 2024
======================================================================================================================
§ INTRODUCTION
The Standard Model (SM) of particle physics assumes the fundamental notions of lepton number and flavor conservation <cit.>. Both are accidental symmetries, and a theoretical framework that explains the underlying symmetry leading to these conservation laws still needs to be discovered. The observation of neutrino oscillations, which is possible only if neutrinos have mass, has confirmed lepton flavor violation in the neutral lepton sector and implies that all processes involving lepton flavor violation should manifest at some level in perturbation theory. Therefore, Charged Lepton Flavor Violation (CLFV) remains a subject of intense theoretical and experimental interest that will offer valuable insights into the nature of the new physics beyond the SM if observed <cit.>. Currently, searches for μ^+→ e^+γ, μ→ e^+e^-e^+, and coherent conversion of μ^-→ e^- in the field of a nucleus stand out among all CLFV investigations, offering the most stringent constraints <cit.>. These channels have relatively clean final states, consisting only of electrons and photons, and allow an experiment to perform a nearly background-free search using high-intensity muon sources. The essence of our study revolves around the experimental identification of μ^-→ e^- conversion, highlighting its distinctive experimental advantages alongside inherent complexities. It relies on negative muons from a muon beam captured by a target material, forming muonic atoms that cascade down to the ground state. In the SM, muons decay in atomic orbit (DIO) or undergo nuclear muon capture. DIO involves the decay of the bound-state muon to an electron and neutrinos, while in nuclear muon capture, the muon combines with a nucleus to produce neutrinos. If μ^-→ e^- conversion occurs, an electron is produced without neutrinos. This electron has a specific energy determined by the muon binding energy and the recoil energy of the nucleus:
E_μ e = m_μc^2 - B_μ(Z) -R(A)
where B_μ(Z) is the atomic binding energy of the muon and R(A) is the atomic recoil energy for a muonic atom with the atomic number Z and the mass number A. With only a monoenergetic electron in the final state, μ^-→ e^- conversion is speculated to provide the ultimate sensitivity to the CLFV process in the long term since, unlike μ^+→ e^+γ and μ→ e^+e^-e^+ processes, it does not suffer from the accidental coincidence background at high muon rates. Additionally, since the muon interacts with quarks in a nucleus, the conversion rate depends on the target nucleus and is model-dependent.
Exceptional experimental progress has been made in the last decade, enabling upcoming experiments like Mu2e <cit.> and COMET <cit.> to improve the sensitivity of the μ^-→ e^- conversion
by four orders of magnitude. This is enabled by the use of a pulsed beam, a novel muon beamline with a graded magnetic field, and state-of-the-art low-mass tracking detectors that let the experiments achieve excellent momentum resolution better than 0.2% <cit.>. The excellent momentum resolution is critical for higher sensitivity since the DIO electrons constitute an intrinsic background that scales with the muon beam intensity. In the endpoint region, the DIO rate varies
as (E_μ e - E)^5 <cit.> and can only be suppressed with sufficient momentum resolution for the relativistic electron.
Current experiments commonly employ low-mass particle tracking detectors within a magnetic field to precisely track the trajectory of the relativistic electron emitted during conversion, facilitating its momentum measurement. However, the momentum resolution of present-day trackers is inherently limited by fluctuations in the energy loss in the tracking material. Ongoing efforts to further reduce the material budget of these detectors will likely push the current technologies to the limit <cit.>. Stochastic energy loss widens the conversion signal, necessitating experiments to integrate over a broader region and resulting in increased DIO background; this re-emphasizes the significance of minimizing energy loss and detector resolution.
We present a novel idea of using synchrotron radiation (SR) from the emitted electrons for energy reconstruction, eliminating the need for tracking material and minimizing the effect of energy loss on track reconstruction. Our proposed technique is fundamentally based on a non-destructive measurement of the electron's cyclotron frequency by projecting visible SR photons onto a photosensitive detector located on the inner surface of a solenoidal magnet. Precise measurements of times and positions of a set of stochastic photon hits on a two-dimensional cylindrical surface reconstructs the three-dimensional electron trajectories within the solenoidal volume, a technique akin to holography.
The method of non-destructive radiation spectroscopy, in spirit, is similar to the Project 8 experiment <cit.>, which uses Cyclotron Radiation Emission Spectroscopy (CRES) <cit.> for measuring low-energy electrons from β-decay. However, the implementation of our technique diverges significantly from Project 8, as discussed in the following sections.
§ PROPOSED EXPERIMENTAL APPROACH
Since Mu2e and COMET use Al(Z=13) as a stopping target, we will use E_μ e for muonic Al (E_μ e≈ 105 MeV) to elucidate our proposed technique. However, the method can be easily tuned for other nuclei with suitable changes in the experimental parameters, as shown below. The 105-MeV electrons are ultrarelativistic with a high Lorentz factor (γ≈ 205) and emit SR when subjected to acceleration in a magnetic field. The understanding of SR emitted by a single charged particle is well-established and is used extensively in scientific research. We will summarize the radiation characteristics and refer the readers to comprehensive and excellent textbooks for details <cit.>.
We consider hypothetical conversion electrons radially confined within a cylindrical volume permeated by a uniform axial magnetic field (B). These electrons follow helical trajectories along the magnetic field lines even at ultrarelativistic energies. However, their orbital angular frequencies (ω_L) are reduced due to the relativistic increase in their energy by a factor of γ (Eqn. <ref>).
ω_L = e B /γ m_e = ω_G/γ.
In this equation, ω_G = e B/m_e represents the non-relativistic cyclotron frequency of the electron. The SR emitted by these electrons exhibits a continuous spectrum, with power distributed across a wide spectrum of frequencies. The peak radiation power occurs near the critical frequency (ω_c) <cit.>, defined as:
ω_c = 3/2γ^3 ω_L = 3 e B sinθγ^2/2 m_e.
where θ denotes the pitch angle between the electron's velocity and the magnetic field B. The radiation is spread over a broad spectrum of frequencies around ω_c, with the total power following the distribution:
dP/dω = P_s/ω_c9 √(3)/8 πω/ω_c∫_ω/ω_c^∞ K_5/3(z) dz.
where K_5/3 is a modified Bessel function of the second kind of order 5/3.
The average rate of photons is given by
Ṅ = P_s/ħω.
Moreover, the angular distribution of the radiation power is highly directional, concentrated within a narrow cone of angle θ_RMS in the electron's orbital plane, and emitted predominantly in the direction of its motion.
θ_RMS∼1/√(2)γ(ω_c/ω)^0.4.
The SR features outlined above provide the foundation for our proposed experimental approach. The dominant frequency of the synchrotron radiation is enhanced by a factor of 3/2γ^3 compared to cyclotron radiation and results in a significant shift in the radiation spectrum in frequency for muonic conversion electrons; the dominant radiation frequency shifts by ∼8×10^6 for a 105-MeV electron. This shift dramatically alters the detector requirements needed to measure the radiation. For typical magnetic field strengths of 1–3 T, the critical frequency of a 105-MeV electron falls near the optical/UV range (Fig. <ref>), allowing the use of optical photodetectors for electron radiation measurement.
The highly directional nature of the emitted light allows for direct tracking of the electron's trajectory within a magnetic field in a vacuum environment, making a precise energy reconstruction feasible. This is a drastically different detection scheme from Project 8, where most of the emitted radiation from cyclotron motion is concentrated in a sharp, energy-dependent RF frequency band that can be precisely measured with resonant pickup. For SR, however, the power is spread over a range of harmonics of the electron's revolution frequency. Thus, the energies of the SR photons do not precisely encode the energies of the electrons. Instead, we focus on measuring the cyclotron frequency by correlating it to the temporal and spatial distribution of the radiation. Specifically, we propose using optical photodetectors to record individual photon hits rather than the average power distributed over the detector chamber. This is crucial because a 105-MeV conversion electron produces too few optical photons per revolution to yield a meaningful average power distribution (Fig. <ref>). Instead, we must analyze the track as a sparse collection of stochastic photon hits originating from the electron's trajectory. The electron's path is projected onto the detector surface, forming a characteristic "hologram" of its motion (Fig. <ref>).
While each photon hit does not directly measure the electron's path (as photons are emitted stochastically), the overall pattern of photon hits mirrors that of conventional particle trackers, allowing for similar analysis techniques (Fig. <ref>).
For 105-MeV electrons in a 2T magnetic field, the cyclotron frequency is f≈ 0.27 GHz, corresponding to a period of 3.7 nsec. The energy resolution is determined by the dwell time (length of the detector), the number of detected photons, and the time resolution for each photon hit. Recent advances in optical photon detection technologies make it possible to conceive of a detector with intrinsic energy resolution of better than 10^-3 (100 keV). For estimates of performance, we use the parameters demonstrated in modern photodetectors such as Large Area Picosecond Photo-Detectors (LAPPDsTM) which offer time resolution of <50 psec/pixel and pixel sizes of a few mm^2 <cit.>.
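As a quick check of these scales, the sketch below evaluates the orbital frequency and critical photon energy for a 105-MeV electron in a 2 T field; it only reproduces the formulas above using standard physical constants.

```python
import numpy as np
from scipy.constants import e, m_e, c, hbar

def sr_scales(E_MeV=105.0, B=2.0, pitch_deg=90.0):
    """Characteristic synchrotron-radiation scales for a conversion electron.
    Returns (gamma, orbital frequency [GHz], critical photon energy [eV])."""
    gamma = E_MeV * 1e6 * e / (m_e * c**2)      # Lorentz factor
    omega_G = e * B / m_e                       # non-relativistic cyclotron frequency
    omega_L = omega_G / gamma                   # relativistic orbital frequency
    omega_c = 1.5 * gamma**2 * omega_G * np.sin(np.radians(pitch_deg))
    return gamma, omega_L / (2 * np.pi) / 1e9, hbar * omega_c / e

gamma, f_GHz, Ec_eV = sr_scales()
print(f"gamma ~ {gamma:.0f}, f ~ {f_GHz:.2f} GHz, E_c ~ {Ec_eV:.1f} eV")
```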
§ SIMULATION AND RECONSTRUCTION
In our simulation framework, we define the detector parameters encompassing their geometry, timing, spatial resolution, and quantum efficiency of the photodetector. These parameters define our virtual experimental setup. We then establish the initial conditions for the electrons, assuming an emission point within the target and setting the initial energy for signal electrons at 105 MeV. We model the electrons as being emitted isotropically from the stopping target, but a gradient magnetic field subsequently influences their trajectories. Moreover, we assign random emission times to the electrons.
Subsequently, we model the electron paths as helical trajectories within the magnetic field, ignoring radiation damping since the energy loss is negligible. A crucial aspect of our simulation is calculating the expected number of detected photons. This is achieved by integrating the product of the spectral rate (as depicted in Figure <ref>) and the photodetector's quantum efficiency over the optical band and then multiplying this result by the electron's dwell time within the detector. To account for the inherent statistical nature of photon detection, we sample a Poisson distribution to simulate the number of detected photons associated with each electron track and we randomize the photon emission times.
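A minimal sketch of this photon-count sampling step is shown below; the detected-photon rate and dwell time are placeholder numbers, with the spectral rate and quantum efficiency folded into a single effective rate.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_photon_times(rate_per_sec, dwell_time_s):
    """Draw the number of detected photons for one track from a Poisson
    distribution (effective rate times dwell time) and assign each photon a
    uniform random emission time within the dwell window."""
    n = rng.poisson(rate_per_sec * dwell_time_s)
    return np.sort(rng.uniform(0.0, dwell_time_s, size=n))

# Illustrative numbers: ~0.4 detected optical photons per ns, ~60 ns dwell time
times = sample_photon_times(rate_per_sec=0.4e9, dwell_time_s=60e-9)
print(len(times), "photons;", np.round(times[:5] * 1e9, 2), "ns ...")
```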
The next step in our simulation involves determining the directions of the generated photons. We achieve this by sampling the angular and spectral distribution formula for synchrotron radiation as shown below <cit.>.
dṅ/(dΩ dE) = (ṅ/E) · (4√3/(5π)) · (3E/(4E_c))^2/3 [ (γψ Ai((3E/(4E_c))^2/3(1 + γ^2ψ^2)))^2 + (Ai'((3E/(4E_c))^2/3(1 + γ^2ψ^2)))^2 ]
With the photon directions established, we then calculate their intersection points with the cylindrical detector, taking its geometry into account. We then introduce timing and position errors based on Gaussian distributions that reflect the detector's timing and spatial resolution. Additionally, we acknowledge the discretization of position errors due to the pixelation of the detector and justify its negligible impact compared to the broader angular distribution of the photons. The final stage of our simulation involves combining the detected photon hit locations and times from individual electron tracks to create a comprehensive representation of simultaneous tracks.
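The sketch below rejection-samples the out-of-plane emission angle ψ from the angular factor of this distribution at a fixed photon energy; sampling the full joint spectral-angular distribution and the subsequent intersection with the detector cylinder follow the same pattern. The energy ratio, angular window and envelope factor are illustrative choices.

```python
import numpy as np
from scipy.special import airy

rng = np.random.default_rng(7)

def sample_psi(E_over_Ec, gamma, n_samples=5):
    """Rejection-sample the out-of-plane emission angle psi from the angular
    factor of the synchrotron distribution at fixed photon energy E/E_c."""
    xi = (3.0 * E_over_Ec / 4.0) ** (2.0 / 3.0)

    def weight(psi):
        arg = xi * (1.0 + (gamma * psi) ** 2)
        Ai, Aip, _, _ = airy(arg)
        return (gamma * psi * Ai) ** 2 + Aip ** 2

    psi_max = 5.0 / gamma                              # emission confined to ~1/gamma
    grid = np.linspace(-psi_max, psi_max, 401)
    w_max = 1.05 * weight(grid).max()                  # envelope for rejection sampling
    out = []
    while len(out) < n_samples:
        psi = rng.uniform(-psi_max, psi_max)
        if rng.uniform(0.0, w_max) < weight(psi):
            out.append(psi)
    return np.array(out)

print(np.round(sample_psi(E_over_Ec=0.1, gamma=205) * 1e3, 3), "mrad")
```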
We initiate the reconstruction process by requiring a minimum number of detected photons. This strategic selection balances the desired reconstruction resolution against the percentage of retained signal tracks (efficiency). Once we identify the suitable tracks, we get an initial estimation of their emission time and pitch angle by performing a Hough transform <cit.> on the linear relationship between the z-position and time (t) for each track, followed by the application of DBSCAN (Density-Based Spatial Clustering of Applications with Noise) <cit.> to identify maxima in the resulting Hough space. This approach offers the advantages of being relatively resistant to random background photons and facilitating the separation of multiple tracks (Fig. <ref>).
The initial time is extracted from the x-intercept of the fitted line, while the pitch angle is derived from its slope using the relation
θ = arccos(dz/dt·1/β c) ≈arccos(dz/dt·1/c) .
We also quantify the success rate of this initial estimation and elaborate on how the presence of background photons influences it.
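A simplified single-track version of this seeding step is sketched below: a coarse Hough-style accumulator over slope and intercept of the z-t line, from which the emission time and pitch angle are read off. The bin ranges, road width and toy track parameters are assumptions; the actual analysis applies the Hough transform together with DBSCAN to handle multiple tracks and background.

```python
import numpy as np

C = 2.998e8  # speed of light [m/s]

def hough_seed(t_hits, z_hits, n_slope=200, n_t0=200):
    """Coarse Hough-style search for the line z = slope*(t - t0) that best matches
    the photon (t, z) hits; returns the seed emission time and pitch angle [deg]."""
    slopes = np.linspace(0.3 * C, 0.95 * C, n_slope)        # dz/dt = beta*c*cos(theta)
    t0s = np.linspace(t_hits.min() - 20e-9, t_hits.min(), n_t0)
    acc = np.zeros((n_slope, n_t0))
    for i, m in enumerate(slopes):
        for j, t0 in enumerate(t0s):
            resid = z_hits - m * (t_hits - t0)
            acc[i, j] = np.sum(np.abs(resid) < 0.02)        # 2 cm road width
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    pitch = np.degrees(np.arccos(slopes[i] / C))            # beta ~ 1 approximation
    return t0s[j], pitch

# Toy track: 60-degree pitch angle, emitted at t0 = 5 ns, with position/timing smearing
rng = np.random.default_rng(3)
t0_true, theta_true = 5e-9, np.radians(60.0)
t = t0_true + np.sort(rng.uniform(0, 40e-9, 15))
z = C * np.cos(theta_true) * (t - t0_true) + rng.normal(0, 3e-3, t.size)
t = t + rng.normal(0, 50e-12, t.size)
print(hough_seed(t, z))
```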
With initial estimates for the initial time and pitch angle, we employ a maximum likelihood fit to refine our reconstruction further.
We perform "toy" simulations to elucidate relations between energy resolution and experimental conditions.
First, the primary electrons are generated according to the specified momentum distributions (either a delta function for μ→ e conversion electrons, or Michel and DIO spectra). The constant magnetic field and small radiative losses (≪ 1%) allow us to treat electron motion as a simple helix. Photons are emitted randomly according to the synchrotron distributions and are detected by a cylindrical photodetector shell.
We use a simple model for the sensitive detector based on state-of-the-art technology. Large Area Picosecond Photo-Detectors (LAPPDsTM) are capable of capturing photon hits with the high spatial and temporal resolutions necessary to furnish a useful hologram. In particular, LAPPDs boast position resolutions ∼ 3 mm, timing resolutions ∼ 50 ps, dark count rates ≲ 1 kHz/cm^2, and quantum efficiencies up to 25% <cit.>. We model these responses as Gaussian.
For each simulated track, we reconstruct energies using the maximum likelihood method. The detection likelihood for each hit ℒ(θ,λ) is modeled with dependence on critical track parameters θ and photon nuisance parameters λ.
Then, the fit energy is taken to be that which maximizes the likelihood θ^*=argmax_θℒ(θ).
Since the likelihood function may contain several local minima, we use a multi-step seeding process. First, we take a Hough transform to fit the linear z vs t response (see Fig. <ref>). The fit x-intercept corresponds to the initial time when the electron is emitted, while the slope is related to the pitch angle (θ) by θ = arccos(slope/c). Next, using a precomputed grid of template tracks in energy and pitch angle, we
minimize the negative log likelihood interpolating between the nearest tracks.
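The sketch below illustrates this seed-then-refine strategy on a deliberately simplified likelihood in which only the azimuthal cyclotron phase of the hits is modelled, so that the track energy enters through the orbital frequency; the full fit uses the complete spatio-temporal hit model and the photon nuisance parameters. The field strength, angular resolution and grid ranges are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.constants import e, m_e

B = 2.0                     # axial field [T], illustrative
SIGMA_PHI = 0.02            # effective azimuthal hit resolution [rad], illustrative

def omega_L(E_MeV):
    """Relativistic orbital angular frequency for a given track energy."""
    return e * B / (m_e * (E_MeV / 0.511))

def nll(E_MeV, t_hits, phi_hits):
    """Wrapped-Gaussian NLL of the azimuthal hit phases for a candidate energy.
    The global phase offset is profiled out with a circular mean."""
    resid = phi_hits - omega_L(E_MeV) * t_hits
    phi0 = np.angle(np.mean(np.exp(1j * resid)))             # best-fit phase offset
    dphi = np.angle(np.exp(1j * (resid - phi0)))              # wrapped residuals
    return 0.5 * np.sum((dphi / SIGMA_PHI) ** 2)

# Toy hits from a 105 MeV track; measured times smeared at the 50 ps level
rng = np.random.default_rng(11)
t_true = np.sort(rng.uniform(0, 60e-9, 20))
phi = (0.3 + omega_L(105.0) * t_true + rng.normal(0, SIGMA_PHI, 20)) % (2 * np.pi)
t = t_true + rng.normal(0, 50e-12, 20)

# Coarse template grid in energy seeds a bounded local minimisation
Es = np.linspace(95.0, 115.0, 401)
E_seed = Es[np.argmin([nll(E, t, phi) for E in Es])]
fit = minimize_scalar(nll, bounds=(E_seed - 0.5, E_seed + 0.5),
                      args=(t, phi), method="bounded")
print(f"seed {E_seed:.2f} MeV, fit {fit.x:.3f} MeV")
```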
It is also important to note that the likelihood is affected both by detector resolutions and the geometric angular spread inherent in the synchrotron emissions.
To proceed with our analysis, we set the detector parameters to experimentally practical values: B=2T, the length of the detector solenoid L=10m, the gradient magnetic field with the target region at B=2.25T, and LAPPD photosensors.
Under these conditions, track photon counts vary drastically due to Poisson statistics and a strong dependence of dwell time on pitch angle. Low-count tracks reconstruct poorly, whereas high-count tracks reconstruct with better resolution, so the hit count constitutes an essential track quality measure.
With the goal of surpassing Mu2e's 𝒪(100 keV/c) momentum resolution, we require ≥10 hits to constrain the resolution while maintaining adequate acceptance efficiency (Fig. <ref>). Since the average photon count has a strong monotonic dependence on pitch angle, the photon count cut roughly corresponds to a minimum pitch-angle cut around 54^∘.
The use of a graded field maps some of the backward-moving decay electrons into this pitch-angle acceptance region, increasing the overall acceptance efficiency. With electron momenta fully randomized and photon count determined by the associated SR rate and Poisson distribution, we reconstruct simulated 105 MeV conversion electrons with a resolution of σ=52.4 keV±0.5 keV (FWHM ≈ 123 keV) at 67% post-cut efficiency, giving a total ± 3 σ reconstruction efficiency of approximately 38% (Fig. <ref>). Higher resolution (lower FWHM) is achievable by requiring more photon hits, though this leads to decreased efficiency as fewer events meet the stricter threshold (Fig. <ref>).
§ BACKGROUNDS
A very appealing feature of SR-based detection is that it is insensitive to any particle background other than electrons.
The primary background sources are DIO electrons, which can be separated based on their lower energies, and random hits due to the photodetector's dark rates. LAPPDs based on microchannel plates are relatively insensitive to direct hits by neutrons, protons, and X-rays.
In the first step of reconstruction, we use a Hough transform to fit the initial slope and time. Introducing background, we can use a clustering method such as DBSCAN to find all tracks in the data. This method shows promising robustness against background hits and good potential for accurate track separation, achieving success rates exceeding 50% in our initial testing. Further yield improvements may be possible by implementing a likelihood-based approach.
The dark rate hits are uncorrelated and do not form tracks; their effect on energy resolution is negligible.
§ CONCLUSION AND FUTURE OUTLOOK
A typical energy resolution for the holographic synchrotron radiation detector is shown in Fig. <ref>. With a cut on the number of photons of N_γ>14, we project an energy resolution (Gaussian σ) of 52 keV and a selection efficiency of 38%. This performance exceeds that of the current Mu2e detector and is adequate for the next-generation experiment Mu2e-II <cit.>.
We note that the HSRES technique offers a number of advantages. First, since it is relatively insensitive to the non-relativistic particles, it can tolerate the beam-related backgrounds generated during the beam "flash". Therefore, this technique could open a window to explore heavy stopping targets such as Au, which correspond to short muon capture lifetimes. In addition, placing the HSRES detector in a relatively high magnetic field of 2 T may allow a conventional tracker-calorimeter detector similar to Mu2e to be located downstream in a lower magnetic field region. Thus, the HSRES technique is compatible with conventional tracking detectors. The use of both would allow additional background rejection capabilities, improve the combined energy resolution, and allow a robust identification of the signal in case of discovery.
§ ACKNOWLEDGEMENTS
The authors would like to thank the Project 8 collaboration for the inspiration, Elise Novitski for technical discussions regarding the CRES technique, and Marjorie Shapiro for asking whether the technique could be applied to Mu2e and stimulating this development. We are indebted to the Mu2e collaboration for making the concept of late Vladimir Lobashev a reality and motivating us to pursue it further. This work was supported by the US Department of Energy (DOE) Office of High Energy Physics under Contract No. DE-SC0018988, and by the Physics Department at the University of California, Berkeley. This research used the resources of the National Energy Research Scientific Computing Center (NERSC).
|
http://arxiv.org/abs/2409.03658v1 | 20240905161140 | A DNN Biophysics Model with Topological and Electrostatic Features | [
"Elyssa Sliheet",
"Md Abu Talha",
"Weihua Geng"
] | cs.LG | [
"cs.LG",
"math-ph",
"math.MP"
] |
smu]Elyssa Sliheet
[email protected]
smu]Md Abu Talha
[email protected]
smu]Weihua Gengcor1
[email protected]
[cor1]Corresponding author
[smu]Department of Mathematics, Southern Methodist University, Dallas, TX 75275 USA
§ ABSTRACT
In this project, we provide a deep-learning neural network (DNN) based biophysics model to predict protein properties.
The model uses multi-scale and uniform topological and electrostatic features generated with protein structural information and force field, which governs the molecular mechanics.
The topological features are generated using element-specific persistent homology (ESPH), while the electrostatic features are computed efficiently using a Cartesian treecode.
These features are uniform in number for proteins with various sizes thus the broadly available protein structure database can be used in training the network.
These features are also multi-scale thus the resolution and computational cost can be balanced by the users.
The machine learning simulation on over 4000 protein structures
shows the efficiency and fidelity of these features in representing the protein structure and force field for the prediction of their biophysical properties such as the electrostatic solvation energy.
Tests on topological or electrostatic features alone and the combination of both showed the optimal performance when both features are used.
This model shows its potential as a general tool in assisting biophysical properties and function prediction for the broad biomolecules using data from both theoretical computing and experiments.
Electrostatics;
Poisson-Boltzmann;
Interface methods;
treecode;
multipole methods
A DNN Biophysics Model with
Topological and Electrostatic Features
[
September 9, 2024
=====================================================================
Highlights
- Multiscale and Uniform Features
- High efficiency using the MtM Translation
- Verified using GB and PB Data
- DNN Topological Features + Electrostatic Features
Assignment
Elyssa & Weihua: Algorithm and Coding for the electrostatic features
Elyssa: Verify the features using GB and PB Solvation Energy Data
Elyssa: Repeat the topological DNN and add the electrostatic features for simulation
§ INTRODUCTION
One of the overarching themes of biology is that structure determines function.
§ MODELS AND ALGORITHMS
We introduce the theories and algorithms involved in this work. First, in a comparative fashion, we introduce the two most popular implicit solvation models, the Poisson-Boltzmann model and the Generalized Born model, which are used to generate the core biological property of our concern: the electrostatic solvation energy, defined as the energy required to move the solute from vacuum into the solvent. Following that, we introduce the persistent homology, which produces the topological features. Finally
we introduce the Cartesian treecode, whose modification yields the algorithms we use to generate the electrostatic features.
§.§ The Poisson-Boltzmann model
Figure 1 depicts the popular implicit solvent models.
In Fig. 1(a) a protein is represented by a collection of N_c spherical atoms with centered partial charges.
The molecular surface Γ (also known as the solvent excluded surface)
is defined by the trace of a water molecule represented by a red sphere
rolling in contact with the protein atoms. The Poisson-Boltzmann model is shown in Fig. 1(b), where
the molecular surface Γ divides the entire computational domain Ω into
the protein domain Ω_1 with dielectric constant ε_1 and atomic charges q_k located at x_k, k=1 : N_c,
and the solvent domain Ω_2 with dielectric constant ε_2 and dissolved salt ions <cit.>.
Assuming a Boltzmann distribution for the ion concentration,
and
considering the case of two ion species with equal and opposite charges
(e.g. Na^+, Cl^-),
in the limit of weak electrostatic potential
one obtains the linearized Poisson-Boltzmann (PB) model <cit.>,
ϵ_1∇^2ϕ_1( x) =
-∑_k=1^N_c q_k δ( x- x_k), x∈Ω_1,
ϵ_2∇^2ϕ_2( x) =
κ^2ϕ_2( x), x∈Ω_2,
ϕ_1( x) = ϕ_2( x), ϵ_1∂ϕ_1/∂ n( x) = ϵ_2∂ϕ_2/∂ n( x),
x∈Γ,
where κ is the inverse Debye length measuring the salt concentration,
and
the potential satisfies a zero far-field boundary condition.
The PB model governs the electrostatic potential ϕ in the entire space. Theoretically, after ϕ
is obtained, its gradient produces the electrostatic field while its integral generates the potential energy. However, there are many challenging issues in properly obtaining the field and energy (e.g., the definition of the field on the molecular surface Γ <cit.>). Our attention in this project is on the energy side, as described below.
The electrostatic potential energy is given as
E = 1/2∫_Ωρ( x) ϕ( x) d x = 1/2∑_k=1^N_c q_kϕ( x_k) = 1/2∑_k=1^N_c q_k(ϕ_reac( x_k)+ϕ_coul( x_k)) = E_solv+E_coul
where ρ( x) = ∑_k=1^N_c q_k (x-x_k) is the charge density as a sum of partial charges weighted delta function and the E_solv = 1/2∑_k=1^N_c q_k ϕ_reac( x_k) term is the solvation energy, the energy it takes for the protein to solvate from the vacuum to the solvent. The ϕ_reac is the reaction potential as the remaining component when Coulomb potential ϕ_coul is taken away from the total electrostatic potential ϕ.
Solving the PB model numerically
by grid-based methods is challenging because
(1) the protein is represented by singular point charges,
(2) the molecular surface is geometrically complex,
(3) the dielectric constant is discontinuous across the surface,
(4) the domain is unbounded.
To overcome these numerical difficulties,
several finite difference interface methods have been developed.
By assuming that the interface is aligned with a mesh line,
a jump condition capture scheme has been developed in <cit.>.
Based on a Cartesian grid, the immersed interface method (IIM) <cit.>
has been applied to solve the PB equation
<cit.>, in which the jump conditions can be rigorously enforced based on Taylor expansions.
For the purpose of dealing with arbitrarily shaped dielectric interfaces based
on a simple Cartesian grid,
a matched interface and boundary (MIB) PB solver
<cit.>
has been developed through rigorous treatments of geometrical and charge singularities.
Boundary element methods (BEM) for the PB model were developed
later <cit.>
with several inherent advantages,
(1) only the molecular surface is discretized rather than the
entire solute/solvent volume,
(2) the atomic charges are treated analytically,
(3) the interface conditions are accurately enforced,
(4) the far-field boundary condition is imposed analytically.
In the original BEMs, these advantages were offset by
the high cost of evaluating the interactions among the elements,
but fast summation schemes have been developed to reduce the
cost <cit.>
and
in our previous work we employed a O(N log N) Cartesian treecode <cit.> and later a O(N) Cartesian Fast Multipole Method (FMM) <cit.> for this purpose.
In developing these boundary integral PB solvers, we investigated and resolved many interesting numerical challenges, e.g., preconditioning of the matrix whose condition number increases when the triangulation quality is reduced, parallelization of the treecode algorithm using MPI <cit.>, and parallelization of the boundary integral PB solver on GPUs <cit.>. The developed solvers can efficiently produce electrostatic potentials, which can be further used to compute protein properties such as binding energy <cit.> and pKa values <cit.>.
§.§ Generalized Born Model
The solvation energy can also be efficiently computed using the Generalized Born (GB) method, which is an approximation to the Poisson model without considering the ions.
This model reduces the computational complexity of the PB model, whose solution requires solving a 3D partial differential equation.
To derive the GB model,
we first adopt some classic results for electrostatics from Jackson's textbook <cit.>. Using the analogue from a discrete point charge distribution to a continuous charge density distribution, the potential energy of a charged system with density distribution ρ( r) takes the form
W = 1/2∫ρ( r) ϕ( r) d r = 1/2∫ E( r) · D( r) d r
where the electric field E = -∇ϕ, and electric displacement D=ϵ E.
Now consider assembling a charge q_i at the center (the origin) of a sphere with radius a_i. The sphere separates the domain into ϵ_in for r < a_i and ϵ_out for r > a_i. The assembly takes the energy
G_i = 1/8π∫ D· D/ϵ d r ≈ 1/8π∫_r<a_i q_i^2/(ϵ_in r^4) d r + 1/8π∫_r>a_i q_i^2/(ϵ_out r^4) d r
where the Coulomb field approximation D = q r/r^3 is used.
The solvation energy is the energy difference when ϵ for r>a_i changes from ϵ_in (unsolvated) to ϵ_out (solvated) given as
Δ G_solv,i = 1/8π(1/ϵ_out - 1/ϵ_in ) ∫_r>a_iq_i^2/r^4dx
= (1/ϵ_out - 1/ϵ_in) q_i^2/2a_i.
This result is consistent with the Poisson-Boltzmann model of a solvated spherical cavity with a centered charge, as summarized in <cit.>, which is a special case of Kirkwood's derivation of a series of spherical harmonics for a spherical cavity containing arbitrary multiple charges <cit.>.
If we treat the molecule as a collection of spherical atoms, using Eq. (<ref>) the total solvation energy is
Δ G_solv =1/2(1/ϵ_out-1/ϵ_in)( ∑_i=1^N q_i^2/a_i + ∑_j ≠ i^N q_i q_j/r_ij)
≈1/2 (1/ϵ_out-1/ϵ_in) ∑_i=1^N ∑_j=1^Nq_i q_j/f_ij^GB
where the first equation has two terms: a sum of individual Born terms and pairwise Coulombic terms.
Note the actual molecule is better represented by a dielectric interface
(e.g. solvent excluded surface (SES)) which separates inside domain Ω_in and outside domain Ω_out. The charge q_i is located at the center r_i of a sphere with radius a_i. We assume Ω_in contains all these spheres. Thus the model using dielectric interface requires a step further as been approximated in the second equation in Eq. (<ref>).
Here f_ij^GB is the effective Born radius (i=j) or the effective interaction distance (i ≠ j).
Assuming the Born Radii R_i's are obtained, a popular estimation of
f_ij^GB is:
f_ij^GB(r_ij) = (r_ij^2 +R_iR_j e^-r_ij^2/4R_iR_j)^1/2
where r_ij is the distance between the atomic centers of atoms i and j.
Note R_i depends not only on a_i, but also on radii and relative positions of all other atoms.
The estimation of effective Born radii is an active research area. From Poisson-Boltzmann theory, the perfect Born radii is given as
R_i = 1/2 (1/ϵ_out-1/ϵ_in) q_i^2/Δ G_PB,i
with Δ G_PB,i as the solvation energy from the PB model with all except ith charge muted.
There are many approaches to approximate R_i and a typical one is the volume integration via quadrature as
R_i^-1 = 1/4π∫_Ω_out 1/r^4 d r = 1/a_i - 1/4π∫_Ω_in/B( r_i, a_i) 1/r^4 d r, where r = | r - r_i|.
Computing R_i using FFT brings the computational cost of the solvation energy to O(n^3log n+N+N^2) <cit.>, compared with the cost of solving the PB model of O(n^6+Nn^2+N) with a finite difference method (iterative solver, matrix structure not considered), where n is the number of grid points in the x, y, and z directions, assuming a cube-like computational domain.
As shown in Fig. 1(c), we define the perfect Born radius of the atom centered at x_k as the radius of the red sphere that makes the solvation energy of the red sphere
equal to the solvation energy of the protein
when there is only one charge q_k located at x_k for both the sphere and the protein.
The GB model calculates the solvation energy as a sum of interactions between spherical atoms of
Born radii R_k for k=1,⋯,N_c
E_solv = 1/2(1/ϵ_1-1/ϵ_2)∑_i,j=1^N_cq_i q_j/f_ij
with effective interaction distance
f_ij = √(r_ij^2+R_iR_jexp(-r^2_ij/4R_iR_j)).
Approximating R_k efficiently and accurately is an active research topic, aiming to obtain the perfect Born radii rapidly. We briefly mention the GB model here because of its efficiency in computing the solvation energy compared with the cost of solving the PB model. In particular, in the machine learning model, quantities produced by GB can be a great choice of features when PB quantities need to be predicted quickly <cit.>.
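To make the GB energy expression concrete, the following minimal Python sketch evaluates Eq. <ref> with the effective interaction distance f_ij^GB of Eq. <ref>. The function name, the use of NumPy, and the assumption that the effective Born radii R_i are already available are ours and not part of the original pipeline; unit conversion factors (e.g., to kcal/mol) are omitted.

import numpy as np

def gb_solvation_energy(q, pos, R, eps_in=1.0, eps_out=80.0):
    """Generalized Born solvation energy, Eq. <ref>.

    q   : (N,) partial charges
    pos : (N, 3) atomic positions
    R   : (N,) effective Born radii (assumed precomputed)
    """
    # pairwise squared distances r_ij^2 (r_ii = 0 on the diagonal)
    d = pos[:, None, :] - pos[None, :, :]
    r2 = np.sum(d * d, axis=-1)
    RiRj = np.outer(R, R)
    # effective distance f_ij = sqrt(r_ij^2 + R_i R_j exp(-r_ij^2 / (4 R_i R_j)))
    f = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
    # prefactor (1/eps_out - 1/eps_in) is negative for eps_out > eps_in
    pref = 0.5 * (1.0 / eps_out - 1.0 / eps_in)
    return pref * np.sum(np.outer(q, q) / f)

Note that the diagonal terms (r_ii = 0, f_ii = R_i) reduce to the single-sphere Born self-energies, while the off-diagonal terms give the screened pairwise contributions.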
§.§ Topological Features
The fundamental task of topological data analysis is to extract topological invariants as the intrinsic features of the underlying space.
For the prediction of protein properties,
we expect the topological invariants such as
independent components,
rings,
cavities, etc.
carry useful information
which cannot be discovered by algebraic or geometric means.
In addition, topological invariants in a discrete data set
can be studied using simplicial homology which uses a specific rule to identify simplicial complexes from simplexes.
Here the simplex represents the simplest possible polytope
in any given dimension like point,
line segment,
triangle,
tetrahedron, etc.
Furthermore, filtration and persistent homology
can identify and connect complexes at different levels of complexity
and track the appearance and disappearance of homology groups.
This brings us an analog to the four levels of structure of proteins
(Primary, Secondary, Tertiary, and Quaternary)
and the change in protein structure during its folding pathway, as well as
binding affinity at different location and orientation.
To make the topological features uniform and physically informed,
we choose to use Element Specific Persistent Homology (ESPH)
to extract topological features at different levels of complexity.
An example of element specificity is that we can use the collection of vertices V_𝒫_k for the unit 𝒫_k in 𝒫={ CC, CN,CO,CS,CH, NN,⋯,SS,SH,HH} as mentioned earlier to generate topological features.
ESPH has been used for successfully predicting protein-ligand binding <cit.>. Once the set is determined and numerically calculated, the barcode can be generated using software GUDHI <cit.>, followed by the algorithm below to generate the vector of topological features.
We define the collection of barcodes as 𝔹(α, 𝒞, 𝒟), where
α : atom labels (i.e., protein, ligand, mutated residue)
𝒞 : type of simplicial complex (i.e., Rips or Čech)
𝒟 : dimension (i.e., Betti-0, Betti-1, etc.)
Using the collection, the structured vectors V^b, V^d, and V^p can be constructed to respectively describe the birth, death, and persistent patterns of the barcodes in various spatial dimensions. Practically, the filtration interval [0, L] is divided into n equal length subintervals and the patterns are characterized on each subinterval. The description vectors are defined as:
V^b = || {(b_j, d_j) ∈𝔹(α, 𝒞, 𝒟) | (i-1)L/n ≤ b_j ≤ iL/n } ||, 1≤ i < n,
V^d = || {(b_j, d_j) ∈𝔹(α, 𝒞, 𝒟) | (i-1)L/n ≤ d_j ≤ iL/n } ||, 1≤ i < n,
V^p = || {(b_j, d_j) ∈𝔹(α, 𝒞, 𝒟) | (i-1)L/n ≥ b_j, iL/n ≤ d_j } ||, 1≤ i < n,
These vectors can be viewed as (1D) images. Each pixel is associated with m channels that describe different element types, mutation status, topological dimension, and topological event (birth or death).
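As an illustration of how the description vectors are assembled, the following sketch (our own, with hypothetical function names) counts births, deaths, and persisting bars of a single barcode 𝔹(α, 𝒞, 𝒟) on the n subintervals of [0, L]:

import numpy as np

def barcode_to_vectors(bars, L, n):
    """Bin a barcode into birth/death/persistence count vectors, Eq. <ref>.

    bars : list of (b_j, d_j) pairs for one atom set, complex type and dimension
    L    : filtration length; the interval [0, L] is split into n equal bins
    """
    Vb = np.zeros(n, dtype=int)
    Vd = np.zeros(n, dtype=int)
    Vp = np.zeros(n, dtype=int)
    edges = np.linspace(0.0, L, n + 1)
    for b, d in bars:
        for i in range(n):
            lo, hi = edges[i], edges[i + 1]
            if lo <= b <= hi:
                Vb[i] += 1          # a bar is born in bin i
            if lo <= d <= hi:
                Vd[i] += 1          # a bar dies in bin i
            if b <= lo and d >= hi:
                Vp[i] += 1          # a bar persists across the whole bin i
    return Vb, Vd, Vp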
§.§ Electrostatic Features
§.§.§ General introduction
In the majority of molecular simulations, including Monte Carlo simulation, Brownian dynamics, molecular dynamics, etc., the electrostatic interactions are characterized by the interactions between the partial charges assigned at the atomic centers by the force field, which are determined by experiment or quantum chemistry.
The algorithm for obtaining the multi-scale, physics-informed, uniform electrostatic features is explained in Fig.
<ref>. In short, the electrostatics of the protein, from q and ϕ_reac at the charge locations, will be represented by point-multipoles, whose moments will be calculated using the treecode or FMM. To understand the point-multipoles, we provide some explanations below.
Consider a protein with N_c atoms and its multi-scale N_d point multipole representation as shown in Fig. <ref> for our machine learning model. For n=1,⋯, N_c, at the center of the nth atom, i.e., r_n=(x_n,y_n,z_n),
the nth permanent order 2 (for example) multipole M^n consists of 13 components:
M^n = [ q^n, d^n_x, d^n_y, d^n_z, Q^n_xx, Q^n_xy, …, Q^n_zz]^T, where q, d_i, Q_ij for i,j=1,2,3 are the moments of the monopole, dipole, and quadrupole in suffix notation.
Using this notation, the permanent charge at r_n can be written as <cit.>
ρ^n( r)=q^nδ( r- r_n)+d_i^n∂_iδ( r- r_n)
+Q^n_ij∂_ijδ( r- r_n),
A key idea is that the Coulomb potential G^n
governed by the Gauss's law - Δ G^n = 4 πρ^n in the free space is expressed in terms of the Green's function
G^n( r)=1/| r- r_n|q^n + r_i - r_n,i/| r- r_n|^3d_i^n
+ (r_i - r_n,i)(r_j - r_n,j)/2| r- r_n|^5 Q^n_ij.
For all permanent multipoles M = [ M^1, M^2, …, M^N_d]^T, the total Coulomb potential is additive such as
G^ M( r) = ∑_n=1^N_d G^n( r) by the superposition principle.
For the point-multipole approach in the left picture of Fig <ref>, our goal is thus the computation of M accurately and efficiently.
In fact, the computational cost is O(N_c) using the strategies in <cit.>, i.e. the moments at the finest cluster are computed first and a M2M (moments to moments) transformation can be used to efficiently compute moments at any desired level. These moments are intrinsic properties of the cluster thus can serve as features for the protein, which carries simplified and important information. The number of features up to level L cluster is
N_f(p,L) = N_p(1+8+8^2+⋯ + 8^L) = N_p8^L+1-1/7
where N_p is the number of terms in multipole expansion with N_p=1,4,10,20,35,56 ⋯ as the sequence of
tetrahedral numbers
when pth order multipole is used.
Alternatively, for the Barycentric treecode approach in the right picture of Fig <ref>, the computational cost is O(N_clog N_c),
and the number of terms in the similar sense of N_p is c_p^3 where c_p is the number of Chebyshev nodes in one direction <cit.>.
§.§.§ Implementation details
1. The protein is represented by its atomic locations and partial charges: q_i( r_i) for i=1,⋯,N_c, e.g. the PQR file.
2. Two parameters p and L are given, where p is the order of the Cartesian Taylor expansion and L is the number of levels. With given p, we have the number of terms given as
N_p = (p+1)(p+2)(p+3)/6.
The number of features up to level L cluster is
N_f(p,L) = N_p(1+8+8^2+⋯ + 8^L) = N_p(8^(L+1)-1)/7=(p+1)(p+2)(p+3)(8^(L+1)-1)/42.
These N_f numbers are ordered by level from 0 to L and the coordinates of cluster centers in each level (more details to be given). This determines the dimension of our final feature vector F ∈ℝ^N_f.
The following table shows how N_f varies with p and L:

  p\L     0      1      2       3        4
  0       1      9      73      585      4681
  1       4      36     292     2340     18724
  2       10     90     730     5850     46810
  3       20     180    1460    11700    93620
  4       35     315    2555    20475    163835
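The table can be reproduced with a few lines of Python (the function names are ours):

def n_terms(p):
    """Number of Cartesian multipole terms up to order p (tetrahedral numbers)."""
    return (p + 1) * (p + 2) * (p + 3) // 6

def n_features(p, L):
    """Total number of electrostatic features for an octree with levels 0..L."""
    return n_terms(p) * (8 ** (L + 1) - 1) // 7

# reproduces the table above, e.g. n_features(2, 3) == 5850
for p in range(5):
    print([n_features(p, L) for L in range(5)])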
Steps:
1. Read in the particle locations and charges.
2. From the order p and level L, calculate the number of features N_f.
3. Order the particles.
4. Build the tree with L levels; note each cluster has variables of: (a sketch of steps 1-4 is given below)
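A minimal, unoptimized sketch of the steps above is given next. It evaluates the moments of every cluster directly rather than via the O(N_c) M2M translation used in practice; the function names and the uniform-octree construction are our own illustration.

import numpy as np
from itertools import product

def cluster_moments(q, pos, center, p):
    """Cartesian moments sum_n q_n (r_n - center)^alpha for |alpha| <= p."""
    d = pos - center
    return np.array([np.sum(q * d[:, 0]**i * d[:, 1]**j * d[:, 2]**k)
                     for i, j, k in product(range(p + 1), repeat=3)
                     if i + j + k <= p])

def octree_features(q, pos, p, L):
    """Concatenate the moments of every cluster of a uniform octree, levels 0..L.

    q   : (N_c,) partial charges (NumPy array)
    pos : (N_c, 3) atomic positions (NumPy array)
    Returns a feature vector of length n_features(p, L).
    """
    lo, hi = pos.min(axis=0), pos.max(axis=0)
    size = (hi - lo).max() + 1e-12           # edge length of the root cube
    feats = []
    for level in range(L + 1):
        h = size / 2**level                  # cluster edge length at this level
        for idx in product(range(2**level), repeat=3):
            c_lo = lo + h * np.array(idx)
            center = c_lo + 0.5 * h
            mask = np.all((pos >= c_lo) & (pos < c_lo + h), axis=1)
            feats.append(cluster_moments(q[mask], pos[mask], center, p))
    return np.concatenate(feats)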
§.§ The machine learning models
The proposed research has the following components.
(a) A DNN based machine learning model using physics informed and multi-aspect features. These features are in algebra, topology, geometry, and electrostatics
and are extracted using mathematical algorithms and their related software.
These features are multi-scale and uniform.
By multi-scale, we mean that, depending on the accuracy requirement
and the available computing power,
the number of features can be adjusted as needed.
By uniform, we mean that once the scale is set,
the number of features is fixed for proteins of different sizes.
(b) Algorithms to produce electrostatic features using
the partial charge distribution that is assigned by atomic location from X-ray or NMR and the force field.
We will consider both the pairwise Coulomb interaction between partial charges
and the reaction potentials output
from solving the Poisson-Boltzmann model.
(c) Biological applications of the designed machine learning model.
We start with applications with which we have experience, such as the electrostatic solvation energy;
free energy and binding energy when a protein and its ligand are both involved;
protein pKa prediction;
protein Monte Carlo simulation; and eventually protein molecular dynamics.
§.§ A DNN based machine learning model with algebraic, topological, geometric, and electrostatic features
The main goal of this project is to discover the hidden and useful information embedded
in the protein structural data from the protein data bank in a simple and abstract way.
The protein structure data and force field
include bonded and non-bonded interactions with which
the molecular simulation can be performed.
We categorize these information into algebraic, topological, geometric, and electrostatic features.
These four aspects are all important to the machine learning model.
Here we lightly touch the first three
and put our focus on the electrostatic features as our main research focus.
Based on our two decades of experience in developing numerical algorithms for electrostatic interactions,
our proposed algorithms are novel and practical, combining accuracy and efficiency.
The DNN-based machine learning model is shown in Fig. <ref>. It uses the available protein structural data <cit.>, force fields such as AMBER <cit.>, CHARMM <cit.>, AMOEBA <cit.>, etc., and known protein property repositories such as the PDBbind Database <cit.>, the Protein pKa Database <cit.>, etc., to mathematically generate algebraic, topological, geometric, and electrostatic features to train a learned model,
which is then used to predict unknown properties for proteins with available structures.
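As an illustration only, a minimal regression network of this kind could be set up as follows (Python/PyTorch; the layer sizes, optimizer settings, and the feature dimension are assumptions, not the architecture used in this work):

import torch
import torch.nn as nn

# Fully connected regressor from the concatenated topological + electrostatic
# feature vector to a scalar property such as the solvation energy.
n_features = 730            # e.g. electrostatic features with p = 2, L = 2
model = nn.Sequential(
    nn.Linear(n_features, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(X, y, epochs=200):
    """X: (n_proteins, n_features) features, y: (n_proteins,) solvation energies."""
    X = torch.as_tensor(X, dtype=torch.float32)
    y = torch.as_tensor(y, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(X).squeeze(-1)
        loss = loss_fn(pred, y)
        loss.backward()
        opt.step()
    return loss.item()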
§ RESULTS
§.§ Electrostatic Features Only
§.§ Topological Features Only
§.§ Electrostatic and Topological Features Combined
§ DATA AND SOFTWARE DISSEMINATION
Questions:
1. What's the purpose of Run_alpha_hydro()?
2. The reason that only 1000+ proteins can have both topological features and electrostatic features.
The 4000+ proteins with both PB and GB solvation energies computed have only protein.pqr files.
Solution: modify the files which generate topological features to only use pro.pqr file for atoms [C,N,O,S]
3. For all the performance parameters: MSE, PCC, R^2, MAPE, and the scatter plot, are these results from testing data?
4. How much difference will the simulations at different time have?
In the present work,
the selected 4294? protein structures are obtained from the PDBbind v2015 refined set and core set, and PDBbind v2018 refined set as the training set <cit.>. The collection has proteins sized from 997 to 27,713 atoms.
Data pre-processing is required before a PB solver can be called.
The protein structures in the original data set are protein-ligand complexes.
Missing atoms and side-chains are filled using the protein preparation wizard utility of the Schrodinger 2015-2 Suite with default parameter setting.
The Amber ff14SB general force field is applied for the atomic van der Waals radii and partial charges.
§.§ Topology Features Generation
The following software needs to be installed.
GUDHI: https://gudhi.inria.fr/index.html
Currently a protein .pdb file and a ligand .mol2 are required as the inputs for the generation of topological features.
1. Get_structure(pdb_file, mol2_file ,'complex.npz')
Find the atoms [C,N,O,S] in the protein within a cut-off distance of 50Å from the atoms [C,N,O,S,P,F,Cl,Br,I] in the ligand, and save all these atoms (positions and types) in the complex.npz file.
2. Run_alpha('complex.npz', 'protein.PH', 'ligand.PH', 'complex.PH')
Use the atom positions in the protein, ligand, and complex to create the Cech/Alpha complex in terms of the barcode of simplexes, and then convert the barcode information to Betti numbers.
3. Run_alpha_hydro('complex.npz', 'protein_C-C.PH', 'ligand_C-C.PH', 'complex_C-C.PH')
4. PrepareData('complex.npz', 'protein.PH', 'complex.PH', 'protein_C-C.PH', 'complex_C-C.PH', 'complex_digit.npy')
Turn the Betti numbers into topological features in the form of 1D image-like vectors. (A sketch of the persistence computation behind step 2 is given below.)
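For reference, a sketch of the persistence computation underlying step 2, using the GUDHI Python interface, is shown below. This is our own wrapper, not the project code; note that GUDHI's alpha-complex filtration values are squared circumradii.

import gudhi

def alpha_barcode(points, max_dim=2):
    """Compute the persistence barcode of the alpha complex of a point set.

    points : (N, 3) array of atomic coordinates (e.g. the [C,N,O,S] atoms
             extracted into complex.npz)
    """
    alpha = gudhi.AlphaComplex(points=points.tolist())
    st = alpha.create_simplex_tree()
    st.persistence()                       # computes all persistence pairs
    # barcodes per dimension: lists of (birth, death) pairs (Betti-0, Betti-1, ...)
    return {d: st.persistence_intervals_in_dimension(d) for d in range(max_dim + 1)}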
|
http://arxiv.org/abs/2409.02798v1 | 20240904151628 | Beam Breakup Instability Studies of Powerful Energy Recovery Linac for Experiments | [
"Sadiq Setiniyaz",
"R. Apsimon",
"P. H. Williams",
"C. Barbagallo",
"S. A. Bogacz",
"R. M. Bodenstei",
"K. Deitrick"
] | physics.acc-ph | [
"physics.acc-ph"
] |
[email protected]
Now at Center for Advanced Studies of Accelerators, Jefferson Lab, Newport News, USA
[email protected]
Engineering Department, Lancaster University, Lancaster, LA1 4YW, UK
Cockcroft Institute, Daresbury Laboratory, Warrington, WA4 4AD, UK
STFC Daresbury Laboratory & Cockcroft Institute, Warrington, WA4 4AD, UK
Laboratoire de Physique des 2 Infinis Irène Joliot-Curie (IJCLab), Orsay, France
The Paris-Saclay University, Gif-sur-Yvette, France
Now at CERN, Geneva, Switzerland
Center for Advanced Studies of Accelerators, Jefferson Lab, Newport News, USA
Center for Advanced Studies of Accelerators, Jefferson Lab, Newport News, USA
Center for Advanced Studies of Accelerators, Jefferson Lab, Newport News, USA
§ ABSTRACT
The maximum achievable beam current in an Energy Recovery Linac (ERL) is often constrained by Beam Breakup (BBU) instability. Our previous research highlighted that filling patterns have a substantial impact on BBU instabilities in multi-pass ERLs.
In this study, we extend our investigation to the 8-cavity model of the Powerful ERL for Experiment (PERLE). We evaluate its requirements for damping cavity Higher Order Modes (HOMs) and propose optimal filling patterns and bunch timing strategies.
Our findings reveal a significant new insight: while filling patterns are crucial, the timing of bunches also plays a critical role in mitigating HOM beam loading and BBU instability. This previously underestimated factor is essential for effective BBU control.
We estimated the PERLE threshold current using both analytical and numerical models, incorporating the designed PERLE HOM dampers. During manufacturing, HOM frequencies are expected to vary slightly, with an assumed relative RMS frequency jitter of 0.001 between cavities for the same HOM. Introducing this jitter into our models, we found that the dampers effectively suppressed BBU instability, achieving a threshold current an order of magnitude higher than the design requirement.
Our results offer new insights into ERL BBU beam dynamics and have important implications for the design of future ERLs.
Beam Breakup Instability Studies of
Powerful Energy Recovery Linac for Experiments
K. Deitrick
September 9, 2024
===================================================================================
§ INTRODUCTION
In the 2020 European Strategy for Particle Physics <cit.>, research and development in the field of superconducting Energy Recovery Linacs (ERLs) <cit.> was prioritized, owing to their anticipated crucial role in future particle physics applications. The PERLE (Powerful ERL Experiment) <cit.> is one of the suggested advanced ERL test facilities developed to assess potential options for a 50 GeV ERL, as proposed in the Large Hadron Electron Collider (LHeC) <cit.> and Future Circular Collider (FCC-eh) designs. Moreover, it functions as a base for dedicated experiments in nuclear and particle physics. PERLE's main objective is to investigate the operation of high current, continuous wave (CW), multi-pass systems utilizing superconducting cavities that operate at 802 MHz.
PERLE's remarkable capacity to handle beam power up to 10 MW and an operating (injection) current of 20 mA offers researchers a unique opportunity to conduct controlled Beam BreakUp (BBU) <cit.> studies relevant to next-generation multi-pass ERL designs. The maximum achievable beam current in an ERL is often constrained by the BBU instability. Recent research has demonstrated that the selection of filling patterns, which delineates the order of bunch injection into the ERL over subsequent turns, can significantly influence not only RF stability and cavity voltage <cit.> but also BBU instabilities in multi-pass ERLs <cit.>. However, the impact on BBU is somewhat more complex due to the asynchronous nature of the HOM mode relative to the beam, and the transcendental relationship between the HOM voltage and the bunch offsets.
Filling patterns describe the order in which bunches pass through the cavity in multi-turn ERLs. For example, the simplest filling pattern [1 2 3 4 5 6] describes a pattern where the first bunch is on its first turn, the second bunch is on its second turn, and so forth. Another filling pattern [1 4 2 5 3 6] describes a pattern where the first bunch is on its first turn, the second bunch is on its fourth turn, and so on. When the filling pattern does not change as bunches pass through the cavity, the pattern is referred to as “Sequence Preserving” (SP). More complicated filling patterns can be achieved by maneuvering bunch injection timing and re-combination schemes, which is beyond the scope of this paper. In this paper, we shall focus only on SP patterns.
The original optics for PERLE 2.0 <cit.> and subsequent PERLE 2.1 <cit.> features symmetric, by-design, multi-pass linacs, with minimized values of beta functions reaching about 10 meters at both linac ends. An important design choice was to maintain almost identical optics for all accelerating and decelerating passes (except for the first pass), as shown in Fig. <ref>.
Since PERLE is a 6-pass ERL, the current passing through the cavities during operation is 6 times the injected current. Therefore, for 20 mA injection, the current passing through the cavities is 0.12 A. Note that the threshold current in this paper refers to the current passing through the cavity.
§ MULTIPLE CHECKPOINT ANALYTICAL MODEL
The numerical simulation will be benchmarked against the multiple checkpoint analytical model, which was previously discussed in <cit.>. The threshold current I_th, λ of the mode number λ with angular frequency ω_λ is given by:
I_th, λ = -2E / [ e (R/Q)_λ Q_L, λ k_λ∑_j>i=1^N_c(E/E_j)(M^ij)_mn sin(ω_λ t_r^ij) ].
In this equation:
* e is the electron charge.
* k_λ is the wave number of the HOM.
* E is the energy of the beam in the recirculation arc.
* E_j denotes the beam energy at checkpoint j.
* (M^ij)_mn is an element of M^ij, the transfer matrix from the i^th checkpoint to the j^th checkpoint; mn equals to 12 for horizontal modes and 34 for vertical modes.
* t_r^ij is the time for particle travel between corresponding checkpoints.
* (R/Q)_λ is the shunt impedance of the HOM.
* Q_L, λ is the loaded quality factor of the HOM.
* N_c represents the total number of checkpoints, specifically positioned at the exits of the linacs.
It is crucial to note that this analytical model makes certain simplifications, particularly overlooking the impact of the filling pattern and bunch timing on the calculated threshold current. As highlighted in Ref. <cit.>, the bunch filling pattern exerts a significant influence on both the BBU and the threshold current.
Moreover, bunch timing is instrumental in determining the HOM arrival phase for succeeding particles and also has a big impact on BBU, which is elaborated upon in subsequent sections. Therefore, while the numerical and analytical models may not precisely coincide, a general level of agreement between them can still be expected.
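For completeness, Eq. <ref> can be transcribed directly into a short numerical routine (our own Python sketch; the variable names and the convention of expressing energies in eV, so that the elementary charge cancels, are assumptions):

import numpy as np

def threshold_current(E, E_j, R_over_Q, Q_L, f_hom, M_mn, t_r):
    """Analytical BBU threshold current of a single HOM, Eq. <ref>.

    E        : beam energy in the recirculation arc [eV]
    E_j      : (N_c,) beam energies at the checkpoints [eV]
    R_over_Q : shunt impedance (R/Q) of the mode [Ohm]
    Q_L      : loaded quality factor of the mode
    f_hom    : HOM frequency [Hz]
    M_mn     : (N_c, N_c) relevant transfer matrix element (12 or 34) between checkpoints
    t_r      : (N_c, N_c) travel times between checkpoints [s]
    """
    c = 299792458.0
    omega = 2.0 * np.pi * f_hom
    k = omega / c
    N_c = len(E_j)
    s = 0.0
    for i in range(N_c):
        for j in range(i + 1, N_c):          # sum over j > i
            s += (E / E_j[j]) * M_mn[i, j] * np.sin(omega * t_r[i, j])
    # energies in eV are numerically E/e in volts, so e does not appear explicitly
    return -2.0 * E / (R_over_Q * Q_L * k * s)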
§ CONSTRAINTS FOR PERLE FILLING PATTERNS
In a multi-turn ERL, as discussed in previous studies <cit.>, depending on the topology of the beam line, there can be 1, N/2 or N arcs on each side of the ring; where N is the number of turns each bunch completes in the ERL between injection and extraction. In many situations, the preferred option is to opt for N/2 arcs as this avoids the inherent complexity of an FFA-type arc design with a very high energy acceptance, while also reducing the complexity and capital cost associated with N arcs on each side of the ring. In the case of PERLE, there are several key constraints placed on the injection filling pattern of the bunches in the ring:
* Bunches should be as close to evenly spaced as possible to minimize collective effects
* The beam line should be configured for a sequence preserving (SP) scheme, in order to ensure regular injection timing
From the first constraint, this implies that we need a filling pattern such that accelerating and decelerating bunches alternate. From previous work <cit.>, it is known that every filling pattern has an associated SP transition set. A filling pattern is defined as the RF cycle or bucket that is occupied by the bunch on each turn and is represented by a row vector, where the index is the turn number and the value is the RF cycle/bucket number. A transition set is a row vector that shows how many RF cycles/buckets a bunch on turn j shifts when it starts turn (j+1). For a sequence preserving scheme, we require that on each turn, the bunch on its j^th turn occupies RF cycle number F_j. Therefore, after each turn, the bunch on turn j shifts from RF cycle F_j to F_(j+1), so the filling pattern and transition set for an SP scheme are related as:
[ F=[F_1 F_2 ⋯ F_N]; T= {(F_2-F_1) (F_3-F_2) ⋯ (F_N-F_1)} ]
PERLE is a 6-turn ERL and the bunch train is 20 RF cycles long. Usually, we would define the filling pattern as the RF bucket occupied by a bunch on turn j, however in this case, it is more useful to define it as the RF cycle modulo 20. It is also useful to note that without loss of generality, we can define that the bunch on turn 1 is in RF cycle 1. For the PERLE filling pattern, there are a total of 12 filling patterns out of 120 unique patterns which meet the constraints. These are summarized in Table <ref>.
Given the bijective relationship that this has to the SP transition sets, we essentially know the total length of each turn modulo 20. From this, we will infer the arc lengths. To begin with, we shall define a few conventions. The beam is injected in the North straight section, just upstream of the North linac. The beam travels in a clockwise direction, passing through the East arc section, followed by the South straight section with the South linac, and finally the West arc section to complete a full turn of the ring. The beam is extracted in the South straight section, just downstream of the South linac as shown in the diagram in Fig. <ref>. The straight sections are assumed to be an equal length, L, and the arc lengths are defined as A_n, where A_1, A_3 and A_5 are the East arcs and A_2, A_4 and A_6 are the West arcs. A_0 is a time delay between extraction of the bunch on turn 6 and the injection of a new bunch. The lengths of each turn, with respect to the injection point in the North straight is given as:
[ T_N1 = 2L + A_1 + A_2 = 20m_N1 + F_N2 - F_N1; T_N2 = 2L + A_3 + A_4 = 20m_N2 + F_N3 - F_N2; T_N3 = 2L + A_5 + A_6 = 20m_N3 + F_N4 - F_N3; T_N4 = 2L + A_5 + A_4 = 20m_N4 + F_N5 - F_N4; T_N5 = 2L + A_3 + A_2 = 20m_N5 + F_N6 - F_N5; T_N6 = 2L + A_1 + A_0 = 20m_N6 + F_N1 - F_N6 ]
The denotation of F_Ni is to represent the filling pattern in the North straight. If we assume that we know the length of the straights and we define one arc length, then we can define all other arc lengths in terms of it. In this example, we will assume that the length of Arc 6 is known:
[ A_0≡(A_6+2F_N1-2F_N4)mod(20); A_1≡(-2L-A_6-F_N1-F_N6+2F_N4)mod(20); A_2≡(A_6+F_N2+F_N6-2F_N4)mod(20); A_3≡(-2L-A_6-F_N2-F_N5+2F_N4)mod(20); A_4≡(A_6+F_N3+F_N5-2F_N4)mod(20); A_5≡(-2L-A_6+F_N4-F_N3)mod(20) ]
Having determined the arc lengths, we can now look at the resulting filling patterns in the South straight as this is the North filling pattern, plus a straight length and an arc length:
[ F_S1 = F_N1 + L + A_1; F_S2 = F_N2 + L + A_3; F_S3 = F_N3 + L + A_5; F_S4 = F_N4 + L + A_5; F_S5 = F_N5 + L + A_3; F_S6 = F_N6 + L + A_1 ]
We can now substitute in the relevant arc lengths to obtain:
[ F_S1 = (2F_N4 - L - A_6) - F_N6; F_S2 = (2F_N4 - L - A_6) - F_N5; F_S3 = (2F_N4 - L - A_6) - F_N4; F_S4 = (2F_N4 - L - A_6) - F_N3; F_S5 = (2F_N4 - L - A_6) - F_N2; F_S6 = (2F_N4 - L - A_6) - F_N1 ]
However, we can add or subtract a constant from the filling pattern, as this is simply equivalent to re-indexing the RF cycle numbers. We therefore obtain that the North and South filling patterns must be related as F_S≡-F_N^m, where F^m denotes that the order of the filling pattern elements is flipped. This result shows that the filling patterns in the North and South straights are in general different.
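The modular relations of Eq. <ref> and the F_S ≡ -F_N^m result can be transcribed directly (a Python sketch with our own function names; all lengths are expressed in RF cycles modulo the 20-cycle train):

def arc_lengths(F_N, L, A6, n_rf=20):
    """Arc lengths (mod n_rf RF cycles) implied by a North filling pattern, Eq. <ref>.

    F_N : 6-element North filling pattern (time convention)
    L   : straight-section length in RF cycles
    A6  : assumed length of Arc 6 in RF cycles
    """
    F1, F2, F3, F4, F5, F6 = F_N
    A0 = (A6 + 2 * F1 - 2 * F4) % n_rf
    A1 = (-2 * L - A6 - F1 - F6 + 2 * F4) % n_rf
    A2 = (A6 + F2 + F6 - 2 * F4) % n_rf
    A3 = (-2 * L - A6 - F2 - F5 + 2 * F4) % n_rf
    A4 = (A6 + F3 + F5 - 2 * F4) % n_rf
    A5 = (-2 * L - A6 + F4 - F3) % n_rf
    return [A0, A1, A2, A3, A4, A5, A6 % n_rf]

def south_pattern(F_N, n_rf=20):
    """South filling pattern F_S = -F_N^m: elements reversed and negated, mod n_rf."""
    return [(-f) % n_rf for f in reversed(F_N)]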
§ BUNCH TIMING AND FILLING PATTERNS FOR PERLE
We have examined six different bunch timings, which are detailed in Fig. <ref>. Each timing ID represents a unique pattern of bunch spacing in terms of the fundamental RF period T_RF, approximately 1.25 ns. For instance, in Timing ID 1, the first bunch occurs at T_RF = 0, and the next bunch appears 2.5 T_RF later. Integer multiples of T_RF indicate acceleration, while half-integer multiples suggest deceleration. A single bunch train spans 0 - 20 T_RF, and the subsequent train begins at 20 T_RF. If we put two trains together, for example in case of timing ID 1, it would have timing of [0 2.5 6 9.5 13 16.5 20 22.5 26 29.5 33 36.5].
Table <ref> enumerates the feasible combinations of timing and filling patterns for the North and South linacs. In each entry, the first number represents the timing ID (or filling pattern) for the North linac, and the second pertains to the South linac. It is notable that the timing often differs between the North and South linacs, resulting in variable relative bunch spacing.
Out of 12 permissible filling pattern combinations, six exhibit identical patterns in both North and South linacs, while the remaining six differ. Each pattern combination comprises two numbers: the filling pattern in the North linac and that in the South linac. For conciseness, we refer to 120 filling patterns in the 6-turn ERL by their filling pattern number, which is given as follows:
Pattern 1: [1, 2, 3, 4, 5, 6],
Pattern 2: [1, 2, 3, 4, 6, 5],
⋮
Pattern 120: [1, 6, 5, 4, 3, 2].
It is crucial to distinguish between two conventions for describing filling patterns: the space convention and the time convention.
* Space Convention: In this convention, the numerical value represents the turn number, and its position within the array (i.e., its index) indicates its physical location in the bunch train (i.e., its RF bucket number).
* Time Convention: Conversely, in this convention, the numerical value signifies its physical order within the bunch train (i.e., its RF bucket number), and its position (i.e., its index) represents the turn number.
To put it simply, if the index corresponds to the physical location in the bunch train, then the “space convention” is being used. If the index denotes the turn number, then the “time convention” applies.
For instance, consider a filling pattern described in the time convention:
[1 3 5 2 4 6] = [1_1 3_2 5_3 2_4 4_5 6_6].
This filling pattern can be translated into the space convention as:
[1_1 4_2 2_3 5_4 3_5 6_6] = [1 4 2 5 3 6]
In this example, the index shifts between the two conventions, reflecting either its physical location (space convention) or its turn number (time convention).
The space convention is often favored for its intuitive grasp when describing filling patterns, whereas the time convention is particularly useful for computational tasks such as pattern transitions and arc length calculations. In this paper, we restrict the usage of the time convention to the section outlined in <ref>. It should be noted that the filling patterns provided in Table <ref> are expressed in the time convention. These have been subsequently translated to the more intuitive space convention, as presented in Table <ref>.
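Since the two conventions are simply inverse permutations of each other, the conversion can be written in a few lines (our own sketch):

def time_to_space(F_time):
    """Convert a filling pattern from the time convention to the space convention.

    In the time convention the index is the turn number and the value is the
    bucket; in the space convention the index is the bucket and the value is
    the turn number, so the two are inverse permutations of each other, and
    applying this function twice recovers the original pattern.
    """
    F_space = [0] * len(F_time)
    for turn, bucket in enumerate(F_time, start=1):
        F_space[bucket - 1] = turn
    return F_space

# Example from the text: [1, 3, 5, 2, 4, 6] (time) -> [1, 4, 2, 5, 3, 6] (space)
assert time_to_space([1, 3, 5, 2, 4, 6]) == [1, 4, 2, 5, 3, 6]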
§ HIGHER ORDER DIPOLE MODES
The PERLE cavity is a 5-cell superconducting cavity that operates at a fundamental mode frequency of 801.58 MHz. The dipole Higher Order Modes (HOMs) deemed most critical for the PERLE bare cavity are itemized in Table <ref> <cit.>. Dipole modes are known to introduce transverse kicks to the beam, whereas monopole modes are responsible for inducing energy jitter. However, the influence of monopole modes is largely inconsequential, owing to their markedly diminished amplitude relative to the fundamental mode. Furthermore, any resultant energy jitter can be effectively mitigated through other compensatory methods. In light of the more significant consequences of dipole modes, this paper will specifically concentrate on evaluating their criticality and impact.
§ CRITICAL HOMS Q_L SIMULATIONS
The impact of the filling pattern on BBU has not been investigated previously, and no code with filling-pattern capability had been developed. Hence, we have developed the BBU code described in Ref. <cit.>. The current code is an extension of the single-cavity, single-mode model described in Ref. <cit.> to a multi-cavity, multi-mode model. During the development process, the ERLBBU algorithm in Refs. <cit.> was adapted and extended by adding filling pattern and timing dependence.
§.§ Simulation and analytical results
The first step of the BBU instability studies is to estimate the critical quality factor Q_L of the HOMs required to operate PERLE at 0.12 A. The simulation results for the 15 HOMs are given in Fig. <ref>. The black dashed lines are the estimates of the analytical model, while the colored lines are simulation results obtained using the different timing combinations given in Table <ref>. The maximum Q_L value for the simulation is set to 10^10, as this is near the Q_L value of the fundamental mode, making further simulation redundant.
Firstly, we see that the analytical model and the simulation exhibit good agreement, as evidenced by the close proximity of the dashed black lines to the colored lines. However, unlike the analytical model, the simulation can capture the filling pattern and bunch timing dependence of the BBU instability, resulting in a more accurate prediction. Secondly, some modes are more sensitive to pattern and timing combinations than others. Higher frequency HOMs are less sensitive to the timing combinations than the lower frequency HOMs. Lastly, both the pattern and the timing combination can vary the Q_L by an order of magnitude or more, which indicates that both are critical for BBU instability suppression.
§.§ Pattern and timing dependence
Simulation results show that HOM No. 3, 4, 7, 8, and 14 are the most critical modes. It can be seen that they are pattern and timing dependent, as shown in Fig. <ref>. In sub-figure (a), which is timing combination 1, HOM No. 4 is the critical mode in most cases, while for timing combination 4 in sub-figure (b) other modes become critical. This indicates that both filling patterns and timing combinations have a significant impact on BBU instability.
§.§ HOM voltage oscillation and BBU instability
The voltage in one cavity impacts the voltage in other cavities through the bunch offsets x and hence the beam loading dV_HOM. For example, the kick Δ x_cav 1' received by the bunch in the first cavity can be given as
Δ x_cav 1' = V_HOM, cav 1, im/V_beam
where V_HOM, cav 1, im is the imaginary part of the HOM voltage as the kick is from the magnetic field and V_beam is the beam voltage, pc/e. This kick adds an offset in the second cavity
x_cav 2 = M_11x_cav 1 + M_12Δ x_cav 1'
with M_11 and M_12 being elements of the transfer matrix between the first and second cavity. This in turn impacts the beam loading in the second cavity, dV_HOM, cav 2, which can be given by
dV_HOM, cav 2 = ((2π f_HOM)^2/2c) q_b (R/Q)_H x_cav 2
where ω_HOM = 2π f_HOM with f_HOM being the HOM frequency, q_b is the bunch charge and (R/Q)_H is the geometric shunt impedance of the HOM. Inserting Eq. <ref> and Eq. <ref> into Eq. <ref> gives
dV_HOM, cav 2 = (ω_HOM^2/2c) q_b (R/Q)_H [ M_11 x_cav 1 + M_12 (V_HOM, cav 1, im/V_beam) ].
Eq. <ref> clearly shows how the HOM voltage in the first cavity can impact the second cavity.
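A single bunch passage through two cavities, following Eqs. <ref>-<ref>, can be sketched as follows (our own illustration for one bunch and one mode; the full tracking code additionally handles all 8 cavities, the decay and phase advance of the HOM voltage between bunches, and the filling pattern):

import numpy as np

C = 299792458.0

def pass_through_two_cavities(x1, V1_im, V_beam, q_b, f_hom, RoQ, M):
    """One bunch passage illustrating the cavity-to-cavity coupling chain.

    x1     : bunch offset at cavity 1 [m]
    V1_im  : imaginary part of the HOM voltage in cavity 1 [V]
    V_beam : beam voltage pc/e [V]
    q_b    : bunch charge [C]
    f_hom  : HOM frequency [Hz]
    RoQ    : geometric shunt impedance (R/Q)_H of the mode
    M      : 2x2 transfer matrix between the two cavities
    """
    omega = 2.0 * np.pi * f_hom
    dxp1 = V1_im / V_beam                        # transverse kick in cavity 1
    x2 = M[0, 0] * x1 + M[0, 1] * dxp1           # offset at cavity 2
    dV2 = omega**2 / (2.0 * C) * q_b * RoQ * x2  # beam loading added in cavity 2
    return x2, dV2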
All 8 cavities are interconnected through bunch offset. The behavior of HOM voltages can be likened to a set of interconnected balls, with the bunch offset acting as a spring that transfers oscillation from one cavity to another. As a result, there is a synchronization throughout the system, which causes HOM voltages to exhibit similar fluctuations and trends, as demonstrated in Figure <ref>, particularly in subfigures (a) and (b), where the cavities share noticeable small voltage oscillations.
In Fig. <ref>, the HOM voltage behavior in the low-Q_L cases shown in sub-figures (a) and (b) is different from that of the high-Q_L cases shown in (c) and (d). In the low-Q_L cases, the voltages synchronize much faster, indicating stronger coupling between cavities. Even minor fluctuations are commonly shared by all cavities. The high-Q_L cases have more oscillations, longer synchronization and settling times, and significant lagging in the oscillations, which indicates weaker coupling between cavities. In summary, Q_L is inversely related to the coupling between cavities.
The coupling between cavities is proportional to beam loading, and stronger beam loading leads to stronger cavity coupling. The standard deviation of beam loading (σ_dV_HOM) over 1 turn indicates the strength of beam loading. Beam loading is stronger in Q_L = 1.2 × 10^5 compared to Q_L = 3.6 × 10^6, as seen in Figs. <ref> and <ref>. With stronger beam loading, oscillations propagate more easily to other cavities, necessitating a lower Q_L for faster HOM damping.
It can be seen in Fig. <ref> that later cavities tend to have stronger beam loading, with cavity No. 8 having the highest. This difference in beam loading causes the cavities to have different final settling voltages. For example, cavity No. 8 always has the highest beam loading, as shown in Fig. <ref>, which causes it to have the highest settling voltage, as shown in Fig. <ref>. This indicates that there is more HOM build-up in the later cavities, which requires more damping, and care should be given to them.
Optimal patterns and timings can lower the beam loading significantly, which lowers the coupling between cavities and hence reduces the propagation of HOMs through the cavities. This can suppress BBU instability and allow the required Q_L to be higher (i.e., less HOM damping is required).
§ THRESHOLD CURRENT ESTIMATION
PERLE HOM couplers have been designed by Barbagallo et al. <cit.> to mitigate unwanted modes while not affecting the fundamental one. The loaded Q-factors (Q_L) of the HOMs were effectively reduced below the critical levels. Still, it remains essential to ascertain the threshold current when all modes are active simultaneously. We further enhanced our numerical model to estimate the threshold current when all modes are activated. Simulations were carried out with the lowered Q_Ls. It is notable that these couplers break the transverse symmetry, which means the Q_L and R/Q of the vertical and horizontal modes are different. In the simulations, a total of 30 modes are considered, half of which are horizontal and the other half vertical.
§.§ Simulations results
Simulations were conducted encompassing 12 pattern and 6 timing combinations, the results of which are depicted in Fig. <ref>. The existing design of PERLE employs pattern combination [51, 51] and timing combination No.4, yielding a threshold current of 1.42 Amps. Impressively, this value is nearly 12-fold greater than PERLE's operational current of 0.12 Amps.
The threshold current is found to be highly sensitive to both the bunch pattern and the timing. By adjusting the bunch timing alone, we have been able to achieve a maximum threshold current of 6.13 Amps or, conversely, a minimum of 0.83 Amps. Regardless of these variations, it's noteworthy that the threshold current remains significantly higher than the operational current of 0.12 Amps in all scenarios. This outcome indicates that the HOM couplers are effectively damping HOMs, thus ensuring stable operation even at high threshold currents.
§.§ Dominant modes
In our simulations, we observed the threshold currents were mostly set by mode No. 18 or 20, which are both vertical modes. To investigate which mode is the most dominant mode, we also simulated the situation where only a selected few modes are activated and the results are given in Fig. <ref> for 12 filling patterns and 6 bunch timings. In the subfigure (a), the red curve indicates the case where only modes 18 and 20 are activated and the blue curve is when all 30 modes are activated. It shows the threshold current is mostly dictated by the modes 18 and 20. The subfigure (b) shows mode 20 is slightly more dominant than 18.
§.§ Frequency Jitters
The HOM spectrum of the manufactured cavities can vary from the design.
As can be seen from Eq. <ref>, the threshold current is sensitive to the HOM frequency. Slight changes in the frequency can vary the threshold current significantly, as shown in Ref. <cit.>. Therefore, relative RMS jitters of σ_f_HOM/f_HOM = 0.001 were introduced into the simulations, assuming a Gaussian distribution, for 3 different filling patterns and 3 timings, and the results are given in Fig. <ref>.
It can be seen from Fig. <ref>(a) that when the cavities have the same HOM frequencies, they can form resonances and oscillate together. This can amplify BBU and result in a low threshold current. When the HOM frequencies are different, as in sub-figure (b), the resonance is broken and the threshold current increases by nearly an order of magnitude in this case.
We also varied the relative RMS jitters σ_f_HOM/f_HOM and no significant difference was observed. This is because the threshold current is quasi-periodic over the HOM frequency <cit.>, which can also be seen from Eq. <ref>. As the PERLE revolution time is around 0.2 μs, the half period of the threshold is approximately 2.5 MHz. As the HOM frequencies are on the order of GHz, relative RMS jitters of 0.001 sufficiently cover one threshold period.
§ THRESHOLD CURRENT ESTIMATED IN ANALYTICAL MODEL AND BMAD
So far, we have only reported threshold current results from our in-house BBU tracking code. To crosscheck these results, we estimated threshold currents using both the analytical model and Bmad <cit.>. Similar to our previous approach, we introduced relative RMS jitters of σ_f_HOM/f_HOM = 0.001 into the analytical model described in Eq. <ref>. The resulting threshold currents are shown in Fig. <ref>. We observe that the minimum threshold current is around 2 A, which is consistent with the earlier simulations. When the RMS jitters were varied to σ_f_HOM/f_HOM = 0.002 and 0.005, no significant differences were observed in the threshold current distributions, which is also similar to the results of the earlier simulations. The distribution of the threshold currents differs from that of the simulations because the analytical model, unlike the simulations, does not account for the phases of the bunches, the interaction between cavities, etc.
To perform BBU studies in Bmad, the original PERLE 2.0 lattice was converted from OptiMX to Bmad. Once the lattices were converted, it was necessary to rematch the beamlines, as the approximations made for cavity edge focusing differ by a small amount. Once rematched, the sections were concatenated together, and Bmad's multipass functionality was applied, whereby beamline elements which are common to multiple passes of different energies are identified as such, and the appropriate calculations are performed to ensure consistency between the different energies in each common element. The optics and energy recovery were checked in Bmad and compared against the original design code.
The threshold current results are given in Fig. <ref> for the baseline PERLE filling pattern combination [51, 51] and timing combination No. 4. When the cavities had the same HOM parameters, the threshold current was at its lowest value of around 2.1 A. When random jitters of σ_f_HOM/f_HOM = 0.001 were introduced, the threshold current increased. We can see that the results are consistent with the predictions of the in-house tracking code and the analytical model.
§ CONCLUSION
In this work, we explored all possible filling patterns and bunch timing combinations for PERLE with its current constraints.
We built an 8-cavity PERLE BBU tracking model and numerically estimated the damping requirements for HOMs, finding strong agreement with the analytical model. As the numerical model is more sophisticated, it was able to incorporate the impacts of filling patterns, bunch timings, and HOM frequency difference in cavities, providing deeper insights into the behavior of the 8-cavity ERL system.
In our simulations, we observed that when cavities share the same HOM frequencies, they become interconnected through bunch offsets and beam loading, which leads to the synchronization and propagation of HOM voltages across the cavities.
However, slight variation in HOM frequencies (by σ_f_HOM/f_HOM = 0.001) can disrupt this synchronization, mitigate BBU instability, and increase the threshold current to several Amps.
Our analysis indicates that bunch timings are as influential as filling patterns. By optimizing these elements, we can diminish beam loading and interaction between cavities, reducing the spread of HOM voltages between cavities. This, in turn, helps control BBU instability and raises the threshold current.
We used an analytical model and two BBU tracking codes to estimate the threshold current of PERLE to be at least around 2 A. This is about 17 times larger than the required operating current of 0.12 A. The results show that frequency jitters of σ_f_HOM/f_HOM = 0.001 are sufficient to increase the threshold current by an order of magnitude.
Among these factors, the bunch timing and filling pattern can be adjusted by carefully designing the beamline lattice. In contrast, the HOM frequency variations are fixed once the cavities are manufactured. Therefore, integrating a mechanism to adjust bunch timing and filling patterns into multi-turn ERLs is crucial for managing BBU instability.
§ ACKNOWLEDGEMENTS
The authors extend their sincere thanks to Dr. Graeme Burt for his invaluable suggestions and insights.
A special thanks to Julien Michaud and the support provided by the PERLE collaboration. We also express our gratitude to the Bmad development team for their assistance. The studies presented have been funded by STFC Grants No. ST/P002056/1 and ST/V001612/1 under the Cockcroft Institute Core Grants.
Work at Jefferson Lab has been supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contracts DE-AC05-06OR23177.
99
EuropeanStrategyGroup:2020pow
European Strategy Group, `2020 Update of the European Strategy for Particle Physics',
http://dx.doi.org/10.17181/ESU2020CERN-ESU-013, CERN-ESU-015.
Merminga2016
L. Merminga https://doi.org/10.1007/978-3-319-14394-1_11Energy Recovery Linacs (2016).
Angal2018perle
D. Angal-Kalinin et al., https://doi.org/10.1088/1361-6471/aaa171Journal of Physics G: Nuclear and Particle Physics, 45, 065003, 2018.
Agostini2021
P. Agostini et al., https://doi.org/10.1088/1361-6471/abf3baJ. Phys. G: Nucl. Part. Phys. 48 110501, 2021
Brüning2022
O. Brüning, A. Seryi, and S. Verdú-Andrés.
https://doi.org/10.3389/fphy.2022.886473Frontiers in Physics, 10, 2022.
Klein2018
M. Klein and S. Achille, PERLE Collaboration et al.
https://cds.cern.ch/record/2652336CERN-ACC-NOTE-2018-0086.
LYNEIS1983269
C. M. Lyneis and R.E. Rand and H.A. Schwettman and A.M. Vetter,
https://www.sciencedirect.com/science/article/pii/0167508783900571Nucl. Instrum. Methods Phys. Res 204, 269-284, 1983.
Setiniyaz2020
S. Setiniyaz, R. Apsimon, and P. H. Williams, https://link.aps.org/doi/10.1103/PhysRevAccelBeams.23.072002Phys. Rev. Accel. Beams 23, 072002, 2020.
Setiniyaz2021
S. Setiniyaz, R. Apsimon, and P. H. Williams, https://link.aps.org/doi/10.1103/PhysRevAccelBeams.24.061003Phys. Rev. Accel. Beams 24, 061003, 2021.
PERLE
S.A. Bogacz, https://scipost.org/SciPostPhysProc.8.013/pdfPoS DIS2021 547, (2021)
michaud
J. Michaud, https://indico.cern.ch/event/1266985/contributions/5448975/attachments/2671223/4630745/PERLE-CERN-collab-meeting2.pdfPERLE Collaboration Meeting, CERN, June (2023).
yunn2005
B. C. Yunn, https://journals.aps.org/prab/abstract/10.1103/PhysRevSTAB.8.104401Phys. Rev. ST Accel. Beams 8, 104401, 2005.
Carmelo2022ERL
arXiv:2409.02912v1 [cs.IT, eess.SP, math.IT], submitted 4 September 2024
Design of a Standard-Compliant Real-Time Neural Receiver for 5G NR
Reinhard Wiesmayr^,∗, Sebastian Cammerer^†, Fayçal Aït Aoudia^†, Jakob Hoydis^†
Jakub Zakrzewski^†, and Alexander Keller^†
^†NVIDIA, ^ETH Zurich, contact: [email protected]
^∗Work done during an internship at NVIDIA.
This work has received financial support from the European Union under Grant Agreement 101096379 (CENTRIC). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission (granting authority). Neither the European Union nor the granting authority can be held responsible for them.
§ ABSTRACT
We detail the steps required to deploy a multi-user multiple-input multiple-output (MU-MIMO) neural receiver (NRX) in an actual cellular communication system. This raises several exciting research challenges, including the need for real-time inference and compatibility with the 5G NR standard.
As the network configuration in a practical setup can change dynamically within milliseconds, we propose an adaptive NRX architecture capable of supporting dynamic modulation and coding scheme (MCS) configurations without the need for any re-training and without additional inference cost.
We optimize the latency of the neural network (NN) architecture to achieve inference times of less than 1 ms on an NVIDIA A100 GPU using the TensorRT inference library. These latency constraints effectively limit the size of the NN, and we quantify the resulting SNR degradation as less than 0.7 dB when compared to a preliminary non-real-time NRX architecture.
Finally, we explore the potential for site-specific adaptation of the receiver by investigating the required size of the training dataset and the number of fine-tuning iterations to optimize the NRX for specific radio environments using a ray tracing-based channel model.
The resulting NRX is ready for deployment in a real-time 5G NR system and the source code including the TensorRT experiments is available online.[<https://github.com/nvlabs/neural_rx>]
§ INTRODUCTION
Significant performance gains have been demonstrated by NN-based signal processing for wireless communications <cit.>. However, little has been reported regarding the practical deployment of these algorithms in a 5G NR system.
The challenges include a) real-time inference imposing strict latency constraints on the NN architecture, b) the need for a dynamic re-configuration of the MCS without re-training, and c) site-specific fine-tuning to adapt the algorithms to the specific environment which even allows for continuous performance improvement after deployment.
In this work, we focus on the concept of the NRX <cit.>, where a single NN is trained to jointly perform channel estimation, equalization, and demapping. The concept was first introduced in <cit.>, with MIMO extensions in <cit.>. A 5G NR compliant multi-user MIMO receiver for the PUSCH is proposed in <cit.>. We revise our architecture from <cit.> and investigate two methods to support dynamic MCS configurations without the need for re-training.
Predicting the inference latency of a given neural network architecture is a challenging task for which the results strongly depend on the targeted hardware platform, the specific software stack as well as the level of code optimization.
Thus, the number of FLOP, weights, or layers is often used as a surrogate metric to predict the model's computational complexity. However, such metrics may lead to inaccurate conclusions due to the high level of parallelism and unknown memory bottlenecks during inference.
We deploy our NRX <cit.> using the TensorRT inference library on the targeted NVIDIA A100 GPU platform. This ensures realistic latency measurements and allows for eliminating bottlenecks from the critical path.
As a result, we propose a carefully optimized real-time version of the NRX architecture.
Another interesting aspect of ML-based receiver design is the inherent possibility of data-aided fine-tuning and site-specific adaptation of the algorithms. Contrary to (accidental) overfitting of the NN, the idea is to let the receiver learn the underlying channel statistics of the specific deployment scenario. Early promising results have been reported in <cit.> for a single user system, and recently in <cit.>.
However, it remains a challenge to gather a sufficiently diverse real-world dataset to validate the possible gains and to quantify the required amount of training data. Some static users may produce many similar channel realizations, while a few dynamic users may generate a much richer dataset.
We use ray tracing <cit.> to generate environment-specific CIR and investigate the required number of samples and training iterations for NRX fine-tuning.
Given the strict real-time constraints, we will address a similar research question as in <cit.>: whether a large neural network trained offline to generalize to arbitrary channel conditions performs better than a smaller, adaptive network optimized for specific scenarios. As it turns out, this principle of generalization through adaptability offers competitive performance under strictly limited computational complexity and latency, though it may require additional training resources and may reduce robustness to unforeseen conditions. We would like to emphasize that we do not assume training on-the-fly (i.e., online). Instead, we train periodically (or even just once) on a small dataset which can be regarded as a by-product of the normal operation mode of the receiver.
§ BACKGROUND
We assume MU-MIMO uplink (UL) transmission from U UEs to a single BS with B antennas.
While each UE can have multiple transmit antennas (e.g., UE u is equipped with N_u antennas), we assume that each UE only transmits a single MIMO stream.[In 5G NR terminology, a MIMO stream is often called layer.]
We consider transmission over the 5G NR PUSCH adhering to a standard-compliant OFDM frame structure with S subcarriers and T=14 OFDM time symbols per slot[The preliminary NRX architecture from <cit.> as well as our extensions adapt to varying values of T and S without the need for retraining.] and refer the interested reader to <cit.> for more details on 5G NR systems.
With a sufficiently long cyclic prefix, the MUMIMO input-output relation on each subcarrier s∈{1,…, S} and for each OFDM time symbol t∈{1,…, T} can be modeled as
y_s,t = ∑_u=1^U H_s,t,u x_s,t,u + n_s,t
where y_s,t ∈ ℂ^B is the received signal, H_s,t,u ∈ ℂ^B × N_u is the MIMO channel matrix, x_s,t,u is the modulated transmit vector of UE u after beamforming, and n_s,t ∼ 𝒩_ℂ(0, N_0 I) is the complex Gaussian noise with power spectral density N_0.
While most of the RE (indexed by the tuple (s,t)) are allocated for data transmission, certain RE are reserved for pilot symbols (called DMRS) which are known to the BS and used for channel estimation.
For simplicity, we will omit the indices s and t in the following.
The UEs apply codebook-based beamforming x_u = w_u x̃_u, where w_u is a beamforming vector and x̃_u is the modulated transmit symbol taken from a 2^m-ary constellation. Throughout this paper, we focus on MCS indices i from 5G NR described in <cit.>, which apply QPSK, 16-QAM, and 64-QAM with varying code rates.
The m bits transmitted in x̃_u originate from random payload bits that are encoded by 5G NR compliant LDPC channel coding and rate-matching, which depend on the MCS index i and the total number of data-carrying RE.
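For concreteness, the per-RE model above can be simulated in a few lines. The following sketch is our own illustration and is not taken from the paper: it draws i.i.d. Rayleigh channel coefficients, QPSK symbols, and a simple beamforming vector for an arbitrary choice of B, U, and N_u.

```python
import numpy as np

rng = np.random.default_rng(0)
B, U, N_u, N0 = 4, 2, 2, 0.1                     # illustrative sizes, not the paper's setup
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# One resource element (s, t): y = sum_u H_u x_u + n, with x_u = w_u * x~_u
H = (rng.standard_normal((U, B, N_u)) + 1j * rng.standard_normal((U, B, N_u))) / np.sqrt(2)
w = np.ones(N_u) / np.sqrt(N_u)                  # simple per-UE beamforming vector
x_tilde = rng.choice(qpsk, size=U)               # one modulated symbol per UE
x = x_tilde[:, None] * w[None, :]                # beamformed transmit vectors, shape (U, N_u)
n = np.sqrt(N0 / 2) * (rng.standard_normal(B) + 1j * rng.standard_normal(B))

y = np.einsum('ubn,un->b', H, x) + n             # received signal of shape (B,)
```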
The goal of classical MIMO detectors as well as that of the NRX is to compute LLR estimates for each of the UE's transmitted bits b from the received signal.
We define the LLR as logits, i.e.,
ℓ = ln [ P(b=1 | y) / P(b=0 | y) ]
and feed their estimates to the subsequent channel decoder.
§.§ Preliminary Neural Receiver Architecture
We briefly revisit the NRX architecture from <cit.> that was proposed for a single MCS and which we will extend to support varying and mixed MCS in <ref>.
The NRX depicted in <ref> implements a CGNN for MU-MIMO detection across the OFDM RG. To enable 5G NR standard compliance, the CGNN is surrounded by an RG demapper and a transport block decoder (cf. Sionna's 5G NR module[<https://nvlabs.github.io/sionna/api/nr.html>] for details). To enable channel estimation for varying DMRS, e.g., resulting from varying slot or UE indices, an LS channel estimator provides an initial channel estimate to the CGNN for each individual data stream.
The CGNN architecture consists of three main components: (i) the state-initialization layer, (ii) the unrolled iterative CGNN blocks, and (iii) the read-out layers. While the architecture in <cit.> only proposes one type of output layer, which transforms the state variable into LLR estimates, our new architecture implements an additional read-out layer that outputs a (refined) channel estimate from the same state variable.
As detailed in <cit.>, the state-initialization layer implements a small CNN that transforms the CGNN inputs, i.e., the entire RG of received signals y_s,t, a positional encoding of distances to the next pilot symbol, and each UE's initial LS channel estimates, into the initial state vector s_u^(0) of dimension S × T × d_S with feature depth d_S. The main part of the NRX consists of an unrolled iterative algorithm that applies N_it consecutive CGNN blocks, each of which updates the state vectors of all UEs in parallel. Each block first performs a message-passing step, where an MLP transforms each UE's state into messages, which are then aggregated by taking, for each UE, the sum of the messages from all other UEs. The second part of each block is a CNN that updates each UE's state based on the previous iteration's state, the aggregated messages, and the positional encoding.
After N_it such blocks, read-out MLPs are applied to transform the final state vectors s_u^(N_it) into the desired outputs. As in <cit.>, the LLR read-out layer outputs LLR estimates ℓ_s,t,u ∈ ℝ^m for each UE's symbol on all REs of the RG.
We extend the NRX from <cit.> by an additional channel-estimate read-out layer that outputs refined channel estimates computed from the same state vectors s_u^(N_it).
This layer not only provides more accurate channel estimates, but is also found to improve the training convergence when trained with an additional MSE loss term, as described in the following.
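To make the structure of one CGNN iteration more tangible, the following TensorFlow sketch mirrors the description above: MLP-based message passing across UEs followed by a convolutional per-UE state update. It is our own simplified illustration; the residual update, the tensor layout, and the layer sizes are assumptions, and the actual NRX implementation is available in the authors' code release.

```python
import tensorflow as tf

class CGNNIteration(tf.keras.layers.Layer):
    """One unrolled CGNN block (sketch): message passing across UEs, then a
    per-UE CNN state update. All sizes are illustrative placeholders."""
    def __init__(self, d_s=56):
        super().__init__()
        self.msg_mlp = tf.keras.Sequential(
            [tf.keras.layers.Dense(d_s, activation="relu"),
             tf.keras.layers.Dense(d_s)])
        self.update_cnn = tf.keras.Sequential(
            [tf.keras.layers.Conv2D(d_s, 3, padding="same", activation="relu"),
             tf.keras.layers.Conv2D(d_s, 3, padding="same")])

    def call(self, states, pos_enc):
        # states: list of U tensors, each of shape [batch, S, T, d_s]
        msgs = [self.msg_mlp(s) for s in states]
        total = tf.add_n(msgs)
        new_states = []
        for s, m in zip(states, msgs):
            agg = total - m                           # sum of messages from all *other* UEs
            new_states.append(
                s + self.update_cnn(tf.concat([s, agg, pos_enc], axis=-1)))
        return new_states

# Unrolled core: N_it blocks applied in sequence; the read-out MLPs (omitted
# here) are then applied to the final (or, with multi-loss, every) state.
def cgnn_core(states, pos_enc, blocks):
    for block in blocks:
        states = block(states, pos_enc)
    return states
```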
§.§ Training Scheme of the Neural Receiver
The NRX from <cit.> is trained by empirical risk minimization with the BCE loss that is computed between the LLR estimates and ground-truth bit labels. A training step describes one gradient-based weight update that is computed from the average loss of N independent samples. Each of these N batch samples represents transmission of one entire OFDM RG. If not stated otherwise, the NRX is trained on synthetic training data sampled from the 3GPP UMi channel model.
For each batch, the number of active users 1≤ U_A≤ U is randomly sampled from a triangular distribution <cit.>, and, in each batch sample, each UE transmits random payload bits that are individually encoded and modulated. The SNR is random-uniformly sampled (in the log-scale) for each batch sample from a pre-defined SNR range, which is a hyperparameter.
We propose the following extensions to the training scheme from <cit.> and detail additional considerations for Var-MCS-NRX in <ref>.
§.§.§ Double-readout
The channel-estimate read-out layer can be jointly trained with the LLR read-out layer by adding an MSE loss to the BCE loss.
The MSE loss is computed between the output channel estimates and the ground-truth channel realizations, and scaled by a hyperparameter γ that controls its contribution to the total loss (i.e., the sum of BCE and scaled MSE loss) used for gradient computation.
§.§.§ Multi-loss
To support a variable number of unrolled NRX iterations 1 ≤ N_it^'≤ N_it, the NRX is trained with the so-called multi-loss <cit.>. There, the read-out layers are applied to the state variable after each NRX iteration and the total loss is accumulated from the loss of all N_it model readouts.
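The double-readout and multi-loss terms can be combined into a single training objective. The sketch below shows one possible way to do this in TensorFlow, assuming the model returns the LLRs and channel estimates of every iteration; the loss scaling and the training-step structure are our own illustration and not the authors' code.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
gamma = 0.1   # MSE weight (hyperparameter); value chosen arbitrarily for this sketch

def multi_loss(bits, h_true, llr_per_iter, h_hat_per_iter):
    """Accumulate BCE (+ gamma * MSE) over the read-outs of all N_it iterations."""
    loss = 0.0
    for llr, h_hat in zip(llr_per_iter, h_hat_per_iter):
        loss += bce(bits, llr)
        loss += gamma * tf.reduce_mean(tf.abs(h_true - h_hat) ** 2)
    return loss

@tf.function
def train_step(model, optimizer, batch):
    with tf.GradientTape() as tape:
        llr_per_iter, h_hat_per_iter = model(batch["y"], training=True)
        loss = multi_loss(batch["bits"], batch["h"], llr_per_iter, h_hat_per_iter)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```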
§ NEURAL RECEIVERS WITH VARIABLE MCS
In this paper, we study two approaches to extend the NRX architecture from <cit.> to support variable modulation schemes[The code rate and coding scheme are transparent to the NRX, as the NRX outputs LLRs on coded bits.]: (i) masking of higher-order LLRs, and (ii) MCS-specific input and output layers (abbreviated by Var-IO).
The first method builds upon an idea mentioned in <cit.>, which was later implemented for a neural demapper in <cit.>. The working principle builds upon the recursive structure of bit labels from Gray-code-labeled QAM constellations, where constellation points from higher-order modulations are recursively derived from lower-order points. The higher-order bit labels then re-apply the lower-order bit labels, and are extended by the additional higher-order bits. For example, if we compare 16-QAM to QPSK, all 16-QAM constellation points within a quadrant of the complex plane have lower-order bits identical to the corresponding QPSK constellation point.
Thus, by masking of unused higher-order LLR outputs, an NRX for the highest modulation order can be applied for detecting lower-order constellation points, too. As mentioned in <ref>, training such a Var-MCS-NRX requires additional considerations.
Note that masking can also be applied to classical LLR demapping algorithms, e.g., to APP demapping. Though such mismatched demapping will produce LLRs with the same sign as a matched demapper for the correct constellation, the LLR magnitudes do, in general, not match the underlying probabilities as defined in (<ref>). Without additional LLR correction, such as proposed in <cit.>, "classical" mismatched demapping can lead to severe performance degradation in FEC decoding.
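In code, the masking approach reduces to a simple slicing operation at the receiver output: LLRs are always produced for the highest supported modulation order, and the unused higher-order positions are discarded for lower-order UEs. The snippet below only illustrates the idea; the assumed bit ordering (lower-order bits first) is a convention of this sketch and may differ from the label ordering used in the actual implementation.

```python
import numpy as np

M_MAX = 6                                   # bits per symbol of the highest order (64-QAM)
BITS_PER_SYMBOL = {"qpsk": 2, "16qam": 4, "64qam": 6}

def mask_llrs(llr, modulation):
    """llr: array of shape [..., M_MAX] produced by the receiver.
    Keep only the LLRs of the bits actually transmitted with this modulation."""
    m = BITS_PER_SYMBOL[modulation]
    return llr[..., :m]

llr = np.random.randn(10, M_MAX)            # dummy receiver output for 10 symbols
assert mask_llrs(llr, "qpsk").shape == (10, 2)
assert mask_llrs(llr, "64qam").shape == (10, 6)
```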
While our proposed training scheme discussed in <ref> turned out to be effective for training a Var-MCS-NRX with the masking scheme <cit.> for Gray-code labeled QAM constellations, we also put forward an alternative method for implementing the Var-MCS-NRX. By applying modulation-specific state-initialization and read-out layers (denoted as input and output layers, respectively), we allow the NRX to fit to varying modulation orders, while sharing the majority of weights (in the CGNN blocks) across modulation schemes. Modulation-specific input layers are motivated by improving data-aided channel estimation. The intuition for modulation-specific output layers is to enable the model to learn matched demapping. Only the number of LLR outputs needed for the corresponding modulation order has to be implemented. This Var-IO scheme (as depicted in <ref>) can also be useful with more general, non-Gray-code-labeled constellations, or custom constellations obtained from end-to-end learning <cit.>. Note that although MCS-specific IO layers lead to a slightly increased number of weights, the number of active weights (and, thus, the inference latency) is the same as for the single-MCS NRX.
§.§ Training with Mixed Modulation and Coding Schemes
It has been empirically observed that training a Var-MCS-NRX requires additional considerations beyond those mentioned in <ref>, which we detail in the following:
§.§.§ Random MCS-to-UE association
For each training sample and each UE, we sample an iid random MCS index from a set of supported MCS. This ensures that both, single-MCS and mixed-MCS transmission scenarios, are represented in each training batch.
Note that explicitly training for all possible MCS-to-UE associations is infeasible due to the large number of different transmission scenarios. E.g., for eight active UE and four MCS, we find 165 different associations even without considering the order or cases where not all UE are active.
§.§.§ MCS specific SNR offsets
As higher MCS and a larger number of active users typically result in higher error-rates and higher training loss for a given noise variance N_0, we apply offsets to the random training SNR (in decibels) of each batch sample depending on the random number of active UE and depending on the random MCS-to-UE association. This ensures that batch samples with higher MCS indices are trained (on average) on larger SNR values than batch samples with lower MCS indices. Thereby, we can avoid that the batch loss is dominated by batch samples with many high-MCS UE.
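As a side note, the 165 associations mentioned above correspond to the number of multisets of size 8 drawn from 4 MCS options. The snippet below verifies this count and sketches how a random MCS-to-UE association with a per-sample SNR offset could be drawn; the offset values and the rule for combining them are placeholders, since the exact offsets are not specified in the text.

```python
from math import comb
import numpy as np

# 8 active UEs and 4 possible MCS -> unordered MCS-to-UE associations (multisets)
assert comb(8 + 4 - 1, 4 - 1) == 165

rng = np.random.default_rng()
MCS_SET = np.array([9, 14, 19])                        # MCS indices used for training
SNR_OFFSET_DB = {9: 0.0, 14: 4.0, 19: 8.0}             # illustrative per-MCS offsets

def draw_batch_sample(num_active_ue, snr_range_db=(-2.0, 12.0)):
    mcs_per_ue = rng.choice(MCS_SET, size=num_active_ue)   # random MCS-to-UE association
    base_snr_db = rng.uniform(*snr_range_db)               # random training SNR in dB
    # one offset per batch sample; here simply the mean over the active UEs
    offset_db = np.mean([SNR_OFFSET_DB[m] for m in mcs_per_ue])
    return mcs_per_ue, base_snr_db + offset_db
```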
§.§ Simulation Results
We now evaluate the TBLER performance of the Var-MCS-NRX in a MU-MIMO scenario with U=2 UEs transmitting in the UL direction to a BS with B=4 antennas. The BS has dual-polarized antennas with 3GPP TR 38.901 antenna patterns, arranged in a horizontal ULA. The UEs implement two single-polarized omnidirectional antennas, also arranged in a horizontal ULA, and apply beamforming with w=[1,1]^T/√(2).
In this section, we adopt the 3GPP UMi channel model for training.
For evaluation, we combine two TDL models, namely 3GPP TDL-B with 400 Hz Doppler spread and 100 ns delay spread for the first user and 3GPP TDL-C with 100 Hz Doppler spread and 300 ns delay spread for the second user, respectively. In the following, we denote the resulting TDL channel model as DoubleTDL channel.
We simulate 5G NR compliant OFDM slots with a carrier frequency of 2.14 GHz and a bandwidth of approximately 47.5 MHz, which equals 132 PRB, each of which consists of 12 subcarriers spaced with 30 kHz.
For the pilots, we select DMRS type A with one additional DMRS position.
We compare two different NRX architectures, both with a feature depth of d_S=56. We denote the first architecture as the Large NRX, which consists of N_it=8 CGNN iterations and a total of 4.4·10^5 weights. A reduced architecture is denoted as the Real-time (RT) NRX and consists of only N_it=2 CGNN iterations, resulting in only 1.4·10^5 weights.
The Var-MCS-NRX stores 0.4·10^5 additional weights per set of IO layers.
<ref> compares the TBLER performance of various NRX and classical baselines. The RT Var-MCS-NRX is trained by randomly sampling iid uniformly from MCS index i=9 (QPSK) and i=14 (16-QAM), and the Large Var-MCS-NRX is trained by additionally sampling iid uniformly from i=19 (64-QAM).[For 64-QAM, we observed a larger performance gap between the RT and the Large NRX architecture as compared to QPSK and 16-QAM. Hence, we consider the RT NRX architecture only for detecting QPSK and 16-QAM.]
Note that the selection of i∈{9,14,19} covers all modulation orders defined in <cit.>.
The results in <ref> show that all NRX architectures are capable of approaching the performance of the LMMSE channel estimation baseline with K-Best detection.[Covariance matrices for LMMSE channel estimation are computed from the 3GPP UMi model and the K-Best detector applies a list size of k=64.] In all scenarios, the Var-MCS-NRX implementations closely approach the performance of their single-MCS NRX counterparts.
§ NRX COMPLEXITY AND REAL-TIME ARCHITECTURE
Practical deployment of the NRX requires real-time inference capabilities, which imposes strict constraints on the computational latency of the underlying NN architecture. Further, the real-time requirement prevents processing multiple samples in parallel (inter-frame parallelization); only intra-frame parallelization can be utilized, as buffering would be required otherwise.
We assume a strict computational latency budget of 1 ms for the NRX using an NVIDIA A100 GPU. Furthermore, we assume inline acceleration, i.e., we ignore any memcopy latencies from host to device and vice versa.
§.§ Latency Measurements & Optimization
As mentioned earlier, it is a non-trivial task to predict the inference latency only from the model description. This stems from the fact that during inference many processing steps happen in parallel and also the memory access can become the bottleneck. To get a more realistic latency measure, we deploy the trained NRX model using TensorRT as real-time inference engine. The resulting TensorRT engine applies advanced optimization techniques tailored to the targeted inference hardware (and software stack) such as the fusing of operations during model inference. This process resembles the compilation of source code to a target deployment platform.
Weights are quantized to float16. However, we did not implement quantization-aware training techniques in the scope of this work. Exploring even lower quantization levels such as float8 or int8 precision is a subject of future research.
Based on the detailed profiling output of the TensorRT deployment, we carefully adjust the TensorFlow model. Removing compute (and memory) bottlenecks is a cumbersome task that requires carefully performed optimization steps. For brevity, the details are omitted here (though the optimized architecture is available in the code release). We would like to emphasize that even if a bottleneck is identified, removing the operation may not solve the problem, as one also needs to understand its impact on the SNR performance of the NRX.
§.§ Controlling the Inference Latency
As mentioned in Sec. <ref>, we incorporate a multi-loss <cit.> in the NRX training pipeline which enables to adjusting the depth of the NRX during inference without the need for any re-training. Thereby, we can control the receiver's computational latency and the accuracy of the NN after training, e.g, to adapt to new hardware platforms or varying system configurations.
Fig. <ref> shows the required SNR to achieve a target TBLER of 10% evaluated for different receiver depths N_it∈{1,…,8}. The Large NRX is trained only once for N_it=8 CGNN iterations (using multi-loss).
The second axis in Fig. <ref> shows the corresponding latency in milliseconds evaluated on an NVIDIA A100 GPU. The latency of the NRX is measured using the exported TensorRT engine and increases linearly with the number of iterations. This is to be expected, as each iteration has the same cost. For the given system configuration of 132 PRBs and 2 active UE, each iteration requires approximately 350 µs and we observe a constant initialization (and readout) overhead of 270 µs.
As can be seen in Fig. <ref>, the strict computational latency constraint of 1 ms restricts the receiver to N_it=2. For comparison, we also evaluate another NRX architecture that was trained for 2 iterations only (RT NRX). The performance degradation of the adaptive receiver version is almost negligible.
From the TBLER curves, it follows that N_it=2 is a sub-optimal solution for the achievable error-rate performance. This implies that in a practical deployment scenario, the achievable TBLER performance of the proposed NRX is limited by its inference latency (and computational complexity).
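The measured timings translate directly into the maximum receiver depth that fits the real-time budget; the following two lines simply reproduce the arithmetic with the roughly 270 µs overhead and 350 µs per-iteration cost quoted above.

```python
overhead_us, per_iter_us, budget_us = 270.0, 350.0, 1000.0
n_it_max = int((budget_us - overhead_us) // per_iter_us)
print(n_it_max)   # -> 2, i.e., only two CGNN iterations fit within the 1 ms budget
```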
The proposed real-time NRX architecture is applicable for practical deployment and achieves a strong performance when compared to classical baselines. However, the remaining gap to the larger NRX shows that there is still some potential for future work.
Finally, we want to underline that these experiments including the training results are available online.^1
§ SITE-SPECIFIC FINE-TUNING
In this section, we investigate how well a generalized NRX performs after site-specific deployment in a specific radio environment. Furthermore, we want to answer how much training data is needed for site-specific fine-tuning and how many fine-tuning training steps are required to improve the performance over the pre-trained generalized NRX.
§.§ Ray Tracing-based Training and Evaluation Dataset
We deploy the BS antenna array in Sionna's ray tracer <cit.> in the Munich map on top of a church tower of the Frauenkirche, as depicted by the red dot in <ref>. We equip the BS and UE with the same antenna configurations as in the experiments in <ref>.
However, as the BS should cover the whole area all around the church tower, we apply isotropic antenna characteristics instead of the 3GPP TR 38.901 antenna pattern used for NRX pre-training.
For generating the training dataset, we randomly sample positions from the coverage map and compare different numbers of training data samples N_TD. We only add data samples that have at least one valid path between any transmit and receive antenna, for all of the T=14 OFDM symbols. Together with each random position, we also sample random UE velocities uniformly in [0, 8] m/s, independently for both, the x- and y-velocity, which induces a Doppler shift to the CIR.
The evaluation dataset is generated from two trajectories (one for each UE, depicted by the salmon-colored lines in <ref>), whereof we sample 10^4 positions uniformly between the starting and end points. Each of the two UE moves at a constant speed, 3.5 m/s and 3.0 m/s, respectively. Sample indices where the ray-tracer did not find any valid path for either of the two UE are removed. During simulation, we construct MUMIMO channels by randomly sampling two different indices in {1,…,10^4} for each batch sample, which selects one position for each UE from their respective trajectory (random sub-sampling).
§.§ Effect of dataset Size and Fine-tuning Training Iterations
We now compare the effect of the training dataset size N_TD and the number of fine-tuning training iterations N_FT. For fine-tuning training, we start with the NRX weights from <ref> that have been extensively pre-trained on the 3GPP UMi channel model. Then, we apply N_FT gradient-descent steps with the site-specific training data. Since ground-truth CSI data is typically not available in real-world datasets, we fine-tune without double-readout.
In <ref>, we visualize the performance gains achieved with site-specific training. On the y-axis we compare the required SNR to achieve a TBLER of 10% for different training dataset sizes N_TD. The solid curves are evaluated on the ray tracing-based evaluation dataset. We can see that very small datasets (with N_TD∈{100, 200}) only lead to improvements in the early fine-tuning iterations, but show degraded performance for N_FT>10^3. With the two larger datasets (N_TD∈{10^3, 10^4}), the SNR performance of the NRX increases up to N_FT=10^5.
We expect this phenomenon to be caused by overfitting of the NRX to the small datasets. Note that N_FT=10^3 fine-tuning steps only take about 30 s on an NVIDIA RTX 3090 GPU.
The dotted curves in <ref> visualize the SNR performance of the site-specific NRX weights evaluated on the 3GPP UMi channel model.
These curves show the effect of catastrophic forgetting: the better the NRX is fine-tuned to the ray tracing-based radio environment, the more its performance on UMi channels degrades. Catastrophic forgetting is stronger for smaller training datasets, where the effect of overfitting is more dominant. Note that more advanced training schemes based on concepts of transfer learning may help to further improve the training convergence and to reduce the effect of catastrophic forgetting; see <cit.> for an early investigation.
In <ref>, we evaluate the TBLER of the NRX and other classical baselines using site-specific data. We observe that the fine-tuned RT NRX with N_FT=10^5 closely approaches the performance of the Large NRX without fine-tuning. The fine-tuned Large NRX with N_FT=10^3 closely approaches the K-Best baseline with LMMSE channel estimation.
§ CONCLUSION
We have proposed solutions for deploying of NRX in real-world 5G NR systems: (i) adaptability for operation with variable and mixed MCS conditions, (ii) a real-time architecture that meets the latency requirements of practical software-defined 5G NR systems, and (iii) generalized NRX pre-training with significant performance gains through site-specific fine-tuning, requiring only a few thousand iterations and data samples.
We leave fine-tuning on real-world measurement datasets, as well as a system-level performance evaluation, for future work.
[o2017introduction] T. O'Shea and J. Hoydis, "An Introduction to Deep Learning for the Physical Layer," IEEE Trans. Cognitive Commun. Netw., vol. 3, no. 4, pp. 563–575, 2017.
[honkala2021deeprx] M. Honkala, D. Korpi, and J. M. Huttunen, "DeepRx: Fully Convolutional Deep Learning Receiver," IEEE Trans. Wireless Commun., vol. 20, no. 6, pp. 3925–3940, 2021.
[aoudia2021end] F. Aït Aoudia and J. Hoydis, "End-to-end learning for OFDM: From Neural Receivers to Pilotless Communication," IEEE Trans. Wirel. Commun., vol. 21, no. 2, pp. 1049–1063, 2021.
[ye2017power] H. Ye, G. Y. Li, and B.-H. Juang, "Power of Deep Learning for Channel Estimation and Signal Detection in OFDM Systems," IEEE Wireless Commun. Lett., vol. 7, no. 1, pp. 114–117, 2017.
[lin2023artificial] X. Lin, "Artificial Intelligence in 3GPP 5G-Advanced: A Survey," arXiv:2305.05092, May 2023.
[cammerer2023neural] S. Cammerer, F. Aït Aoudia, J. Hoydis, A. Oeldemann, A. Roessler, T. Mayer, and A. Keller, "A Neural Receiver for 5G NR Multi-user MIMO," in Proc. IEEE Globecom Workshops, Mar. 2023, pp. 329–334.
[korpi2021deeprx] D. Korpi, M. Honkala, J. M. Huttunen, and V. Starck, "DeepRx MIMO: Convolutional MIMO Detection with Learned Multiplicative Transformations," in Proc. IEEE Int'l Conf. Commun. (ICC), Jun. 2021.
[fischer2022adaptive1] M. B. Fischer, S. Dörner, F. Krieg, S. Cammerer, and S. ten Brink, "Adaptive NN-based OFDM receivers: Computational complexity vs. achievable performance," in Proc. IEEE Conf. Rec. Asilomar Conf. Signals, Sys., and Comp., IEEE, Oct. 2022, pp. 194–199.
[Uzlaner2024dynamic] N. Uzlaner, T. Raviv, N. Shlezinger, and K. Todros, "Asynchronous Online Adaptation via Modular Drift Detection for Deep Receivers," arXiv:2407.09134, Jul. 2024.
[hoydis2023sionna] J. Hoydis, F. Aït Aoudia, S. Cammerer, M. Nimier-David, N. Binder, G. Marcus, and A. Keller, "Sionna RT: Differentiable Ray Tracing for Radio Propagation Modeling," in Proc. IEEE Globecom Workshops, Mar. 2023, pp. 317–321.
[dahlman20205g] E. Dahlman, S. Parkvall, and J. Skold, 5G NR: The Next Generation Wireless Access Technology. Academic Press, 2020.
[38214] ETSI, "ETSI TS 138 214 V16.2.0: Physical layer procedures for data," Tech. Rep., Jul. 2020.
[9298921] K. Pratik, B. D. Rao, and M. Welling, "RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection," IEEE Trans. Signal Process., vol. 69, pp. 459–473, 2021.
[nachmani2016learning] E. Nachmani, Y. Be'ery, and D. Burshtein, "Learning to Decode Linear Codes Using Deep Learning," in Proc. IEEE Ann. Allerton Conf. Commun., Contr., and Comput., Sep. 2016, pp. 341–346.
[gansekoele2024machine] A. Gansekoele, A. Balatsoukas-Stimming, T. Brusse, M. Hoogendoorn, S. Bhulai, and R. van der Mei, "A Machine Learning Approach for Simultaneous Demapping of QAM and APSK Constellations," arXiv:2405.09909, May 2024.
[studer2010soft] C. Studer and H. Bölcskei, "Soft–Input Soft–Output Single Tree-Search Sphere Decoding," IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 4827–4842, Oct. 2010.
arXiv:2409.02621v1 [cond-mat.mes-hall], submitted 4 September 2024
[email protected]
Zernike Institute for Advanced Materials, University of Groningen, Groningen, the Netherlands
§ ABSTRACT
Magnon based spintronic devices require the modulation of magnon spin transport for their operations. We provide a proof-of-principle of two unconventional non-local magnon transport devices in which we modulate the diffusive magnon transport of incoherent magnons in the van der Waals antiferromagnet Chromium thiophosphate, CrPS4. The non-local signals generated electrically by spin injection via the spin Hall effect (SHE) and thermally via the spin Seebeck effect (SSE) are altered by a modulator electrode. The current through the modulator increases or decreases the magnon chemical potential via the SHE and changes the magnon temperature through Joule heating. We achieve up to η^1ω_SHE=25%/mA and η^2ω_SHE=16%/mA modulation efficiencies for the electrically and thermally generated magnon spin transport, respectively, for CrPS4 in the collinear state at in plane fields >7T at a temperature of 25K.
Gate control of magnon spin transport in unconventional magnon transistors based on the van der Waals antiferromagnet CrPS4
Dennis K. de Wal, Raul Luna Mena, Muhammad Zohaib, and Bart J. van Wees
September 9, 2024
§ INTRODUCTION
In information processing technology, the encoding, transport, and manipulation of information are crucial for its operation. Spin-wave (magnon) based computing has been shown to be a very suitable alternative to CMOS-based electronics for encoding<cit.> and transport in three-dimensional ferro-, ferri-, and antiferromagnets<cit.>. Significant efforts have also been made to investigate the novel two-dimensional (2D) van der Waals magnetic materials, with regard to thermally generated<cit.> and electrically generated magnon transport<cit.>. Yet, electrical control over and manipulation of magnon signals in these systems remains a challenge. In Yttrium Iron Garnet (YIG), the "workhorse" of magnonics, a magnon transistor in a conventional three-terminal non-local geometry has been demonstrated and explored, showing that the magnon spin conductivity can be modulated both electrically and thermally<cit.>. In this geometry, strong signal modulation of up to 40%/mA is achieved<cit.>.
Efficient and scalable control over magnon spin transport in 2D van der Waals materials is also crucial for achieving a controllable 2D magnon gas. In the van der Waals magnets advances have been made in magnon valves, based on the non-local geometry, in MnPS3<cit.> and CrPS4<cit.>. Nevertheless, for both these experiments the magnon currents are generated thermally, by the SSE, for which only convoluted information about the magnon transport properties, such as magnon relaxation length and magnon conductivity, can be obtained. Also, in these works, neither via the SHE injected magnon transport, nor modulation of the magnon transport via the SHE, are reported. Moreover, the reported 'off' state, explained as the zero-crossing of the non-local spin Seebeck effect (SSE) voltages at specific gate dc-currents is, in fact, the sign change in magnon chemical potential at the detector as a function of thermal gradient, which is observed in YIG as a function of temperature, Joule heating and injector-detector spacing<cit.>. Although the proof of principle for these thermally controllable magnon transistors in <cit.> and <cit.> is highly relevant, only thermal control over the thermally generated magnon spin current, driven via Joule heating, is shown and not via the SHE.
“All-electrical” non-local magnon transport, where the magnon spin is injected via the SHE, has been shown in the 2D van der Waals antiferromagnet CrPS4<cit.>, making this material very suitable for fully electrical (via the SHE) modulation of magnon spin transport. In this work, we demonstrate and explain the working of a magnon transistor based on CrPS4, similar to the magnon transistor on YIG<cit.>. We elaborate on the effects on the magnon conductivity of electrical gating via the SHE and thermal gating via both injection of heat and through the SSE. which affect both the electrically injected and thermally injected magnon currents. Furthermore, we propose a two-terminal and an unconventional three-terminal non-local geometry instead of using the conventional three-terminal with the gate in the middle, to achieve a greater tunability of the magnon current.
§ EXPERIMENTAL CONCEPTS
For electron transport in metals and semiconductors, the electron conductivity (σ_e) depends on the free electron density (n_e). This Drude model for electrons follows: σ_e=e^2n_eτ_e/m_e, where e, m_e, and τ_e are the electron charge, effective mass, and the scattering time, respectively<cit.>. For magnons in a system at finite temperature (thermal equilibrium magnons) we can define a similar relation. For out-of-equilibrium magnons, such as electrically injected and thermally generated (SSE) magnons, the magnon spin conductivity becomes:
σ_m=ħn_mτ_m/m_m,
where n_m is the magnon density, which depends on both the magnon chemical potential as well as the temperature, τ_m is the magnon scattering time and m_m=ħ^2/(2J_S) is the effective mass with J_S as the spin wave stiffness<cit.>. Therefore, σ_m can be directly tuned via n_m.
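For illustration only, the relation above can be coded up directly together with the effective-mass relation m_m = ħ²/(2J_S). The numerical inputs in the sketch below are placeholders, since the paper does not quote values of the magnon density, scattering time, or spin-wave stiffness for CrPS4.

```python
from scipy.constants import hbar

def magnon_conductivity(n_m, tau_m, J_S):
    """sigma_m = hbar * n_m * tau_m / m_m, with effective mass m_m = hbar^2 / (2 * J_S)."""
    m_m = hbar**2 / (2.0 * J_S)
    return hbar * n_m * tau_m / m_m

# placeholder inputs chosen only to show the structure/units, not CrPS4 values:
sigma_m = magnon_conductivity(n_m=1e25, tau_m=1e-12, J_S=1e-40)
```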
In transistors based on magnons, spin is carried by magnons. A current through a gate contact affects the magnon transport in three ways: 1. The current generates an electronic spin accumulation (μ_mod) at the Pt/CrPS4 interface via the SHE. By the transfer of spin from the gate contact to the magnetic insulator, the magnon chemical potential μ_m is enhanced or depleted, provided the magnetization of CrPS4 is collinear with μ_mod. Since a change in μ_m changes n_m, it also changes σ_m between the injector and detector contacts. 2. The gate current generates heat by Joule heating; this alters the magnon temperature in the area of the CrPS4 flake in proximity to the gate, changing n_m via T. 3. Moreover, the increased temperature also creates thermal gradients in the sample. These thermal gradients drive magnon currents via the SSE and can therefore lead to a change in μ_m.
We studied two Pt/CrPS4 heterostructures (devices D1 and D2), in which the CrPS4 flakes are exfoliated from bulk crystals of CrPS4 (HQgraphene). The 7 nm thick Pt electrodes are sputtered on top of a ∼100 nm thick flake. The edge-to-edge contact spacing for the devices varies from 270 nm to 1400 nm (see figure <ref>c), and the contacts have equal width (∼300 nm). The length of the Pt strips is 20-40 μm. The Pt strips are contacted with Ti/Au leads to make electrical connections to the device. Angular dependent magnetoresistance (ADMR) measurements are performed for this non-local geometry as a function of the in-plane angle α of the applied magnetic field w.r.t. the Pt strips (see figure <ref>). A low-frequency (ω/(2π)<20 Hz) ac-current I=I_0 sin ω t is applied to the injector Pt strip. The first (V^1ω) and second (V^2ω) harmonic voltage responses are measured at the detector Pt strip. All CrPS4 flakes are exfoliated from the same bulk crystal.
First considering the conventional three-terminal magnon transistor, as given in figure <ref>a, we explore the effects of the modulation on the non-local magnon spin transport. The non-local voltages V^1ω_nl and V^2ω_nl in the detector correspond to the electrically and thermally generated magnon transport excited by the ac-current (I_AC) in the injector. Additionally, a modulating dc-current (I_DC) is applied to the gate electrode, affecting the magnon spin transport.
V^1ω and V^2ω will not be offset by I_DC, as we employ the lock-in method. We can summarize the effect on both V^1ω and V^2ω by:
V^1ω =C_1I_ACσ_m(α)cos^2(α),
V^2ω =C_2I^2_ACσ_m(α)cos(α),
where C_1 and C_2 are the constants capturing the conversion of the charge currents to spin currents in the injector and detector<cit.>, for the electrical and thermal injection, respectively. The magnon conductivity σ_m depends on I_DC and is given by:
σ_m(α) = σ_m^0 + Δσ_JI^2_DC + Δσ_SHEI_DCcos(α).
Here σ_m^0 is the spin conductivity without any modulation by I_DC, Δσ_J is the efficiency of modulation by Joule heating, and Δσ_SHE is that for the magnons injected by the SHE. For the latter, the injection depends on the collinearity of μ_mod and the net magnetization of CrPS4 (𝐦=(𝐦_1+𝐦_2)/2, where 𝐦_1,2 are the sub-lattice magnetizations) via the SHE, as is the case for equations <ref> and <ref>.
Substituting σ_m in equations <ref> and <ref>, we arrive at the following responses:
V^1ω =A^1ωcos^2(α) + B^1ωcos^3(α),
V^2ω =A^2ωcos(α) + B^2ωcos^2(α),
for which the Joule heating affects the amplitudes A^1ω(2ω), scaling with I_DC^2, and the injected magnons via the SHE modify the amplitude B^1ω(2ω), scaling with I_DC.
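In practice, the A and B amplitudes are obtained by fitting the measured angular sweeps to the two expressions above. The sketch below shows such a decomposition with a standard least-squares fit on synthetic data; the symmetrization around α = 90° used in the paper is omitted here for brevity, and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def v1w_model(alpha, A, B, D):
    c = np.cos(alpha)
    return A * c**2 + B * c**3 + D * c     # D-term only when injector and gate coincide

def v2w_model(alpha, A, B):
    c = np.cos(alpha)
    return A * c + B * c**2

alpha = np.linspace(0.0, 2.0 * np.pi, 73)
v2w_synth = v2w_model(alpha, 100e-9, 10e-9) + 2e-9 * np.random.randn(alpha.size)
(A_fit, B_fit), _ = curve_fit(v2w_model, alpha, v2w_synth)
```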
§ RESULTS
For our experiment employing the conventional three-terminal magnon transistor (as given in Fig. <ref>a), we could not observe a signal for V^1ω_nl above the noise level of typically 5 nV RMS. This is caused by two factors: Firstly, the injector-detector distance is comparable to the magnon relaxation length λ_m, meaning the magnons can already decay before they reach the detector. Secondly, the Pt gate contact in between the injector and detector contacts functions as a spin sink, absorbing magnon spin by spin-flip scattering in the Pt at the Pt/CrPS4 interface. Therefore, we combined the injector and gate into one contact, making a two-terminal transistor (see Fig. <ref>b), with an injector-detector spacing d=340 nm. As the gate affects the magnon density over a distance λ_m from the gate, the working principle of a two-terminal magnon transistor is similar to that of a three-terminal magnon transistor as long as d<λ_m. However, in the two-terminal magnon transistor the coinciding injector and gate contact gives rise to an additional I_AC I_DC cos α cross term for V^1ω compared to equation <ref>, with D being a prefactor (see SI):
V^1ω=A^1ωcos^2(α) + B^1ωcos^3(α) + Dcos(α).
In figure <ref>, the non-local voltages are shown as a function of the in-plane angle α for an injector current I_AC = 60 μA and dc-currents of I_DC = -50 μA and I_DC = -100 μA, at a field of B = 7.5 T and T = 25 K. In <ref>a,c the symmetrized (antisymmetrized) non-local voltage responses for V^1ω (V^2ω) are shown, where the (anti)symmetrization is performed around α=90° (see SI). The fits correspond to the A^1ω (A^2ω) term in equation <ref> (<ref>). In figure <ref>b,d the antisymmetrized (symmetrized) voltage responses for the first (second) harmonic are given. The fits correspond to the (Dcosα) term.
In figure <ref>b the Dcosα cross term dominates over the B^1ωcos^3α term. Extraction of B^1ω only yielded values for B^1ω>A^1ω, which is nonphysical.
Yet, in figure <ref>d the effect of I_DC via the SHE is clearly observed as the B^2ωcos^2α term (the latter part of equation <ref>). As I_DC affects the magnon density via the SHE, we expect the same modulation via the SHE in the first harmonic signal B^1ω as in the second harmonic signal B^2ω.
Both A^1ω and A^2ω are given as a function of I_DC in figure <ref>a and <ref>b for I_DC between -200μA and +200μA. Fit results for A^1ω and A^2ω are indicated by the dashed lines and show a quadratic dependence on I_DC. The sign of the gate current dependence of A^1ω and A^2ω is equal, indicating that both the electrically and thermally generated magnon spin transport are reduced by an increased magnon temperature. In contrast to the magnon gate on YIG, in which the thermally generated magnon transport is enhanced by the enhanced temperature due to Joule heating by the modulator<cit.>, we only see a decrease in V^2ω_nl. Figure <ref>c, the fit results for B^2ω, shows a linear dependence on I_DC as expected. The slope dB^2ω/dI_DC expresses the modulation efficiency by the SHE injection by I_DC. At 25 K for a magnetic field of 7.5 T, we find dB^2ω/dI_DC=13±2 nV/mA for I_AC=60 μA and dB^2ω/dI_DC=5±2 nV/mA for I_AC=40 μA. These values are comparable to the values measured in the three-terminal magnon transistor on YIG<cit.>. Comparing the modulation to the zero-gating (I_DC=0) signals, where B^1ω(2ω) = 0, we can extract the relative efficiency of modulation:
η_SHE = dB^1ω(2ω)/dI_DC/A^1ω(2ω)_0,
where A^1ω(2ω)_0=A^1ω(2ω) (I_DC=0). We find η_SHE=13±2%/mA for the second harmonic for I_AC=60μ A and η_SHE=5±2%/mA for I_AC=40μ A.
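The definition above can be used as a quick consistency check on the quoted numbers: combining dB^2ω/dI_DC ≈ 13 nV/mA with η_SHE ≈ 13%/mA implies a zero-gate amplitude A^2ω_0 of roughly 100 nV. This is only a rearrangement of the definition, not a value quoted in the paper.

```python
dB_dIdc = 13e-9 / 1e-3      # dB^2w/dI_DC in V/A (13 nV/mA; I_AC = 60 uA, 7.5 T, 25 K)
eta_she = 0.13 / 1e-3       # 13 %/mA expressed per ampere
A0 = dB_dIdc / eta_she      # implied zero-gate amplitude A^2w_0
print(A0)                   # ~1e-7 V, i.e., about 100 nV
```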
The sign of dB^2ω/dI_DC is consistent, as a positive (negative) dc-current via the SHE corresponds to an increase (decrease) in μ_m and therefore in n_m (see Fig. <ref>d and Fig. <ref>c).
In short, the use of the conventional three terminal magnon transistor geometry for CrPS4 is restricted by the relatively short λ_m together with the middle gate contact functioning as a spin sink. However, by exploring a two terminal transistor, where the injector and gate coincide, we show clear modulation by both changing n_m by the SHE as well as by Joule heating. Yet, for the first we can only observe it for the thermally generated non-local magnon transport signal (V^2ω_nl).
In addition to this we measured a second device D2. As a proof of principle we investigated another unconventional magnon transistor geometry, where the injector-detector contacts are directly adjacent and the gate contact is located on the opposite side of the detector compared to the injector. This is illustrated in figure <ref>c. Both the electrically and thermally generated magnon transport channels can be modulated by the gate contact as the injected magnon spin and heat can diffuse towards the magnon transport channels, even though the gate contact is outside these channels, as long as d_g≪λ_m (where d_g is the distance between the detector and gate). In figure <ref>a and <ref>b the first and second harmonic non-local voltages at an in plane field of 10 T in the y-direction at 25 K, with I_AC=100μA for device D2, are shown as a function of I_DC with the modulator strip as depicted in figure <ref>c. The edge-to-edge distance d between the injector and detector contact is 270 nm and the distance d_g between the detector and gate contact is 480 nm. Both d≪λ_m and d_g≪λ_m, where λ_m is the magnon relaxation length of CrPS4. At 10 T the sub-lattice magnetizations of CrPS4 are saturated in plane and aligned with the magnetic field (see <cit.>). The first harmonic voltage response is fitted (solid red line in Fig. <ref>a) and shows a quadratic dependence of A^1ω on I_DC, as is the case for device D1 (Fig. <ref>a).
In contrast, the second harmonic voltage response, given in figure <ref>b, shows a very different modulation as a function of I_DC. To our surprise, V^2ω_nl is strongly enhanced by I_DC, by up to one order of magnitude at I_DC≈|370 μA|, while at larger I_DC, V^2ω_nl decreases. This differs from what has been observed in conventional three-terminal magnon valve systems on MnPS3<cit.> and CrPS4<cit.>, and even in the two-terminal geometry in device D1 (Fig. <ref>b), where only a strong suppression of V^2ω_nl is observed, and at sufficiently large I_DC the zero crossing is realized. Although the temperature dependence of the non-local SSE in CrPS4 is not fully understood, such a strong enhancement of the nl-SSE voltage is at least striking and possibly opens up a completely new route towards the control of magnon spin transport. The absence of this enhanced transport in V^1ω_nl shows that the enhancement in V^2ω_nl is likely a combined effect of changing n_m via the temperature and via the SSE, both originating from the Joule heating by I_DC.
Lastly, we compared the effect of I_DC via the SHE on the B parameters in equations <ref> and <ref>. In figure <ref>d the dependence of B^1ω(2ω) for the first harmonic (second harmonic) at an external field of 12 T is plotted as a function of I_DC (extracted from field-dependent measurements, see SI). The fit shows a linear dependence on I_DC, which is consistent with the results on device D1, although the modulation efficiency dB/dI_DC achieved here is larger. Moreover, we found that dB/dI_DC depends on the magnitude of the applied magnetic field, which is likely caused by the not fully saturated V^1ω_nl at larger I_DC, due to the Joule heating, and by the not yet understood behavior of the nl-SSE as a function of temperature (see SI). For the efficiency of modulation at 12 T in device D2, we found η^1ω_SHE=25%/mA (η_SHE^2ω=16%/mA) for the first (second) harmonic. These values are slightly larger than those found for device D1, which is likely caused by the stronger magnetic field. Compared to the modulation efficiencies found in YIG, the values here are 3-5 times larger for device D1 and 5-8 times larger for device D2<cit.>.
§ CONCLUSION
Summarizing, we demonstrate control over the magnon spin transport in two magnon transistors with unconventional non-local geometries.
In these transistors we modulated the electrically and thermally generated magnon spin currents, by using a Pt gate contact that injects magnon spins via the SHE, and by altering the magnon temperature and driving the SSE, both through Joule heating. Moreover, we separate the effects of the modulation on the electrically and the thermally generated magnon spin currents. In device D1, a two-terminal transistor where the injector and gate contact coincide, we find a modulation efficiency by the SHE of η_SHE^2ω=13±2%/mA for the thermally generated magnon current at 7.5 T and 25 K. In device D2, a three-terminal magnon transistor with the gate electrode on the opposite side of the detector with respect to the injector, we find modulation efficiencies by the SHE of η^1ω_SHE=25%/mA and η^2ω_SHE=16%/mA at 12 T and 25 K, for the electrically and thermally generated magnon currents, respectively. These values are 3-8 times larger than the modulation efficiencies found in YIG<cit.>. Moreover, we show that by altering the temperature of the device the thermally generated magnon current can be enhanced by one order of magnitude. These results pave the way for the valorization of magnon spin transport in antiferromagnets in technological applications and contribute to our understanding of controlling magnon spin transport.
We acknowledge the technical support from J. G. Holstein, H. Adema, H. H. de Vries, and F. H. van der Velde. We acknowledge the financial support of the Zernike Institute for Advanced Materials and the European Union’s Horizon 2020 research and innovation program under Grant Agreements No. 785219 and No. 881603 (Graphene Flagship Core 2 and Core 3). This project is also financed by the NWO Spinoza prize awarded to B.J.W. by the NWO and has received funding from the European Research Council (ERC) under the European Union’s 2DMAGSPIN (Grant Agreement No. 101053054).
arXiv:2409.02168v1 [astro-ph.GA, astro-ph.CO], submitted 3 September 2024
Université Paris Cité, CNRS(/IN2P3), Astroparticule et Cosmologie, F-75013 Paris, France [email protected]
Jet Propulsion Laboratory and Cahill Center for Astronomy & Astrophysics, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91011, USA
Center for Astrophysics — Harvard and Smithsonian, 60 Garden Street MS09, Cambridge, MA 02138, USA
Sternberg Astronomical Institute, Moscow M.V. Lomonosov State University, Universitetsky pr., 13, Moscow, 119234, Russia
INAF-Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, 50125, Florence, Italy
Department of Astronomy, University of Florida, 211 Bryant Space Center, Gainesville, FL 32611, USA
School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, UK
Department of Physics, University of California, One Shields Avenue, Davis, CA 95616, USA
Astronomisches Rechen-Institut, Zentrum fur Astronomie der Universitat Heidelberg, Monchhofstr. 12-14, 69120, Heidelberg, Germany
Galaxy clusters, being the largest gravitationally bound structures in the Universe, are a powerful tool to study mass assembly at different epochs. At z>2 they give a unique opportunity to put solid constraints not only on dark matter halo growth, but also on the mechanisms of galaxy quenching and morphological transformation when the Universe was younger than 3.3 Gyr. However, the currently available sample of confirmed z>2 clusters remains very limited. We present the spectroscopic confirmation of the galaxy cluster CARLA J0950+2743 at z=2.363±0.005 and a new serendipitously discovered cluster, CARLA-Ser J0950+2743 at z=2.243±0.008, in the same region. We confirm eight star-forming galaxies in the first cluster, and five in the second, by detecting [Oii], [Oiii] and Hα emission lines. The analysis of a serendipitous X-ray observation of this field from Chandra reveals a counterpart with a total luminosity of L_0.5-5 keV = 2.9±0.6×10^45 erg s^-1. Given the limited depth of the X-ray observations, we cannot distinguish the 1-D profile of the source from a PSF model; however, our statistical analysis of the 2-D profile favors an extended component that could be associated with a thermal contribution from the intra-cluster medium (ICM). If the extended X-ray emission is due to the hot ICM, the total dark matter mass for the two clusters would be M_200=3.30 ^+0.23_-0.26 (stat) ^+1.28_-0.96 (sys)×10^14 M_⊙. This makes our two clusters interesting targets for studies of structure growth in the cosmological context. However, future investigations would require deeper high-resolution X-ray and spectroscopic observations.
Spectroscopic confirmation of the galaxy clusters CARLA J0950+2743 at z=2.363, and CARLA-Ser J0950+2743 at z=2.243
Kirill A. Grishin1
Simona Mei1,2
Igor V. Chilingarian3,4
Marika Lepore5
Paolo Tozzi5
Anthony Gonzalez6
Nina Hatch7
Spencer A. Stanford8
Dominika Wylezalek9
§ INTRODUCTION
Galaxy clusters are the largest gravitationally bound structures in the Universe, which makes them direct probes not only of structure growth, but also of environmentally driven galaxy evolution and transformation processes <cit.>. Galaxy cluster studies have important cosmological implications, as they can constrain baryon processes and mass assembly channels at different epochs <cit.>.
The upcoming deep wide-field sky surveys such as the Rubin Legacy Survey of Space and Time <cit.>, Euclid <cit.>, and the Nancy Grace Roman Telescope <cit.> as well as sub-mm surveys like CMB-S4 <cit.>, will provide the opportunity to systematically study properties of galaxy clusters and their individual members out to z∼2-3 <cit.>.
However, the currently available sample of known galaxy clusters at z>2 is too small to understand their statistical properties crucial for the preparation for future surveys.
Deep observations with the Hubble and James Webb Space Telescopes have opened a new discovery space for galaxy clusters and proto-clusters at z>2 <cit.>. At the same time, ground-based facilities still play an important role in the identification and confirmation of candidate clusters and proto-clusters <cit.>.
Of those detections, only two have been confirmed as clusters with intra-cluster medium (ICM) at z>2: J1449-0856 at z=2.07 <cit.> and J1001+0220 at z=2.51 <cit.>.
In this letter, we present (i) the spectroscopic confirmation of a cluster candidate identified by the Clusters Around Radio-Loud Active Galactic Nuclei (AGN) survey <cit.>, CARLA J0950+2743 at z=2.363±0.005, and (ii) the identification of a new cluster, CARLA-Ser J0950+2743 at z=2.243±0.008, in the same area of the sky and superposed on the first.
An archival Chandra dataset reveals an extended X-ray counterpart that is consistent with the existence of an ICM, either in the form of hot X-ray emitting gas or as a result of inverse Compton scattering of the cosmic microwave background off the radio lobes of a radio-loud quasar.
Throughout this paper, we adopt the Λ CDM cosmology, with Ω_M =0.3, Ω_Λ =0.7, h=0.72, and σ_8 = 0.8. In our X-ray analysis, we use the widely-used self-similar evolution with E^2(z), which is also consistent with observations <cit.>. All magnitudes are given in the AB system <cit.>.
§ OBSERVATIONS
§.§ Existing Spitzer Space Telescope and ground-based observations
CARLA J0950+2743 is a cluster candidate around an AGN at z=2.36 from the CARLA survey <cit.>, whose main goal was the identification of galaxy cluster candidates around radio-loud AGN at z>1.3 by selecting galaxy overdensities with Spitzer IRAC 3.6μm (hereafter IRAC1) - IRAC 4.5μm (hereafter IRAC2) color using following criteria:
(IRAC1-IRAC2)>-0.1 <cit.>. 46% and 11% of the CARLA fields are overdense at the 2σ and 5σ levels, respectively, with respect to the field surface density of NIR sources in the UKIDSS Ultra Deep Survey <cit.>.
CARLA J0950+2743 shows a galaxy overdensity at ≳ 3σ with respect to the field <cit.>. The Spitzer IRAC1 and IRAC2 images were obtained over a common 5.2 × 5.2 arcmin^2 field of view with total exposure times of 1000 s and 2100 s, and point spread function (PSF) FWHMs of 1.95 and 2.02 arcsec, yielding 95% completeness limits of 22.6 mag and 22.9 mag, respectively <cit.>.
This cluster candidate was also observed in the i'-band with the ACAM camera at the 4.2 m William Herschel Telescope with a total integration of 7200 sec (PI: N. Hatch). The image has a sampling of 0.25 arcsec pix^-1, and the atmospheric seeing was 1.31 arcsec FWHM. The 5-σ i-band limiting magnitude is 24.92 mag <cit.>.
The CARLA J0950+2743 AGN was spectroscopically observed three times by the Sloan Digital Sky Survey <cit.> and Extended Baryon Oscillation Spectroscopic Survey <cit.>. The average redshift measurement is z_AGN = 2.354 ± 0.004 based on measurements from one SDSS <cit.> and two eBOSS spectra <cit.>[<https://rcsed2.voxastro.org/data/galaxy/3038340>].
§.§ New spectroscopic observations
We observed CARLA J0950+2743 with the 6.5m converted Multiple Mirror Telescope (MMT) using the MMT and Magellan InfraRed Spectrograph <cit.>.
To select galaxy targets, we first built a multi-wavelength catalog (IRAC1, IRAC2, and ACAM i-band) using SExtractor <cit.> in multiple detection mode, using IRAC1 as the detection image. Then, we selected galaxy candidates at z>2, applying a cut in the (i-IRAC1) vs IRAC1 color–magnitude diagram following <cit.>.
Using the MMIRSMask web-tool[<https://scheduler.mmto.arizona.edu/MMIRSMask/index.php?>] we designed the two slit masks C0950p27 (hereafter mask1) and C09p27_2 (hereafter mask2), each covering a rectangular area of the sky of 4.0×6.9 arcmin with 6 arcsec long slitlets.
The slitlets were 0.8 arcsec, and 0.5 arcsec wide, for mask1 and mask2, respectively, and matched the average seeing quality during observations in the K_s band (0.7 and 0.55 arcsec, respectively).
We used the HK grism with the HK3 cutoff filter that covers the wavelength range 1.25–2.35 μm (50% transmission limits) at the spectral resolving power R∼1400 and R∼ 1700 for the 0.8 arcsec and 0.5 arcsec slits, respectively. The selected setup covers AGN restframe Hβ and [Oiii] emission lines in the H band, and Hα in the K band. mask1 was observed on April 3, 6, and 8; and May 12, 2023, with a total integration time of 7 h. mask2 was observed on April 9 and 10, 2023, with a total integration time of 5 h. mask1 was also observed on May 7–8 2023 with 4 h of total integration in the J/zJ setup, which covers the range 0.949 - 1.500 μm (50% transmission limits) at the spectral resolving power R∼ 2200 and includes the [Oii] doublet at z∼2.3.
Further details about spectroscopic observations and data reduction are provided in Appendix <ref>.
§.§ Archival Chandra X-ray data
The CARLA J0950+2743 region was serendipitously observed by Chandra with the ACIS instrument on January 17, 2010 with an integration time of 8.2 ksec (dataset ID: 11376, PI: E. Gallo, target: PGC028305). This dataset unveils an X-ray source close to the center of the galaxy overdensity (Fig. <ref>) in the ACIS-S0 detector, which is ∼15 arcmin away from the aimpoint.
We measured the total flux of the X-ray counterpart as 6.77±1.51×10^-14 erg s^-1 cm^-2 in the 0.5-5 keV energy band in a circular aperture of radius 27 arcsec (220 kpc). The analysis of its 1-D profile did not allow us to securely distinguish it from a point source, given the small number of photons. At the same time, a Kolmogorov-Smirnov test of the observed distribution of photons shows that the data are consistent with a point-source distribution only at a confidence level of p=0.0037, which confirms that the X-ray source has an extended component in addition to a point source corresponding to the AGN. This is because this test is more sensitive to the 2-D distribution of the signal, e.g. to differences in ellipticity.
Further technical details about the analysis of the X-ray data are provided in Appendix <ref>.
We discuss possible sources of the extended emission other than the ICM in Section <ref>.
We conclude that the observed X-ray source is likely a combination of the AGN and an extended component, and that a more precise analysis of the source shape would require deep high-resolution observations.
§ RESULTS AND DISCUSSION
§.§ CARLA J0950+2743 spectroscopic confirmation and the serendipitous discovery of a new cluster at z=2.24
We extract emission line fluxes using the optimal extraction method, described in <cit.>. For each individual emission line we modelled its 2-D profile with a single 2-D Gaussian, and divided it by the error frame. We then extracted fluxes and uncertainties using a 2-D Gaussian weighting. Our measurements are shown in Fig. <ref> and Table <ref>.
Following <cit.>, we spectroscopically confirmed CARLA J0950+2743 using the <cit.> criteria for z > 1 clusters, i.e., having at least 5 galaxies within ±2000 (1 + z) km s^-1 of the AGN redshift and within a physical radius of 2 Mpc. In fact, we identified eight galaxies satisfying these criteria with H_α detections (SNR > 8), of which six also show other prominent emission lines. The shorter J/zJ setup integration time led to lower SNR in the [Oii] 3727Å detections, which might also be affected by stronger dust extinction.
We measure a cluster redshift of z=2.363±0.005, as the average of the redshifts of all spectroscopically confirmed members.
We also discovered a foreground structure at z=2.243±0.008 (155 Mpc co-moving distance from CARLA J0950+2743), consistent with the <cit.> criteria. This cluster, which we name CARLA-Ser J0950+2743 following <cit.>, presents five galaxies with Hα emission, and three that also show [OIII] emission (Fig. <ref> and Table <ref>). Given that only three galaxies show multiple emission lines, this confirmation and cluster classification should be validated with further observations.
§.§ Cluster mass constraints from X-ray data assuming a hot ICM
The estimated flux in the 0.5–5 keV (see Fig. <ref>) band corresponds to the observed luminosity of L_0.5-5 keV = 2.9 ± 0.6 ×10^45 erg s^-1. In this study we denote L_0.5-5 keV as an X-ray luminosity in 0.5-5 keV observed band, which corresponds to 1.7-17 keV restframe at z=2.36.
Given the small number of detected photons, and hence large error bars, the shape of the X-ray spectrum is consistent with a wide range of possible components, including a power-law component with Γ=2, typical for AGN, and all possible variations of ICM bremsstrahlung emission. We expect the X-ray spectrum to be a combination of these components. However, the limited depth of the dataset prevents us from a more precise decomposition. Hereafter, we choose a bremsstrahlung model for the parametrization of the X-ray spectral shape, but we verified that the choice of other models, or their combination, does not have a significant effect on the results. The use of a pure bremsstrahlung model delivers the most conservative estimate of the k-correction range, given that the k-correction for a power law with Γ=2 is 1.
To estimate the k-correction, we modeled the observed X-ray spectrum with XSpec <cit.> using the model apec that corresponds to “free-free” transitions, which is a dominating regime of emission in sparse hot plasma in galaxy clusters <cit.>. This modelling resulted in an electron temperature estimate of T_X = 5.77_-1.09^+10.2 keV. This temperature corresponds to a k-correction of k_X=0.612 that is needed to convert the observed flux into the restframe. Using the Sherpa tool <cit.>, we estimate L_0.2-2 keV = 1.7 ± 0.4 ×10^45 erg s^-1 and L_X = 4.5 ± 1.0 ×10^45 erg s^-1[Hereafter, with L_X we denote “bolometric” luminosity following <cit.>].
If we assume a redshift of z=2.363, according to the L_X–M_2500 relation <cit.>, the luminosity estimate corresponds to a total dark matter mass M_2500 = 0.85 ± 0.07_(stat.)± 0.26_(sys.)× 10^14 M_⊙, considering the 1σ scatter in the relation, which is 0.1 dex (added to the systematic uncertainty). We also estimate the mass iteratively, using two scaling relations – the M-T and the T-L relations; this allows us to avoid uncertainties related to k-corrections, given that we do not need to estimate the k-correction from observations, but only from the temperature using the M-T scaling relation. This approach yielded M_2500 = 0.80 ± 0.25 × 10^14 M_⊙, which is very close to the mass estimate that uses k-corrections.
Considering the transformations calibrated on the Magneticum cosmological hydrodynamic simulation [<https://c2papcosmosim.uc.lrz.de/static/hydro_mc/webapp/index.html>] <cit.>, we converted the measurement of M_2500 to M_200=3.30 ^+0.23_-0.26 (stat) ^+1.28_-0.96 (sys)×10^14 M_⊙. If we instead use a redshift of z=2.243, our mass estimate does not change significantly, given that the difference in luminosity distances results in a luminosity ratio of 1.136. In fact, given the power-law index of the L_X-M_2500 relation of 0.305, the mass estimate would be 8% lower.
We use T_X = 5.77_-1.09^+10.2 keV to estimate the k-correction, therefore its uncertainty contributes to the systematic uncertainty of the mass estimate. In fact, for the 3-σ lower limit of the temperature (T_X=2.50 keV), the k-correction is k_X=1.43 yielding M_200= 3.76×10^14 M_⊙. At the same time, the 3σ upper limit for T_X of 35 keV is not physically possible, because the highest values observed in galaxy clusters are of the order of T_X∼10 keV yielding k_X=0.414 that corresponds to M_200=3.25×10^14 M_⊙.
Considering both statistical and systematic errors, which are independent, we conclude that, if all the X-ray emission would be due to an extended ICM emission, it corresponds to a total dark matter mass of M_200≈ 3.0-3.3^+0.23_-0.26 (stat) ^+1.28_-0.96 (sys) ×10^14 M_⊙ for the two clusters.
Since our observations do not permit us to confirm a thermal emission, a discussion about alternative explanations for the observed X-ray extended contribution is given in the Appendix <ref>.
We also have to take into account that at least part of the X-ray emission is due to the AGN. The total X-ray luminosity and the estimated AGN luminosity from multi-wavelength data are consistent within ∼3 σ (see Appendix <ref>). This means that we cannot formally rule out that the X-ray flux is completely due to the AGN.
However, if that were true, it would contradict our results from the KS-test that show that the observed X-ray emission is not consistent with a point source and, moreover, the X-ray emission is not centered on the AGN but rather on the galaxy overdensity.
Our best estimate of the AGN contribution from multi-wavelength scaling relations of the X-ray surface brightness distribution is 32±22% (Appendix <ref>), and yields our final total dark matter mass estimate of M_200≈ 2.7-3.0^+0.20_-0.23 (stat) ^+1.13_-0.85 (sys) ×10^14 M_⊙. However, the scatter in the fraction of a possible AGN contribution does not have a very strong effect on the mass estimate: in the very pessimistic case of a +2σ outlier, resulting in an AGN contribution of 76%, the mass estimate would change only to M_200≈ 2.0-2.2^+0.14_-0.16 (stat) ^+0.83_-0.62 (sys) ×10^14 M_⊙.
§.§ Perspectives of the cosmological implications
CARLA J0950+2743/CARLA-Ser J0950+2743 and their X-ray counterpart can be used for future studies of structure formation in the cosmological context. However, the interpretation of the X-ray observations of these clusters will be also affected by the superposition of these systems.
If the diffuse emission originates from the ICM in only one of these two clusters, its total luminosity can be used to put a lower limit on the cluster mass, given that the luminosity of a more massive cluster is always higher (or equal) than the sum of the luminosities of two clusters.
The analysis of the Magneticum Pathfinder suite of cosmological simulations shows that the virial masses of clusters at z=2.36, in the case of the largest "Box 0" with a (2688 Mpc h^-1)^3 volume <cit.>, do not exceed M_vir≃3×10^14 M_⊙. This value is close to the total dark matter mass estimate for our two clusters derived from the X-ray analysis, when we convert M_200 to M_vir. We show our two-cluster total virial mass in color in Fig. <ref>, considering different percentage levels of contamination from the AGN.
Even under the assumption of an AGN X-ray contamination level of 90% (i.e., ≈ 2.5σ higher than the average contamination derived from scaling relations), if the X-ray thermal emission were dominated by one of the two clusters, it would still be among the most massive clusters at its redshift.
The implications of CARLA J0950+2743/CARLA-Ser J0950+2743 in the cosmological context require deeper high-resolution X-ray observations to precisely constrain contribution from AGN and a possible contamination from other point sources, like those found in the field of some other high-redshift galaxy clusters and proto-clusters such as Spiderweb at z=2.16 <cit.>, and deeper and more complete spectroscopic observations.
§ SUMMARY
Using deep ground-based NIR spectroscopy, we confirmed the CARLA J0950+2743 galaxy cluster at z=2.363±0.005 following the <cit.> criteria, i.e., by confirming eight spectroscopic members within 2000×(1+z) km s^-1 and a physical radius of 2 Mpc, including five galaxies with multiple emission lines.
We also serendipitously discover another structure, CARLA-Ser J0950+2743 at z=2.243±0.008, that we classify as a cluster following the <cit.> criteria. However, this last classification is based on five spectroscopically confirmed members, of which only three have multiple emission lines. This confirmation would benefit from deeper observations to obtain more galaxies with multiple emission lines, given that the <cit.> criteria require at least five securely spectroscopically confirmed members.
Archival X-ray observations by the Chandra observatory reveal a counterpart that is consistent with the existence of a total dark matter halo of mass M_200≈ 2.7-3.0^+0.20_-0.23 (stat) ^+1.13_-0.85 (sys) ×10^14 M_⊙ for the two clusters, if the emission is associated with the cluster ICM.
To improve each cluster mass estimate, deeper high-resolution X-ray observations are needed to better constrain the AGN contribution to the signal.
CARLA J0950+2743 and CARLA-Ser J0950+2743 provide a unique opportunity to study the formation of large-scale structure when the Universe was 2–3 billion years old, as well as the evolution of galaxy populations in dense environments and the main factors driving it.
We thank Université Paris Cité, which funded KG's Ph.D. research. We also thank Franz Bauer and Alexei Vikhlinin for useful discussions. KG thanks Victoria Toptun for the fruitful discussions related to the X-ray analysis. IC's research is supported by the Telescope Data Center, Smithsonian Astrophysical Observatory. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona. We gratefully acknowledge support from the CNRS/IN2P3 Computing Center (Lyon - France) for providing computing and data-processing resources
needed for this work.
§ SPECTROSCOPIC OBSERVATIONS AND DATA REDUCTION
For the MMIRS observations, we used a 4-position dithering pattern (ABA'B') at +1.4, -1.0, +1.0, -1.4 arcsec. The individual exposure times were set to 300 sec with the 4.426 sec up-the-ramp non-destructive readout sequence using the 0.95 e-/ADU inverse gain. The readout noise per readout ∼15 e- was reduced to the effective value of ∼3 e- after 69 readouts.
We reduced data with the MMIRS pipeline <cit.>, which included the following steps: (i) reference pixel correction and up-the-ramp fitting of raw readouts; (ii) dark subtraction; (iii) flat fielding; (iv) extraction of 2D slitlets; (v) wavelength solution using OH lines; (vi) sky background subtraction using a modified <cit.> technique with a global sky model; (vii) correction for the telluric absorption and relative flux calibration using observations of a A0V telluric standard star. We ran the pipeline on individual A-B (or A'-B') dithered pairs.
Then, we co-added the dithered pairs from observations collected during different nights applying the weights inversely proportional to the squared seeing FWHM.
At the end, we performed the absolute flux calibration by using secondary calibration stars included in the masks by re-normalizing their fluxes to the H and K_s-band measurements from the UKIRT Hemisphere Survey <cit.>.
Our datasets reach a 3σ sensitivity at 1.1×10^-17 erg s^-1 cm^-2 and 1.4×10^-17 erg s^-1 cm^-2, for mask1 and mask2, respectively, for a typical H_α emission line, with the restframe full width at half maximum of 9.7 Å. This corresponds to a 1.2×10^-18 erg s^-1 cm^-2 Å^-1 and 1.5×10^-18 erg s^-1 cm^-2 Å^-1, for mask1 and mask2, respectively, in the continuum averaged between Hα and [Sii].
For mask1, the achieved depth in J/zJ reached 1.6×10^-17 erg s^-1 cm^-2 for the [Oii] emission line, and 1.8×10^-18 erg s^-1 cm^-2 Å^-1 for the continuum in the region of this emission line.
§ SPATIAL EXTENT OF AN X-RAY COUNTERPART
We measured the flux of the X-ray cluster counterpart in the following way: (i) We selected a subsample of the registered events within a radius of 27 arcsec from the peak of the counts in the binned dataset, which identify our target; (ii) To estimate the background, we select a subsample of events in two circular areas with radii 86 and 62 arcsec, in the same detector, but far enough from the extraction region of the main source; (iii) Then we calculated the energy spectra of these three photon subsamples; (iv) We subtracted the background from the target spectrum, after renormalising by the area covered by the regions; (v) We corrected the spectra taking into account the photon energy and effective area. We obtained a total target flux of 6.77±1.51×10^-14 erg s^-1 cm^-2 in the 0.5-5 keV energy band in the circular aperture with a radius of 27 arcsec (220 kpc).
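The aperture-photometry logic of steps (i)–(iv) can be sketched as follows; this is only a minimal illustration, assuming the event positions have already been read from the level-2 event file, and the function name, region handling, and error propagation are ours rather than the exact procedure used in the analysis.

import numpy as np

def net_source_counts(events_xy, src_center, src_radius, bkg_regions):
    """Background-subtracted counts in a circular source aperture.

    events_xy   : (N, 2) array of event positions
    src_center  : (x, y) centre of the source aperture (27 arcsec radius in the text)
    bkg_regions : list of ((x, y), radius) circles used to estimate the background
    """
    events_xy = np.asarray(events_xy, dtype=float)
    d = np.hypot(*(events_xy - np.asarray(src_center)).T)
    src_counts = int(np.sum(d < src_radius))
    src_area = np.pi * src_radius ** 2

    bkg_counts, bkg_area = 0, 0.0
    for center, radius in bkg_regions:
        db = np.hypot(*(events_xy - np.asarray(center)).T)
        bkg_counts += int(np.sum(db < radius))
        bkg_area += np.pi * radius ** 2

    scale = src_area / bkg_area                       # rescale background by area ratio
    net = src_counts - scale * bkg_counts
    err = np.sqrt(src_counts + scale ** 2 * bkg_counts)   # Poisson error propagation
    return net, err

Converting the net counts into a flux additionally requires the exposure time and the energy-dependent effective area, which is what step (v) above accounts for.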
Using the ChaRT web-tool[<https://cxc.cfa.harvard.edu/ciao/PSFs/chart2/runchart.html>] we ran a raytrace simulation for the position of the AGN on the archival Chandra dataset for 1000 rays for a source with a spectral shape obtained from the modelling with XSpec as discussed in Section <ref>. The obtained ray map was then reprojected on the detector plane using MARX <cit.> to obtain a model of the Chandra Point Spread Function (PSF).
To assess whether the distribution of detected photons corresponds to what is expected for a point source, we performed a Kolmogorov–Smirnov test (KS-test) <cit.>. For this test, we estimated a 2-D distribution of detected photons that contains two components: background and a point source. To estimate the contribution of each component, we spatially binned the observed photon events within a square window with a side of 600 pixels and a bin size of 32 pixels. Then, we modelled the derived 2-D distribution by a linear combination of a constant component (background) and a PSF model; the inferred coefficients were used to assign the background and point source the same flux amplitude as in the Chandra dataset.
We use a classical two-sample KS test <cit.>, which provides a better ability to separate two distributions.
Using the 2DKS Python package[<https://github.com/Gabinou/2DKS>], we ran a KS test for the observed distribution of detected photons and 500 realisations of the model distribution. The median value of the KS-statistic D (the maximum deviation between the observed cumulative distribution function of the sample and the cumulative distribution function of the model distribution) is D=0.0642, with a median value of the significance level to follow the same distribution of p=0.0037 (see Fig. <ref>).
As an additional cross-validation, we ran the same KS test for independently generated mock event lists that follow the model distribution. This test yielded a median D=0.0362 and p=0.27. At the same time, to confirm the source detection, we also performed a similar test between the observed photon distribution and a pure background distribution, which yielded a median D=0.0735 and p=0.00054, securely confirming the presence of a source. From these results we conclude that: (i) using a two-sample KS test on the Chandra dataset, we can separate with high confidence a pure point source from a combination of a point source and an extended one; (ii) the observed photon distribution cannot be described by a single point source and constant background at a significance level of 0.997.
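For reference, the two-sample 2-D KS statistic itself (the maximum difference of the fractions of photons falling in the four quadrants around each data point) can be computed with a few lines of NumPy. The sketch below follows a Fasano & Franceschini-style convention with our own function names; it is not the 2DKS package used in the analysis, and the Monte Carlo loop over the 500 model realisations is assumed to be handled separately.

import numpy as np

def quadrant_max_diff(centers, sample_a, sample_b):
    """Max |F_a - F_b| over the four quadrants anchored on each point in `centers`."""
    d = 0.0
    for cx, cy in centers:
        for sx in (1, -1):
            for sy in (1, -1):
                fa = np.mean((sx * (sample_a[:, 0] - cx) > 0) &
                             (sy * (sample_a[:, 1] - cy) > 0))
                fb = np.mean((sx * (sample_b[:, 0] - cx) > 0) &
                             (sy * (sample_b[:, 1] - cy) > 0))
                d = max(d, abs(fa - fb))
    return d

def ks2d_two_sample(obs, model):
    """Two-sample 2-D KS statistic D, averaging the values obtained when the
    quadrants are anchored on the points of each sample in turn."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    d_obs = quadrant_max_diff(obs, obs, model)
    d_model = quadrant_max_diff(model, obs, model)
    return 0.5 * (d_obs + d_model)

The empirical p-value then follows by comparing the observed D with the distribution of D obtained from mock event lists drawn from the model distribution.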
In Figures <ref> and <ref>, we show the radial profiles of the cluster counterpart in X-ray obtained with 55 photons and a generated PSF model.
In Figures <ref> and <ref> we provide the 1-D profiles for the source and for the PSF model, extracted in circular and elliptical (b/a=0.75) annuli respectively, both centered on the position of the AGN. The profile derived with elliptical annuli is normalized to the total number of photons, while the normalization coefficient for the profile in circular annuli is obtained by χ^2 minimization.
Both 1-D profile analyses are consistent with emission from a point source, but the extended emission detected from our spatial analysis is also consistent with these profiles, given the large uncertainties on the data.
The spatial distribution of our source is also substantially rounder (b/a=0.80±0.16) than the PSF (b/a=0.5) at that position in the Chandra FoV (Fig. <ref>), and this could explain why our spatial analysis and KS test are more conclusive to show evidence for an extended source component.
The error bars clearly demonstrate that our modelling of the observed photon distribution is limited by the statistical effects, i.e. number of detected photons, rather than the PSF systematics.
§ ESTIMATES OF THE AGN CONTRIBUTION TO X-RAY LUMINOSITY
§.§ Estimates of the AGN X-ray luminosity
The only available archival Chandra X-ray dataset does not provide a precise measurement of the AGN contribution, therefore we use several multi-wavelength scaling relations to estimate it. Each of these relations has a relatively high intrinsic spread, however, because they probe different physical regions and/or mechanisms of an AGN, together they provide a very good constraint of the flux.
By combining the estimates obtained using relations between AGN luminosities in the X-ray and UV or IR, we estimate a possible AGN contribution of L_AGN / L_tot = 0.32_-0.12^+0.22, taking the average of the contributions estimated in the following subsections, except that from the Fundamental Plane of black hole activity, for which we cannot draw conclusions with reasonable uncertainties. For the uncertainties, we took into account the scatter of the scaling relations, the measurement of the X-ray source flux, and the uncertainty on the variables used in the scaling relations. The details of our estimations are given in the subsections below.
§.§.§ Fundamental Plane of the black hole activity.
The fundamental plane of black hole activity <cit.> relates the black hole mass, the radio luminosity at 5 GHz, and the X-ray luminosity in the 2–10 keV range. It suggests that the processes governing black hole accretion and jet emission are scalable and can be described by universal laws for black holes in the mass ranges from stellar to supermassive. A recent re-calibration of the fundamental plane of black hole activity based on observational data for intermediate-resolution quasars <cit.> has a scatter of ∼0.62 dex for radio-loud AGN, which is smaller than that of the original relation <cit.>. But it still leads to 1σ uncertainties of a factor of ∼3 in X-ray luminosity. The black hole mass estimate for the AGN from the broad lines (Civ]) is M_BH = 2.0-2.6 × 10^9 M_⊙ <cit.> and the radio continuum luminosity ν F_ν=2.9×10^44 erg s^-1 estimated from <cit.> yields the AGN X-ray luminosity of L_0.5-5 keV = 3.79_-2.88^+12.0× 10^46erg/s, which substantially exceeds the X-ray luminosity that we measured for our source. However, given the large scatter, we cannot consider this estimate for assessing the AGN contribution, but can only conclude that it can be between 20% and 100% within the 3σ range.
§.§.§ L_0.2-2 keV - L_6μ m relation.
The relation between the AGN luminosity in the X-ray and in the infrared (IR) reflects the tight dependence of the emission of the hot corona and the emission from the accretion disk irradiated by the dust <cit.>.
We use the correlation between the mid-IR flux at 6 μm <cit.> and X-ray that relates the X-ray emission to the emission of the warm dust in the AGN torus, and is less affected by the intrinsic dust attenuation unlike the correlations based on the UV luminosity. Using the WISE W4 magnitude measurement for the AGN that perfectly matches the restframe 6 μm, we estimated L_0.5-5 keV = 0.75× 10^45 erg s^-1, or ∼25% of the total observed X-ray luminosity.
The scatter, estimated for the "filtered high-luminosity quasar" sample from <cit.>, used to build the regression fit in <cit.> is 0.26 dex. Adding the uncertainty of the W4 magnitude measurement, this leads to the final estimate of L_0.5-5 keV = 0.75_-0.39^+0.71× 10^45 erg s^-1. This corresponds to an AGN contribution of 0.26_-0.13^+0.22.
§.§.§ L_2keV - L_NUV correlation.
We use the empirical correlation between the AGN luminosity in soft X-ray and the near-UV from <cit.> with an intrinsic scatter of 0.35 dex. It relates the accretion disk luminosity in the continuum at 2500Å to the X-ray generated by the corona.
From the available SDSS spectra, at λ=8400 Å, we can directly measure the UV restframe continuum flux as F_2500Å = 3×10^-17 erg s^-1 cm^-2 Å^-1, which corresponds to a UV restframe luminosity density of L_ν, 2500Å = 1.10×10^31 erg s^-1 Hz^-1.
From this, we derive the X-ray luminosity spectral density L_2keV = 5.1×10^26 erg s^-1 Hz^-1 <cit.>. The uncertainty on F_2500Å is less than 5%, so it does not affect the uncertainty of the L_2keV estimate.
Using the calc_kcorr procedure in the Chandra Sherpa toolkit, we converted the L_2keV to the observed X-ray luminosity in the 0.5-5 keV bandpass and obtain L_0.5-5 keV = 1.1×10^45 erg s^-1 assuming Γ = 1.4 for RLAGN <cit.>. Adding the scatter of the L_X-L_2500Å correlation of 0.35 dex, we obtain L_0.5-5 keV = 1.1_-0.6^+1.2×10^45 erg s^-1.
This corresponds to an AGN contribution of 0.38_-0.22^+0.42.
§.§ AGN contribution from the decomposition of the X-ray photon distribution
To directly constrain a contribution from the AGN point source in X-ray, we performed a decomposition of the observed background subtracted distribution of detected X-ray photons by representing it with a linear combination of a point-source component used in Section <ref>, and an extended Gaussian distribution, both convolved with a PSF (Fig. <ref>). For a bin size of 18×18 pixels (9 arcsec)[We can consider this bin size optimal because it yields SNR>3 in most bins in the area of the detection and it does not yet affect much the spatial information.], the recovered contribution of the point source is 35±26%.
Given that the shape of the point-source component is fixed while the shape of the diffuse emission is flexible, the estimated AGN fraction can be treated as an upper limit on the AGN fraction in the observed X-ray luminosity.
§.§ Extended X-ray emission from inverse Compton scattering on a radio jet
The observed X-ray counterpart can be explained by sources other than the ICM. For example, deep high-resolution Chandra X-ray imaging of the Spiderweb proto-cluster revealed a population of point sources (AGN) <cit.> that would be interpreted as extended ICM emission in case of lower angular resolution, similar to the Chandra dataset for our cluster.
The inverse Compton (IC) scattering of the CMB photons on the electrons in the AGN jet can produce substantial X-ray emission <cit.>. To estimate the possibility of high contribution of possible IC from the AGN to the observed X-ray luminosity, we followed an approach similar to that used for the cluster J1001+02 at z=2.51 <cit.>, assuming that all the observed radio and X-ray emission originates from the AGN.
Our calculations yield a magnetic field estimate of ∼0.5μG, which is almost an order of magnitude smaller than the typical magnetic fields in systems where IC emission is observed <cit.>. We therefore conclude that the observed extended X-ray emission is unlikely to be produced by IC scattering.
The observed X-ray flux could also emerge from IC structures in the radio lobes <cit.>, but the observed X-ray luminosities of such objects are of the order of a few times 10^44 erg/s, substantially lower than the expected luminosity of the diffuse emission. However, the very existence of lobes may already serve as an indirect confirmation of the ICM.
|
http://arxiv.org/abs/2409.03140v1 | 20240905002537 | GraphEx: A Graph-based Extraction Method for Advertiser Keyphrase Recommendation | [
"Ashirbad Mishra",
"Soumik Dey",
"Marshall Wu",
"Jinyu Zhao",
"He Yu",
"Kaichen Ni",
"Binbin Li",
"Kamesh Madduri"
] | cs.IR | [
"cs.IR",
"cs.CL",
"cs.LG"
] |
Pennsylvania State University
USA
eBay Inc.
USA
eBay Inc.
USA
eBay Inc.
USA
eBay Inc.
China
eBay Inc.
China
eBay Inc.
USA
Pennsylvania State University
USA
§ ABSTRACT
Online sellers and advertisers are recommended keyphrases for their listed products, which they bid on to enhance their sales. One popular paradigm that generates such recommendations is Extreme Multi-Label Classification (XMC), which involves tagging/mapping keyphrases to items. We outline the limitations of using traditional item-query based tagging or mapping techniques for keyphrase recommendations on E-Commerce platforms. We introduce GraphEx, an innovative graph-based approach that recommends keyphrases to sellers using extraction of token permutations from item titles. Additionally, we demonstrate that relying on traditional metrics such as precision/recall can be misleading in practical applications, thereby necessitating a combination of metrics to evaluate performance in real-world scenarios. These metrics are designed to assess the relevance of keyphrases to items and the potential for buyer outreach. GraphEx outperforms production models at eBay, achieving the objectives mentioned above. It supports near real-time inferencing in resource-constrained production environments and scales effectively for billions of items.
[500]Information systems Recommender systems
[100]Information systems Data mining
[100]Information systems Sponsored search advertising
[300]Mathematics of computing Graph algorithms
[300]Computing methodologies Information extraction
GraphEx: A Graph-based Extraction Method for Advertiser Keyphrase Recommendation
Kamesh Madduri
September 9, 2024
================================================================================
§ INTRODUCTION
In the online e-commerce advertisement space,
keyphrase recommendation is offered to sellers/advertisers who want to bid on buyers'/users' search queries for a better placement of their inventory on the search result pages (SRP), which increases their engagement. Keyphrases are generally recommended as shown in figure <ref> in real time for items, provided they are relevant to them. Advertisers only want to bid on keyphrases that are actual queries, not queries that seem plausible but are non-existent for targeting purposes. Since the nature of the problem is mapping items to multiple queries to increase the potential reach of advertisers, this problem of keyphrase recommendation can be formulated as an Extreme Multi-Label Classification (XMC) problem, see <cit.>. The data for training the XMC models is sourced from search logs that document the items shown to buyers when they input search queries[We term buyer search queries as keywords, and use keywords and keyphrases interchangeably.]. A keyword and an item are paired together and become a part of the training dataset when they co-occur in the search logs a certain number of times with significant buyer interactions on those items. XMC models in the context of keyword recommendation are supervised tagging techniques that map items to keyphrases.
The impact of XMC tagging models can be determined by how often each keyphrase is searched. Keyphrases can be classified as head or tail keyphrases according to their search frequency. Head keyphrases are generally fewer in number but are searched frequently by buyers. Targeting such head keyphrases leads to increased revenue, since more buyers are inclined to search for them, resulting in more clicks and more purchases. XMC models focus on recommending more tail keyphrases <cit.> and avoid the head keyphrases despite them being relevant to the item. In addition, tail keyphrases/queries do not have many items relevant to them, so the chances of the relevant seed item getting proper positioning through organic retrieval without any promotion increase, making them unappetizing to advertisers.
There are various caveats to recommending keyphrases using the XMC tagging formulation trained on engagement data from search logs. Search logs with engagement data are highly skewed, as shown in Figure <ref>, with 90% of items associated with only one query in terms of clicks/sales. This absence of engagement (clicks/sales) does not necessarily mean that the keyphrase/query is irrelevant to the item. When buyers are presented items in relation to a query/keyphrase, the presentation is biased, as the items are ranked and their ranking can affect buyer engagement. This biased presentation implies that just because an item has no clicks/sales in relation to a keyphrase does not mean that the item is irrelevant to that keyphrase. It could mean that the item was not popular, was retrieved at a lower rank for the query in question, and was therefore ignored by buyers. These unpopular items need the help of advertisement to level the playing field by promoting them to a favourable rank to garner more buyer engagement. Thus, traditional XMC models, which rely on modelling item-query interactions, inherit this bias towards popular items ranked at the top and miss the targeting for non-popular items, which constitute 90% of the inventory and are the main focus of advertisement.
While this bias has been discussed in <cit.>, we contextualize it in this domain of advertiser keyphrase recommendation. Continuing from the limitations of the data on which XMC tagging models are trained, the same training set and bias perpetuate the lack of diversity across different XMC models. Even with a 10% increase in the precision/recall scores of subsequent XMC models, the recommendations do not have sufficient diversity to obtain substantial clicks. In addition, the lack of ground truth for most items leads to inaccurate offline evaluation, which is conventionally ground-truth based (Precision/Recall). We offer a diverse set of alternate evaluation metrics which are comprehensive in measuring a model's performance and diversity for these skewed distributions, and we test them in an online setting.
§.§ Scope and Contributions
In this work, we limit ourselves to retrieving keyphrases based on an item's title and the keywords from the item's categorical populace, especially those keyphrases that are actively and frequently searched by buyers. The extraction is done in an unsupervised setting where the keyphrases for each item are unknown during training.
In fact, we restrict the curation of keyphrases (more details in <ref>) to include only those that have a high search volume (number of searches made by buyers). XMC models suffer from item popularity biases, which can result in a severely restricted set of keyphrases for non-popular items[Non-popular items constitute more than 80% of total items] or in recommending a lot of tail keyphrases for items. Our distinction is that by decoupling the keyphrases from item engagement and using categorical population dynamics of keyphrases, we keep the essential bias towards head keyphrases (attractive to advertisers) while getting rid of the negative bias (against non-popular items, which are the main target of advertisement).
The models are required to be frequently refreshed, i.e., the models are regularly trained and tuned on recent data to keep up with the latest buyer search queries. Thus, models with smaller training and setup times are necessary, and minimal tuning is crucial to decrease the engineering effort. In addition, due to the ever-changing buyer query space, frequent model refreshes are required to accommodate newer keywords. For e-commerce platforms, it is also vital that the recommendations are in real-time or near real-time, so the models should also have an inference latency of a few milliseconds.
In this work, our aim is to overcome the many challenges as described above. Our contribution to this work is summarized as:
* An innovative graph-based extraction algorithm for keyphrase recommendation that is simple, transparent and easy to interpret.
* The design of the algorithm and the process of data collection have been specifically geared towards mitigating item popularity bias while maintaining advertiser friendly head keyphrase bias.
* Provide a new robust framework for evaluation of incremental impact of recommendation models in terms of performance and diversity metrics.
* A scalable and sustainable model that runs without GPUs for hundreds of millions of items in real-time.
§ RELATED WORK
Keyphrase generation via open-vocabulary models like GROOV <cit.>, One2Seq <cit.> and One2One <cit.> is susceptible to recommending keyphrases that are not part of the label space. Another formulation for keyphrase recommendation is keyphrase extraction, with methods such as keyBERT <cit.>, which have conventionally treated keyphrase recommendation as a two-step problem: keyphrase generation and keyphrase ranking. The basic keyBERT module considers keyphrase generation as an n-gram based permutation problem, i.e., it generates all possible n-grams for a given n-gram range. The keyphrase ranking module then orders them using an encoder-based ranker tuned on some domain-specific supervised signal. This simple generation framework presents two main issues: 1) the token space is limited by token adjacency and token presence in the item's text; 2) the keyphrase should also be in the universe of queries that buyers are searching for, which this simple generation model does not ensure.[keyBERT can also use LLMs as generators, but their time complexity is substantial.] We limit our comparison to models with shorter training and inference times, based on the scope discussed in section <ref>. The models deployed at eBay fulfill these requirements and provide good recommendations. We choose 4 representative models from the deployed ones: fastText, Graphite, Rules Engine (RE), and SimilarListing (SL) variants.
fastText <cit.> is a basic linear neural network model that generates word vectors with the CBOW architecture and uses a single classification layer to generate predictions. It uses hierarchical softmax for faster training on a large number of classes (keyphrases) and subword embeddings for better representation and inferencing. Graphite <cit.> is the state-of-the-art fastest XMC model that uses bipartite graphs to map words/tokens to the data points and then map them to the labels associated with the data points. It is implemented for multi-core systems, has infinitesimal training time, and uses parallelization for real-time inferencing. Both fastText and Graphite are deployed at eBay. There are four other proprietary models also deployed at eBay for keyphrase recommendation. Rules Engine (RE) is a simple technique that stores item-keyphrase associations based on their co-occurrences (associated with buyer activity) in the search logs during the last 30 days. It recommends keyphrases only for items in which buyers have shown interest and not for any new items. Likewise, RE-trank recommends only the queries that were ranked in the top slots for the existing associations of items. We also compare with a few other variants that are based on similar listings (SL) and their related queries. These are: SL-query, which determines the existing listings/items that share some keywords and thus recommends their associated keywords; and SL-emb <cit.>, which uses embeddings of the item's title to find similar listings and then recommends the related queries. SL-emb is a dense retrieval model whose inference is implemented in two stages, embedding generation and ANN <cit.>. Note that of the SL models only SL-emb can handle cold-start conditions on new items, as can fastText and Graphite. The implementation of the RE and SL techniques also employs a few other methods which we cannot discuss due to proprietary constraints.
§ GRAPHEX MODEL
We first formulate the keyphrase recommendation problem and then briefly go through the data set curation process. Next, we describe the notations, then the Construction of the graph which is the training part of GraphEx and the Inference method for obtaining the predictions.
§.§ Problem Formulation
For efficiently solving the recommendation problem, we use the formulation of a permutation problem that permutes the title strings to match a given set of keyphrases. Let us consider a title string with l words in it. The goal is to generate permutations of differing lengths from the l words. Now, given a list of predefined keyphrases, the possible permutations of the title string are constrained to match the keyphrases. Therefore, each permutation can exactly match a keyphrase or be part of some keyphrases, but if a title token is not part of any keyphrase it is ignored. Thus, the permutations are not limited to token adjacency or token presence in the item's text. A naive brute-force method is to generate all possible permutations of the l words, which takes O(l!) time. Each keyphrase can be validated using hashing and string comparisons (each word can be an integer), thus taking O(l× l!) time overall. This is infeasible to perform in real-time with a limited amount of resources.
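For concreteness, the naive baseline described above can be written directly with itertools; this toy sketch (with our own naming, and the keyphrases held in a hash set) makes the factorial blow-up obvious for titles with more than a handful of tokens.

from itertools import permutations

def brute_force_keyphrases(title_tokens, keyphrases):
    """Return every permutation of the title tokens that is a valid keyphrase.

    title_tokens : list of l words from the title
    keyphrases   : set of keyphrase strings (hashing makes membership checks O(1))
    """
    hits = set()
    for r in range(1, len(title_tokens) + 1):
        for perm in permutations(title_tokens, r):   # O(l!) candidates in total
            candidate = " ".join(perm)
            if candidate in keyphrases:
                hits.add(candidate)
    return hits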
§.§ Dataset Curation
We aggregate our datasets from the search logs generated during buyer sessions on eBay.com. The keyphrases that buyers input during the search sessions are curated based on certain criteria, which we discuss here. eBay's search engine Cassini shows a sufficient number of items (Recall Count) for each input query in its search results. Cassini determines the Leaf Category of the keyphrase and it is the same as the top-ranked item's leaf category (lowest-level product categorization). We restrict the number of curated keyphrases by only considering those that are heavily searched by the buyers. The number of times a keyphrase is queried is termed as (Search Count). Due to proprietary reasons, we cannot further delve into the details. The absolute values of Recall Count and Search Count are not essential in fact, an anonymized ranking works well. All the unique keyphrases are aggregated for each Top level category (metacategory) and are grouped for each Leaf Category within the metacategory. Each keyphrase is associated with a Search Count and a Recall Count. Note that a keyphrase can be duplicated across different Leaf Categories.
§.§ Terms and Notations
We consider a set of unique keyphrases termed Q={k_1,k_2,...,k_K}. Each keyphrase k_i can be considered as a set of words {w_1,w_2,...,w_l}, where w_* are tokenized[The tokenization scheme can be anything as long as string comparison functions are well-defined and consistent for that scheme. By default we consider space-delimited tokenization.] from the keyphrase string k_i. Each k_i is further associated with a Leaf category l, Recall Count/Rank R, and Search Count/Rank S. Given a test item's title T, the goal is to recommend a subset of keyphrases from Q that are relevant to T. We can consider the title as a string with tokenized words T={w_1,w_2,...,w_t}, similar to a keyphrase, but titles are generally longer than keyphrases. We denote a graph G(V,E), where V is the set of vertices and E is the set of edges. Each edge e∈ E is denoted by a pair of vertices e=(v_1,v_2), indicating a connection between the vertex pair. In a Bipartite Graph, the set of vertices V is divided into a pair of disjoint subsets V=X∪ Y. Vertices in the same subset (X or Y) are not connected by an edge; only vertices in different subsets can be connected by an edge. We define the function Deduplicate and Count, or DC(·), which, given a list of elements, counts the occurrences of each unique element in the list. It outputs a list of tuples of the form (element, count) for each unique element in the list.
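A minimal sketch of the DC(·) operator follows; the Python name and the use of collections.Counter are ours, chosen only to mirror the notation above.

from collections import Counter

def DC(elements):
    """Deduplicate and Count: list of (element, count) tuples for each unique element."""
    return list(Counter(elements).items())

# Example: DC(["audeze maxwell", "audeze maxwell", "gaming headphones xbox"])
# -> [("audeze maxwell", 2), ("gaming headphones xbox", 1)]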
§.§ Construction Phase
In this phase, the method relates the words in the keyphrases to the keyphrases themselves by mapping the relation using Bipartite Graphs. For a particular metacategory, the model constructs a series of Bipartite Graphs G_l(V,E), one for each leaf category l, from only those keyphrases Q_l that belong to that leaf category. For each graph G_l(V,E), the two subsets X and Y of the vertex set V are constructed as follows: all the unique words in the keyphrases are considered as the set X, while the unique keyphrases are considered as Y. Each unique word and unique keyphrase is represented as a non-negative integer to avoid string comparison and manipulation costs. Mathematically, X=⋃_∀ w∈ k_i,∀ k_i∈ Q_l{w} and Y=Q_l. An edge e=(x,y) in set E is permitted from vertex x∈ X to vertex y∈ Y when x⊂ y, indicating an edge from a word to a keyphrase that it is a part of. Such edge relations are created for all the Bipartite Graphs using the unique words in all the unique keyphrases within each leaf category.
An example of a constructed Bipartite Graph is shown in Figure <ref>. The vertically stacked vertices belong to the same subset. The left set of vertices are the words/tokens and the right set are the keyphrases. The vertices are shown as strings here for presentation, but during implementation the integer IDs are used. Each tokenized word is connected to the keyphrase that it is a part of. The graph is stored in Compressed Sparse Row (CSR) format, which occupies the least amount of space. Each word/token can be accessed in unit time, whereas the adjacencies of a word can be traversed in O(d), where d is the degree of the word, i.e., the number of keyphrases that contain that word.
The keyphrases' Recall and Search Counts[Defined in section <ref>] are stored in separate arrays. Thus, given a keyphrase ID l, R(l) and S(l) directly index into the arrays and return the values in unit time. The space occupied by each leaf category graph depends linearly on the number of unique words and edges, as the CSR structure occupies |X|+|E| space. The count of edges |E| depends on the total number of occurrences of each word across the keyphrases/labels, which is difficult to generalize and depends on the dataset. Separate graphs for each leaf category help in recommending more relevant keyphrases, which becomes clearer in the next section.
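A compact sketch of the construction phase is given below; it assumes the curated keyphrases arrive as (leaf_category, keyphrase, search_count, recall_count) records and uses plain Python containers for the word/keyphrase ID maps and the CSR arrays, so it illustrates the structure rather than the exact production code.

from collections import defaultdict

def build_leaf_graphs(records):
    """records: iterable of (leaf_id, keyphrase_string, search_count, recall_count).

    Returns one bipartite graph per leaf category as
    (word_id, phrases, indptr, indices, S, R), where indptr/indices form the
    CSR adjacency from word IDs to keyphrase IDs.
    """
    per_leaf = defaultdict(list)
    for leaf, phrase, s, r in records:
        per_leaf[leaf].append((phrase, s, r))

    graphs = {}
    for leaf, rows in per_leaf.items():
        word_id, phrases, S, R = {}, [], [], []
        adjacency = defaultdict(set)                    # word id -> set of keyphrase ids
        for k, (phrase, s, r) in enumerate(rows):
            tokens = phrase.split()                     # space-delimited tokenization
            ids = [word_id.setdefault(w, len(word_id)) for w in tokens]
            phrases.append(frozenset(ids))
            S.append(s)
            R.append(r)
            for w in set(ids):
                adjacency[w].add(k)                     # edge: word -> keyphrase

        indptr, indices = [0], []                       # flatten into CSR arrays
        for w in range(len(word_id)):
            indices.extend(sorted(adjacency[w]))
            indptr.append(len(indices))
        graphs[leaf] = (word_id, phrases, indptr, indices, S, R)
    return graphs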
§.§ Inference Phase
Given a test item T and a leaf category l with the tokenized words in the title as T=w_1,w_2,...,w_t, the goal is to extract a list of keyphrases in decreasing order of relevance to the item. GraphEx's recommendation is based on permuting the word in the item's title as discussed in section <ref>. To enable this, the Inference Phase is divided into two steps: Enumeration step that generates keyphrases from words of title and the Ranking step that ranks the keyphrases in order of relevance to the item.
§.§.§ Enumeration Step
GraphEx first determines the Bipartite Graph G_l(V,E) that corresponds to the leaf category l of the input item. The corresponding graph G_l can be obtained in O(1) time if a hashing data structure is used to map the leaf categories to the graphs, as defined in section <ref>.
This step first tokenizes the item's title into words and uses them as input, along with the graph G_l, in Algorithm <ref>. Lines 3-5 of the algorithm map the tokenized words of T to the labels/keyphrases using the bipartite graph G_l. Let us look at an example to understand this process. Given an item "audeze maxwell gaming headphones for xbox", we highlight the corresponding words on the left in the illustrated figure <ref>. The keyphrases (l) connected to the highlighted words are candidates for recommendation and are collected in C_L in algorithm <ref>. Line 6 uses the DC function to de-duplicate and count the redundancies in the candidate keyphrases. E.g., in figure <ref> the keyphrase "audeze maxwell" is connected to two words, "audeze" and "maxwell", whereas "gaming headphones xbox" is connected to 3 words. Hence, after the execution of line 6, this results in duplication counts of 2, 2, 3, 2, and 1, in the given order, for each of the keyphrases on the right side of the illustrated figure <ref>. The count indicates the number of words in the keyphrase that are common with the item title T.
The next part of the Enumeration step generates a tuple corresponding to each label in C_L using lines 7-9 in algorithm <ref>. We define the function Label Title Alignment, or LTA, which uses the common word count (or duplication count) c=|T∩ l| between the title T and the label l as LTA(l,c)=c/(|l|-c+1). The LTA ratio is the second element of the tuple, or the first attribute of the label l. The next two attributes are the Search count S(l) and the Recall count R(l) of the label. The tuples generated by this process are returned in C_R. The time complexity of this step primarily depends on lines 3-5, due to the restriction on the prediction count, which we discuss later in Section <ref>. The time complexity can be uncertain to determine due to the varying number of edges for each word. For simplification, we consider the average degree of each word, d_avg=|E|/|X|. Then, asymptotically, the time taken to gather the candidate labels over all words of the item title T is O(|T|·d_avg). Modeling the problem as a Bipartite graph helps to efficiently permute all the words in the title T while only generating permutations that are valid keyphrases.
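Putting the pieces together, the Enumeration step is a walk over the CSR adjacency of the title tokens followed by the DC-style counting and the LTA computation. The sketch below reuses the structures returned by the construction sketch above; the variable names are ours.

from collections import defaultdict

def enumerate_candidates(title, graph):
    word_id, phrases, indptr, indices, S, R = graph
    token_ids = {word_id[w] for w in title.split() if w in word_id}

    # Algorithm lines 3-5: collect every keyphrase adjacent to a title token,
    # counting how many title words each keyphrase shares (c = |T ∩ l|).
    counts = defaultdict(int)
    for w in token_ids:
        for k in indices[indptr[w]:indptr[w + 1]]:
            counts[k] += 1

    # Algorithm lines 7-9: attach LTA = c / (|l| - c + 1) and the S/R attributes.
    candidates = []
    for k, c in counts.items():
        lta = c / (len(phrases[k]) - c + 1)
        candidates.append((k, lta, S[k], R[k]))
    return candidates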
§.§.§ Ranking Step
In this step, the candidate labels in C_R are sorted in non-increasing order of the first attribute (the second tuple element, LTA), and to break ties, S(l) and subsequently R(l) are used. While tie-breaking, keyphrases that have higher search counts and lower recall counts are preferred. Higher search counts lead to more clicks, while a lower recall count indicates that the keyphrase has fewer items associated with it. So, when such a keyphrase is input by a buyer, the search engine displays relatively fewer items, boosting the click probability per item. The LTA function was designed to give a higher score to keyphrases that have fewer words not appearing in the title. Let us compare two keyphrases from figure <ref>, "audeze maxwell" and "wireless headphones xbox"; both have 2 words in common with the sample title shown in section <ref>. The first's LTA is 2/1 and the second's is 2/2, thus ranking "audeze maxwell" higher. LTA minimizes the risk involved by preferring those keyphrases that have more complete information (or more matching words).
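The Ranking step then reduces to a single composite sort key, higher LTA first, then higher search count, then lower recall count; a sketch continuing the tuples produced by the enumeration sketch above:

def rank_candidates(candidates, top_k=20):
    """candidates: list of (keyphrase_id, LTA, search_count, recall_count) tuples."""
    ordered = sorted(candidates, key=lambda t: (-t[1], -t[2], t[3]))
    return [k for k, _, _, _ in ordered[:top_k]]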
§.§ Implementation Details
The edges of the bipartite graph of each leaf category are constructed as tuples, sorted, and then de-duplicated based on their IDs, which are finally stored in the CSR format. The space complexity is linear in the number of edges for each graph, given by O(|X|· d_avg), where X⊆ V is the set of unique words in all the keyphrases for the leaf category. The words and the labels are represented as unsigned integers to occupy minimal space and to convert string comparisons into integer ones. Therefore, comparing two words or two labels takes O(1) time. The construction phase does not involve any weight updates or hyper-parameter training, making it quite fast and efficient.
A drawback of directly using Algorithm <ref> is the large number of keyphrases generated in the initial C_L. This results in a poly-logarithmic time complexity for line 6 in the algorithm. To circumvent this, we use count arrays to calculate the redundancies of each unique keyphrase. The space taken for the storage of C_L and the count array is approximately 2|Q_l|. A predetermined number of keyphrases (10-20) is generated for a given test instance during the inference phase. So, after the counting in line 6, the number of unique keyphrases in C_L is pruned based on this requirement. This is done by first grouping keyphrases with equal counts, then restricting the number of groups such that the sum of group sizes equals the required number of predictions. Groups with larger keyphrase redundancy counts are preferred, and all the keyphrases of the threshold group are included even if the group size overflows the number of required predictions. Thus, the time complexity of the Enumeration step remains O(|T|·d_avg). Though the sorting in the Ranking step seems expensive, the list length is always approximately constant because |C_L|=|C_R|, due to the restriction on prediction count mentioned above, and therefore does not contribute asymptotically to the overall time complexity.
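The group-wise pruning described above, which keeps whole groups of candidates in decreasing order of their shared-word count until the requested number of predictions is covered (including the entire threshold group), can be sketched as follows, using the counts dictionary from the enumeration sketch:

from collections import defaultdict

def prune_by_count(counts, n_predictions):
    """counts: dict keyphrase_id -> shared-word count with the title."""
    groups = defaultdict(list)
    for k, c in counts.items():
        groups[c].append(k)

    kept, total = [], 0
    for c in sorted(groups, reverse=True):   # larger redundancy counts first
        kept.extend(groups[c])               # the whole threshold group is included
        total += len(groups[c])
        if total >= n_predictions:           # even if it overflows the requested count
            break
    return kept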
§ EXPERIMENTATION AND RESULTS
We perform experiments on representative datasets from eBay and compare our model's results with representative models in production at eBay. We first describe our experimental setup, the datasets we use, and the models we compare in Subsection <ref>. Then, we analyze the results of each of our accuracy experiments in Subsection <ref> and show the execution performance of each model in <ref>. Next, we describe the deployment in production in section <ref> and its impact in section <ref>.
§.§.§ Setup and Datasets
GraphEx is implemented for multi-core systems without requiring any GPUs. Its inference part is implemented in C++ (≥ g++-9.3.0) using OpenMP threading, with Python wrappers using pybind11. The construction part is implemented in Python (≥ 3.7) owing to its lightweight nature, since construction does not require large resources and takes much less time. We used a system with 4 Intel Xeon Gold 6230R CPUs, with 2 sockets each containing 26 cores at 2.10 GHz, and 500 GB of RAM for the analysis. GraphEx employs coarse-grained multi-threading, assigning each input's inference to an individual thread. We launched 20 threads with compact pinning to occupy only a single socket, which is sufficient for our dataset size.
§.§ Experimentation Details
We contrast our model against notable models used at eBay for keyword recommendation, as elaborated in section <ref>. We present findings on three product meta-categories from eBay, each symbolizing a classification of large, medium, and small categories. The classification is determined by the count of items and the quantity of unique keyphrases within each meta-category. Table <ref> shows the anonymized categories and their details. Even though our methodology does not require knowledge of the items or their meta-data, the XMC models require them, hence we show their numbers for perspective. Our data curation and analysis are limited to eBay, due to the absence of any publicly available keyword recommendation datasets from e-commerce advertisement platforms.
The data is collected from search logs for the duration of one year for both XMC models and GraphEx. For XMC, the item-keyphrase pairs are constrained based on their co-occurrence count, number of buyer clicks/purchases, etc.
The curated unique keyphrase count shown in the third column of Table <ref> contains both head and tail keyphrases and is what the XMC models incorporate. GraphEx's data curation for training, on the other hand, aggregates keyphrases without considering any association with the items. It restricts the keyphrases[Shown in the rightmost column of Table <ref>.] to a higher number of head and a lower number of tail keyphrases using the curation process described in Section <ref>. Generally, keyphrases that on average were not searched at least once per day were filtered out for GraphEx[This restriction was relaxed for CAT_3, which did not have sufficient keyphrases.]. For testing, we sampled 1000, 400, and 200 items from actively listed items on eBay.com for the categories CAT_1, CAT_2, and CAT_3, respectively. We also computed the search count of each unique keyphrase over a 15-day window disjoint from the one-year training window, which removes any bias the models carry from their training data. For each test item, all models generate a variable number of keyphrases with a limit of 40.
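As a rough illustration of the curation rule above (keep keyphrases searched on average at least once per day), assuming search counts aggregated over a 365-day window, a sketch of the filter could look like the following; the threshold is the only tunable and was relaxed for CAT_3.

```python
def curate_keyphrases(search_counts, days=365, min_daily_rate=1.0):
    """Keep keyphrases whose average daily search rate meets the minimum.

    search_counts : dict mapping keyphrase -> total search count over the window.
    """
    threshold = days * min_daily_rate
    return {kp for kp, cnt in search_counts.items() if cnt >= threshold}
```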
§.§.§ Traditional Metrics
Typically, metrics like Precision, Recall, and F1 are used to compare recommendation models. These metrics emphasize retrieval capability, which is suitable for XMC tagging models but does not pertain to extraction or generation models. There are three main issues with using them: Prediction Diversity, Ground Truth Incompleteness/Uncertainty, and Ground Truth Bias. Consider an example instance T associated with a ground-truth label k1 in the training set. All models make certain choices to increase the probability of predicting k1 for inputs similar to T. This aligns the tagging-based models to predict a similar subset of labels for T, reducing diversity among the predictions of different models. Another facet is the assumption that the curated ground-truth labels for each instance or data point are correct; labels for some instances or items might be missing due to rigid thresholds in the curation process. Hence, a metric that measures retrieval ability penalizes unsupervised extraction or generation approaches that do not rely on ground-truth labels for items. Finally, if k1 is a biased label for T, tagging models tend to replicate the bias in their predictions. Improving diversity and reducing bias is possible by altering these techniques, but doing so would complicate their implementation and the comparison process.
Ideally, metrics should compare the relevancy of the predictions to the input text without limiting the comparison to a set of predefined labels/keyphrases. However, it is difficult to determine the relevance of predictions without any prior labels. So, while previous research has used human judgement <cit.>, we use AI-generated evaluations to evaluate at scale.[The AI predictions were benchmarked against positive buyer judgement and achieved more than 90% alignment, similar to how it was done in <cit.>.] We generate prompts for Mixtral 8X7B <cit.> per item, each containing the item's title and a set of predicted keyphrases. The response is “yes” or “no” for each keyphrase, indicating whether it is relevant to the item.
Once a set of keyphrases is determined to be relevant for an item, we filter the keyphrases through a high Search Count threshold. This threshold is the 75th percentile of the search counts of all unique keyphrases in the category, so that 25% of the unique keyphrases lie above it. Keyphrases whose search counts are above the threshold are considered Relevant Head Keyphrases[High Search Count.]; otherwise they are considered Relevant Tail Keyphrases. Note that head keyphrases judged irrelevant by the AI are not considered for that item's evaluation: even though buyers search these keyphrases in large volume, they would mostly not click on the corresponding item. Henceforth, whenever we use the term head keyphrases we mean relevant head keyphrases.
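A sketch of this head/tail split is given below, assuming per-category search counts for all unique keyphrases are available; the 75th-percentile threshold follows the definition above.

```python
import numpy as np

def split_head_tail(relevant_keyphrases, search_counts):
    """Partition AI-validated relevant keyphrases into head and tail sets.

    search_counts : dict over all unique keyphrases of the category.
    """
    counts = np.array(list(search_counts.values()))
    threshold = np.percentile(counts, 75)   # top 25% of unique keyphrases count as "head"
    head = [kp for kp in relevant_keyphrases if search_counts.get(kp, 0) > threshold]
    tail = [kp for kp in relevant_keyphrases if search_counts.get(kp, 0) <= threshold]
    return head, tail
```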
§.§ Performance Results
We compare the models based on the effective (relevant and head) keyphrases that each model recommends. Figure <ref> shows, for each model, the number of keyphrases per item (averaged over all items) that the AI evaluates as relevant or irrelevant, further splitting the relevant ones into head and tail types. The x-axis lists the models under comparison, and the y-axis stacks the average numbers of irrelevant and relevant head/tail keyphrases per item, which sum to each model's total predictions.
Figure <ref> makes it evident that as the number of predictions a model generates increases, the number of irrelevant predictions also tends to rise. The predictions from subsequent models (left to right on the x-axis) are deduplicated against keyphrases already recommended by the previous models. A diverse set of keyphrases is beneficial as it typically results in more engagement, especially when the keyphrases are relevant head keyphrases. Sellers, based on their feedback, prefer a balance: they dislike having either too many or too few keywords, want as few irrelevant keywords as possible, and still expect a diverse set of keywords that drives engagement. Thus the goal for a model is to predict a reasonable number of total keyphrases with a higher proportion of relevant head keyphrases and good diversity. Because the number of predictions varies across models, we use one set of metrics to compare the relevant and head keyphrases within each model and another set to compare between different models (a small computational sketch follows the list):
* Relevant Proportion (RP) = # relevant predictions / # total predictions
* Head Proportion (HP) = # head predictions / # total predictions
* Relative Relevant Ratio (RRR) = # relevant model1 predictions / # relevant model2 predictions
* Relative Head Ratio (RHR) = # head model1 predictions / # head model2 predictions
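For concreteness, the two sets of metrics can be computed as below from per-model prediction counts; the model names and counts are placeholders, and model2 plays the role of GraphEx as the denominator.

```python
def within_model_metrics(n_relevant, n_head, n_total):
    """Proportions computed within a single model's predictions."""
    return {"RP": n_relevant / n_total, "HP": n_head / n_total}

def between_model_metrics(model1, model2):
    """Ratios of model1's counts to model2's counts (model2 = GraphEx in the tables)."""
    return {"RRR": model1["relevant"] / model2["relevant"],
            "RHR": model1["head"] / model2["head"]}

# Example with illustrative counts:
print(within_model_metrics(n_relevant=12, n_head=5, n_total=30))
print(between_model_metrics({"relevant": 12, "head": 5}, {"relevant": 10, "head": 6}))
```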
Table <ref> reports both sets of metrics on relevant and head keyphrases. The metrics RRR and RHR are calculated using GraphEx's predictions as the denominator. It is important to note that neither set of metrics alone offers a comprehensive view: depending on the gap in total predictions between two models, RP and HP tend to favor the model with fewer predictions, while RRR and RHR favor model1 when it has the larger count. We do not show absolute numbers due to the proprietary nature of the data and the models.
For clarity, we first discuss the models with a much larger number of predictions in Figure <ref>, namely SL-emb and fastText. In Table <ref>, fastText and SL-emb have lower RP and HP (2^nd and 3^rd columns) than GraphEx because of their large prediction counts. However, GraphEx outperforms fastText (except on CAT_3) and SL-emb in RRR and RHR (4^th and 5^th columns). Thus GraphEx has a lower percentage of irrelevant keyphrases and a higher count of relevant and head keyphrases. CAT_3 is a small meta-category with fewer items and lower buyer interaction, leading to fewer keyphrases; creating effective keyphrases for GraphEx there is difficult and necessitates tailored curation. The models with a much smaller total prediction count are RE, RE-trank, SL-query, and Graphite. Table <ref> shows that these models possess a higher RP than GraphEx, which is attributable to their lower number of predictions skewing the proportions. However, excluding RE and Graphite, these models exhibit a significantly smaller HP than GraphEx, and they all have much smaller RRR and RHR (the corresponding columns of the table). Although Graphite has a slightly higher HP for CAT_3, its RRR and RHR are still lower than GraphEx's for all categories. Consequently, these models are unlikely to achieve substantial clicks like GraphEx owing to their fewer head keyphrases.
The models RE and RE-trank are simple retrieval techniques based on recalling the ground truth (item-query combinations with associated buyer activity) with a minimum amount of buyer activity in a short lookback period. The results of RE in Table <ref> are mixed, with a lower HP in CAT_1 but 2.8% and 15.9% higher HP than GraphEx in CAT_2 and CAT_3, respectively. The RRR and RHR of RE are always lower than GraphEx's. Despite their simple nature, both RE and RE-trank are the recommenders closest to the ground truth in terms of actual buyer engagement.
Having covered the first two aspects of comparison, fewer irrelevant and more head keyphrases, we turn to diversity, which determines whether the effective keyphrases generated by a model bring substantial incremental impact. The final comparison is therefore the diversity of GraphEx's predictions. We first separate out, as mentioned earlier, the unique (diverse) head keywords recommended by each model that are relevant to the item. Table <ref> shows the number of diverse keyphrases of each model relative to the diverse keyphrases of GraphEx (averaged per item). GraphEx recommends the largest number of diverse head keyphrases compared to any other model.
§.§ Execution Results
It is important for the models to attain the real-time recommendation and model-refresh goals described in Section <ref>. We compare only the XMC models with GraphEx, as the REs and SLs (except SL-emb) are simple retrieval techniques implemented in the Spark/Hadoop ecosystem, whereas model inference is a more complex technique. We examine the models based on inference latencies, model sizes, and training times.[SL-emb's inference stages are complex: embedding generation occurs on GPU whereas the ANN lookup is done on CPU, so it is difficult to compare its inference latency with the other models.]
For near real-time recommendation, the inference latency for a single input should be in milliseconds. The left panel of Figure <ref> compares the per-input inference latency of the XMC models and GraphEx. The latencies are computed by amortizing the time taken for prediction over the entire test set. All models are within the required limit of 10 ms, but fastText takes the most time per prediction. Graphite's and GraphEx's latencies are comparable for the smaller categories (CAT_2 and CAT_3). GraphEx performs best, attaining speedups of up to 17× over fastText and up to 13× over Graphite on CAT_1. Inferring 20 million items in CAT_1 with GraphEx would save roughly 11 hours and 8.5 hours of compute, and the corresponding energy, relative to fastText and Graphite, respectively.
The right panel of Figure <ref> compares the storage sizes of the above models.
The fastText model requires significantly more storage across all categories because of the extensive weight matrix and word embeddings it maintains, even after reducing the model size during training to preserve precision in production. Graphite occupies substantial space for the large category CAT_1 but has a size comparable to GraphEx for the other categories. GraphEx occupies the least space even though it constructs graphs for multiple leaf categories. The training time of fastText exceeds 4 hours for all categories, with the bigger categories running for days, including multiple epochs and autotuning phases. Graphite has a graph-based construction step that takes around 1-6 minutes, while GraphEx takes less than 1 minute on all categories. This is due to restricting the training data to head keyphrases and an implementation of the construction step that builds and stores the model efficiently.
§.§ Production Engineering Architecture
In this section, we describe the engineering architecture used to serve GraphEx keywords to our sellers for their inventories on one of eBay's major sites. There are two components for recommendation: Batch and Near Real-Time (NRT) inference. Batch inference primarily serves items with a delay, whereas NRT serves items on an urgent basis, such as items newly created or revised by sellers. The batch inference is done in two parts: 1) for all items on eBay, and 2) a daily differential, i.e., all newly created or revised items, which is then merged with the existing items. The NRT inference is done using Python code hosted by eBay's internal ML inference service Darwin. Darwin is called by eBay's recommendation service, triggered by the event of new item creation or revision, behind a Flink processing window and feature enrichment. Note that GraphEx serves as one of the keyword recommendation sources in the whole Batch/NRT framework.
The GraphEx batch inference is done using eBay's machine learning platform Krylov <cit.> and runs on a single node with 70 cores and 900 GB RAM. GraphEx inference is so fast that the time required to run over a space of 200 million items is just 1.5 hours, a large improvement over fastText and Graphite, which take 1.75 days and 1.5 days respectively. Another batch job in Spark joins these sources in Hadoop and injects them into a key-value store (NuKV), which is then called by the eBay platform's inference API and served to eBay's sellers. This architecture can scale to billions of items and hundreds of billions of keywords serving eBay's platform; it is illustrated in Figure <ref>. Due to the nature of its algorithm, the GraphEx model is bounded by the label space on which it trains. However, since GraphEx training is as inexpensive as Graphite's, the model can be trained in a matter of minutes even for very large categories, making it ideal for a daily model refresh. This allows GraphEx to cater to newer keywords that arise every day, a huge improvement over fastText, which takes a day or more to train on these large categories and has a monthly refresh schedule.
§.§ Impact
GraphEx was deployed for the sellers of a particular eBay site to replace Graphite keywords. After its release, a differential pre-post analysis was done to gauge the impact of GraphEx keywords in comparison to the Graphite keywords they replaced. The analysis also measured the impact of all keywords generated by GraphEx over a period of 2 weeks compared to the other sources of recommendations. GraphEx provides 43% more distinct item-keyword associations than Graphite, with the average search volume of its keywords nearing 30× that of RE and 2.5× that of fastText. In terms of performance, GraphEx delivers an incremental lift of 8.3% in total ads revenue and 10.3% in Gross Merchandise volume Bought (GMB), i.e., the total money made by selling the items. In terms of Return on Ads Spend (ROAS), given by ROAS = GMB/Ads Revenue, it is the most successful among cold-start models; its ROAS is beaten only by RE and RE-trank, which are non-cold-start, ground-truth-recalling models, and GraphEx beats them in item coverage (covering more than 3× the items). We cannot disclose any more details due to business and proprietary reasons.
§ CONCLUSION
We introduce a novel graph-based extraction method called GraphEx, tailored for online advertising in the e-commerce sector. GraphEx efficiently solves the permutation problem of extracting tokens from an item's title and mapping them to a set of valid keyphrases, and it is not limited by the vocabulary of the item's title or the order of tokens in it. The method produces more item-relevant keyphrases and also targets the head keyphrases favored by advertisers, ultimately driving more sales. It is currently deployed at eBay, a leading e-commerce platform serving its sellers with billions of items daily. We show that traditional metrics do not provide an accurate comparison among the models and that relying on a single metric is misleading; we therefore use a combination of metrics with AI evaluations to give a better picture of the practical challenges of keyphrase recommendation. We evaluated GraphEx against the production models at eBay, demonstrating superior results across the various metrics. Additionally, GraphEx offers the most profitable cold-start keyphrase recommendations for advertisers with the lowest inference latency in eBay's current system, and it allows daily model refreshes to serve our ever-changing query space.
§ APPENDIX
§.§ AI Evaluation Prompt
Prompts were generated for Mixtral 8X7B to determine whether a keyphrase is relevant to an item based on similarity to the item's title, as described in Section <ref>. The structure of the prompt is shown below. The response is a yes or no answer for each keyphrase, indicating whether it is relevant to the item.
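The exact production prompt is not reproduced here; the following is an illustrative sketch of its structure, with the item title and candidate keyphrases filled into a template and one yes/no judgement requested per keyphrase. The wording of the template is an assumption, not the deployed prompt.

```python
PROMPT_TEMPLATE = """You are judging search keyphrases for an e-commerce listing.
Item title: "{title}"
For each keyphrase below, answer "yes" if a buyer searching it would find this
item relevant, otherwise answer "no". Answer with one word per line, in order.
Keyphrases:
{keyphrases}
"""

def build_eval_prompt(title, keyphrases):
    """Fill the template with an item title and its predicted keyphrases."""
    listed = "\n".join(f"{i + 1}. {kp}" for i, kp in enumerate(keyphrases))
    return PROMPT_TEMPLATE.format(title=title, keyphrases=listed)
```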
§.§ Ablation Studies
§.§.§ Data Curation Effects
A critical component of GraphEx's training is the data curation process. We find that the Search Count defined in Section <ref> is crucial for predicting relevant as well as head keyphrases. A low Search Count threshold of 1 admits many bogus user queries, so a much higher threshold is needed. An ideal threshold keeps keyphrases that are queried at least once daily, which equates to 180 over a span of 6 months. However, as indicated in Table <ref>, this threshold results in a reduced number of unique keyphrases, necessitating a relaxation of the limit.
To understand the influence on recommendations, we evaluated two GraphEx models constructed with Search Count thresholds of 90 and 180, respectively, on a random subset of 1000 items from CAT_1. Approximately 20.1% of the items had identical recommendation sets from both models. Of the remaining 80% of items, 20% had the same relevant keyphrases and 7.2% had the same relevant head keyphrases. For the remaining keyphrases (about 60%), the proportions of relevant and head keyphrases for the Search Count thresholds of 90 and 180 are presented in Table <ref>. The gain in head keyphrases at the 180 threshold exceeds the gain in relevant keyphrases at that threshold when compared with a threshold of 90.
§.§.§ Dissecting Predictive Count Variance
The models under comparison can be divided into two groups: non-cold-start predictors (RE, RE-trank, and SL-query) and cold-start predictors (SL-emb, fastText, Graphite, and GraphEx). Cold-start predictors are those that can recommend keyphrases for a new item that does not yet exist in the universe of items. For these predictors, which are used for new items without historical data, a higher number of predictions is necessary to ensure that enough relevant keyphrases are identified; for new items there is no established ground truth, making it difficult to gauge the accuracy of predictions. The primary predictors for any item are the non-cold-start predictors. The models in Figure <ref> are therefore deduplicated from left to right, as discussed in Section <ref>.
§.§ Interpretability
Applications in the e-commerce domain frequently require that a model be interpretable, as this helps comprehend the rationale behind its predictions and decision process. In our use case, it is essential to trace where the words in the keyphrases come from. Neural-network models typically convert the input text into vectors, which often obscures the contribution of individual tokens to the decisions. Interpretability techniques such as LIME and SHAP offer post-hoc explanations, treating a deep neural network as a black box, but they require considerable effort to determine the contribution of each input feature.
Unlike black-box models, the GraphEx algorithm has three transparent phases: keyphrase curation, keyphrase mapping, and ranking. The data curation process shows how the keyphrases in GraphEx's label set were curated. The keyphrase mapping phase details how candidate keyphrases were mapped from the keyphrases extracted from the item's title to GraphEx's label set. The ranking algorithm, which then ranks the mapped candidates, is transparent as well: it uses the Label Title Alignment (LTA) outlined in Section <ref>, a token-based measure ensuring that the majority of tokens in a keyphrase match the title. This ensures that GraphEx's predicted keyphrases are explainable and interpretable.
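A minimal token-overlap sketch of an LTA-style score is shown below; the authoritative definition of LTA is the one in Section <ref>, and this version simply measures the fraction of keyphrase tokens that appear in the title, which is an assumption made for illustration.

```python
def lta_score(keyphrase, title):
    """Fraction of keyphrase tokens that also occur in the item title."""
    kp_tokens = keyphrase.lower().split()
    title_tokens = set(title.lower().split())
    if not kp_tokens:
        return 0.0
    return sum(tok in title_tokens for tok in kp_tokens) / len(kp_tokens)

# Candidates can then be ranked by this score (higher = better aligned with the title).
ranked = sorted(["running shoes men", "wireless earbuds"],
                key=lambda kp: lta_score(kp, "Men's Running Shoes Size 10"),
                reverse=True)
```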